Patents Examined by Wissam Rashid
  • Patent number: 11567801
    Abstract: Systems and methods scale an instance group of a computing platform by determining whether to scale up or down the instance group by using historical data from prior jobs wherein the historical data includes one or more of: a data set size used in a prior related job and a code version for a prior related job. The systems and methods also scale the instance group up or down based on the determination. In some examples, systems and methods scale an instance group of a computing platform by determining a job dependency tree for a plurality of related jobs, determining runtime data for each of the jobs in the dependency tree and scaling up or down the instance group based on the determined runtime data.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: January 31, 2023
    Assignee: Palantir Technologies Inc.
    Inventors: Ashray Jain, Ryan McNamara, Greg DeArment
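    The scaling decision described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the `PriorJob` fields and the linear scaling heuristic are assumptions for the example.

```python
# Hypothetical sketch: scale an instance group using historical data
# (data set size and code version) from prior related jobs.

from dataclasses import dataclass

@dataclass
class PriorJob:
    data_set_size: int   # bytes processed by a prior related job
    code_version: str    # code version the prior job ran
    instances_used: int  # instance count that completed it on time

def recommend_instance_count(prior_jobs, current_size, current_version):
    """Prefer history from the same code version; scale the closest prior
    job's instance count linearly by the ratio of data set sizes."""
    same_version = [j for j in prior_jobs if j.code_version == current_version]
    candidates = same_version or prior_jobs
    if not candidates:
        return 1  # no history: start with a minimal group
    nearest = min(candidates, key=lambda j: abs(j.data_set_size - current_size))
    scaled = round(nearest.instances_used * current_size / nearest.data_set_size)
    return max(1, scaled)

history = [PriorJob(10_000, "v1", 4), PriorJob(40_000, "v2", 8)]
print(recommend_instance_count(history, 20_000, "v1"))  # 8
```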
  • Patent number: 11567796
    Abstract: As part of a container initialization procedure, a maximum number of hardware threads per processor core in a set of cores of a computer system are enabled, the container initialization procedure configuring an operating system executing on the computer system for container execution and configuring a first container for execution on the operating system. From a set of available cores in the set of cores, an execution core is selected. In the selected execution core, a number of threads per core to be used during execution of the first container is configured, the number of threads per core specified for the container initialization procedure by a first simultaneous multithreading (SMT) parameter. Using the configured execution core, the first container is executed, the executing virtualizing the operating system.
    Type: Grant
    Filed: October 22, 2020
    Date of Patent: January 31, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jeffrey W. Tenner, Joseph W. Cropper
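    The container-initialization flow above can be sketched as three steps. The hardware maximum, the core-selection rule, and the function names are assumptions for illustration.

```python
# Hypothetical sketch: enable the maximum SMT threads per core, select an
# execution core, then configure the thread count the container's SMT
# parameter requests.

MAX_THREADS_PER_CORE = 8  # assumed hardware maximum

def init_container(available_cores, smt_parameter):
    # Step 1: enable the hardware maximum on every core in the set.
    cores = {core: MAX_THREADS_PER_CORE for core in available_cores}
    # Step 2: select an execution core from the available set.
    execution_core = min(available_cores)
    # Step 3: configure the per-core thread count for this container.
    cores[execution_core] = min(smt_parameter, MAX_THREADS_PER_CORE)
    return execution_core, cores[execution_core]

core, threads = init_container({0, 1, 2, 3}, smt_parameter=4)
print(core, threads)  # 0 4
```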
  • Patent number: 11561866
    Abstract: A “backup services container” comprises “backup toolkits,” which include scripts for accessing containerized applications plus enabling utilities/environments for executing the scripts. The backup services container is added to Kubernetes pods comprising containerized applications without changing other pod containers. For maximum value and advantage, the backup services container is “over-equipped” with toolkits. The backup services container selects and applies a suitable backup toolkit to a containerized application to ready it for a pending backup. Interoperability with a proprietary data storage management system provides features that are not possible with third-party backup systems. Some embodiments include one or more components of the proprietary data storage management within the illustrative backup services container. Some embodiments include one or more components of the proprietary data storage management system in a backup services pod configured in a Kubernetes node.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: January 24, 2023
    Assignee: Commvault Systems, Inc.
    Inventors: Amit Mitkar, Sumedh Pramod Degaonkar, Sanjay Kumar, Shankarbabu Bhavanarushi, Vikash Kumar
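    Toolkit selection by an "over-equipped" backup services container might look like the following sketch; the toolkit names and the image-name matching rule are hypothetical.

```python
# Hypothetical sketch: pick the backup toolkit that matches a containerized
# application, falling back when none of the shipped toolkits applies.

TOOLKITS = {
    "postgres": "pg_dump-based quiesce script",
    "mysql": "mysqldump-based quiesce script",
    "mongodb": "fsyncLock-based quiesce script",
}

def select_toolkit(app_image):
    """Match the application image name against the shipped toolkits."""
    for app, toolkit in TOOLKITS.items():
        if app in app_image.lower():
            return toolkit
    return None  # no suitable toolkit: caller takes a crash-consistent backup

print(select_toolkit("registry.example/postgres:14"))
```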
  • Patent number: 11561826
    Abstract: Scheduling work of a machine learning application includes instantiating kernel objects by a computer processor in response to input of kernel definitions. Each kernel object is of a kernel type indicating a compute circuit. The computer processor generates a graph in a memory. Each node represents a task and specifies an assignment of the task to one or more of the kernel objects, and each edge represents a data dependency. Task queues are created in the memory and assigned to queue tasks represented by the nodes. Kernel objects are assigned to the task queues, and the tasks are enqueued by threads executing the kernel objects, based on assignments of the kernel objects to the task queues and assignments of the tasks to the kernel objects. Tasks are dequeued by the threads, and the compute circuits are activated to initiate processing of the dequeued tasks.
    Type: Grant
    Filed: November 12, 2020
    Date of Patent: January 24, 2023
    Assignee: XILINX, INC.
    Inventors: Sumit Nagpal, Abid Karumannil, Vishal Jain, Arun Kumar Patil
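    The graph-and-queue structure above can be sketched as a dependency-ordered dispatch to per-kernel queues. The task names and kernel types are invented for the example; real kernel objects would activate compute circuits rather than append to lists.

```python
# Hypothetical sketch: tasks in a dependency graph are enqueued to the
# kernel objects they are assigned to, in an order that respects edges.

from collections import deque

def schedule(tasks, deps, kernel_of):
    """tasks: task ids; deps: edges (a, b) meaning b depends on a;
    kernel_of: task -> kernel object name. Returns dequeue order per kernel."""
    pending = {t: sum(1 for a, b in deps if b == t) for t in tasks}
    ready = deque(t for t in tasks if pending[t] == 0)
    order = {k: [] for k in set(kernel_of.values())}
    while ready:
        t = ready.popleft()
        order[kernel_of[t]].append(t)  # the kernel's thread processes t
        for a, b in deps:
            if a == t:
                pending[b] -= 1
                if pending[b] == 0:
                    ready.append(b)
    return order

result = schedule(["load", "infer", "post"],
                  [("load", "infer"), ("infer", "post")],
                  {"load": "dma", "infer": "dpu", "post": "cpu"})
print(result)
```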
  • Patent number: 11556384
    Abstract: This disclosure describes techniques for improving allocation of computing resources to computation of machine learning tasks, including on massive computing systems hosting machine learning models. A method includes a computing system, based on a computational metric trend and/or a predicted computational metric of a past task model, allocating a computing resource for computing of a machine learning task by a current task model prior to runtime of the current task model; computing the machine learning task by executing a copy of the current task model; quantifying a computational metric of the copy of the current task model; determining a computational metric trend based on the computational metric; deriving a predicted computational metric of the copy of the current task model based on the computational metric; and, based on the computational metric trend, changing allocation of a computing resource for computing of the machine learning task by the current task model.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: January 17, 2023
    Assignee: Cisco Technology, Inc.
    Inventors: Jerome Henry, Robert Edgar Barton
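    The trend-driven allocation change can be sketched as below; the trend formula, threshold, and step size are assumptions, not the patented method.

```python
# Hypothetical sketch: adjust the resource allocation for a task model from
# the trend of a computational metric measured on a copy of the model.

def metric_trend(samples):
    """Simple trend: difference between the last and first measurements."""
    return samples[-1] - samples[0]

def adjust_allocation(current_alloc, samples, step=1, threshold=0.1):
    trend = metric_trend(samples)
    if trend > threshold:        # rising utilization: allocate more
        return current_alloc + step
    if trend < -threshold:       # falling utilization: release resources
        return max(1, current_alloc - step)
    return current_alloc

print(adjust_allocation(4, [0.50, 0.55, 0.72]))  # 5
```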
  • Patent number: 11537432
    Abstract: Embodiments of the present disclosure are directed to dynamic shadow operations configured to dynamically shadow data-plane resources in a network device. In some embodiments, the dynamic resource shadow operations are used to locally maintain a shadow copy of data plane resources to avoid having to read them through a bus interconnect. In other embodiments, the dynamic shadow framework is used to provide memory protection for hardware resources against SEU failures. The dynamic shadow framework may operate in conjunction with adaptive memory scrubbing operations. In other embodiments, the dynamic shadow infrastructure is used to facilitate fast boot-up and fast upgrade operations.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: December 27, 2022
    Assignee: Cisco Technology, Inc.
    Inventors: Riaz Khan, Peter Geoffrey Jones
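    The core idea of the shadow framework, serving reads locally while keeping hardware in sync on writes, can be sketched as a write-through shadow. The class and callback are hypothetical stand-ins for real data-plane programming.

```python
# Hypothetical sketch: keep a local shadow of data-plane table entries so
# reads do not have to cross the bus interconnect to the hardware.

class ShadowedTable:
    def __init__(self, hardware_write):
        self._shadow = {}                # local RAM copy of the resource
        self._hw_write = hardware_write  # writes still reach hardware

    def write(self, index, entry):
        self._hw_write(index, entry)     # program the hardware resource
        self._shadow[index] = entry      # keep the shadow in sync

    def read(self, index):
        return self._shadow[index]       # served locally, no bus read

hw = {}
table = ShadowedTable(lambda i, e: hw.__setitem__(i, e))
table.write(7, "route-a")
print(table.read(7), hw[7])  # route-a route-a
```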
  • Patent number: 11526384
    Abstract: A computer-implemented method for balancing workload among one or more locations is disclosed. The method may comprise: receiving data associated with a workload forecast for a first location and a second location, the data comprising a number of orders expected to be received for the first and second locations for a predetermined period of time; determining a first set of ratios of workload forecast for the locations relative to a first sum of the workload forecast for the first and second locations, the first set of ratios comprising at least a first forecast ratio for the first location and a second forecast ratio for the second location; receiving electronic orders for the predetermined period of time, the electronic orders comprising one or more groups of items and being assigned to one of the locations; and reassigning a first subset of electronic orders for the first location to the second location.
    Type: Grant
    Filed: April 17, 2020
    Date of Patent: December 13, 2022
    Assignee: Coupang Corp.
    Inventors: Hyun Sik Eugene Minh, Jin Kwang Kim, Hyunjun Park, Christopher Carlson
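    The forecast-ratio reassignment can be sketched as follows; the rounding rule and the choice of which orders to move are assumptions for the example.

```python
# Hypothetical sketch: compute forecast ratios for two locations and
# reassign a subset of orders so the split matches the ratios.

def forecast_ratios(forecast_a, forecast_b):
    total = forecast_a + forecast_b
    return forecast_a / total, forecast_b / total

def reassign(orders_a, orders_b, forecast_a, forecast_b):
    ratio_a, _ = forecast_ratios(forecast_a, forecast_b)
    total = len(orders_a) + len(orders_b)
    target_a = round(total * ratio_a)
    moved = []
    while len(orders_a) > target_a:   # move the overflow to location B
        moved.append(orders_a.pop())
    orders_b.extend(moved)
    return orders_a, orders_b

a, b = reassign(list(range(8)), list(range(2)), forecast_a=600, forecast_b=400)
print(len(a), len(b))  # 6 4
```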
  • Patent number: 11520621
    Abstract: An embodiment may involve server devices arranged into pods, each server device hosting computational instances, and a central computational instance configured to: (i) obtain per-pod lists of the instances hosted by the pods; (ii) determine a maximum number of the instances to arrange into batches; (iii) determine a group size for groups of the instances that are to be placed into the batches; (iv) execute a first phase that involves removing per-pod groups from the per-pod lists and adding them to the batches, until fewer instances than the group size remain in each of the per-pod lists; (v) execute a second phase that involves removing one of the instances from the per-pod lists and adding it to the batches, until no instances remain in any of the per-pod lists; and (vi) schedule one or more automations to take place in the data center.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: December 6, 2022
    Assignee: ServiceNow, Inc.
    Inventors: Khashayar Goudarzi, Wenhui Li, Sharath Vaddempudi, Kavish Jain, Shaoying Zou, Yerjan Khurmyetbyek, Swathi Pattapu
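    The two-phase batching can be sketched directly. The batch size and group size are the abstract's two tuning knobs; the pod contents here are invented.

```python
# Hypothetical sketch: phase 1 removes whole groups from each pod's list,
# phase 2 drains the remainders one instance at a time.

def build_batches(per_pod_lists, batch_size, group_size):
    lists = [list(l) for l in per_pod_lists]
    batches, current = [], []

    def add(items):
        nonlocal current
        current.extend(items)
        while len(current) >= batch_size:
            batches.append(current[:batch_size])
            current = current[batch_size:]

    # Phase 1: take full groups while any pod still holds >= group_size.
    for pod in lists:
        while len(pod) >= group_size:
            add([pod.pop(0) for _ in range(group_size)])
    # Phase 2: drain leftovers one instance at a time.
    for pod in lists:
        while pod:
            add([pod.pop(0)])
    if current:
        batches.append(current)
    return batches

print(build_batches([["a1", "a2", "a3"], ["b1", "b2"]],
                    batch_size=2, group_size=2))
```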
  • Patent number: 11513922
    Abstract: Aspects of the present disclosure enable data protection operations including differential and incremental backups by performing changed-block tracking in network or cloud computing systems with architectures that do not natively support changed-block tracking or do not expose changed-block tracking functionality to an information management system. In certain aspects, an identity of changed blocks may be obtained by using a hypervisor configured to interface with the cloud computing architecture. The identified changed blocks may be used to generate a map of the changed blocks. The maps of the changed blocks can be used by a virtual server agent to extract the changed blocks from a copy of a virtual machine disk and back them up to perform a differential or incremental backup.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: November 29, 2022
    Assignee: Commvault Systems, Inc.
    Inventors: Sanjay Kumar, Sumedh Pramod Degaonkar
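    A changed-block map and the incremental backup built from it can be sketched as below; comparing block hashes is one assumed way to identify changed blocks, not necessarily the hypervisor interface the patent describes.

```python
# Hypothetical sketch: derive a changed-block map by comparing block hashes
# of two virtual-disk snapshots, then extract only the changed blocks.

import hashlib

BLOCK_SIZE = 4  # tiny block size for illustration

def block_hashes(disk):
    return [hashlib.sha256(disk[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(disk), BLOCK_SIZE)]

def changed_block_map(old_disk, new_disk):
    old_h, new_h = block_hashes(old_disk), block_hashes(new_disk)
    return [i for i, (a, b) in enumerate(zip(old_h, new_h)) if a != b]

def incremental_backup(old_disk, new_disk):
    cbm = changed_block_map(old_disk, new_disk)
    return {i: new_disk[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE] for i in cbm}

old = b"AAAABBBBCCCC"
new = b"AAAAXXXXCCCC"
print(incremental_backup(old, new))  # {1: b'XXXX'}
```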
  • Patent number: 11513845
    Abstract: Systems, apparatuses, and methods are disclosed for scheduling threads comprising code blocks in a graph streaming processor (GSP) system. One system includes a scheduler for scheduling a plurality of prefetch threads, main threads, and invalidate threads. The prefetch threads prefetch data from main memory required for execution of the main threads of the next stage. The main threads each include a set of instructions operating on the graph streaming processors of the GSP system. The invalidate threads invalidate data locations consumed by the main threads of the previous stage. A portion of the scheduler is implemented in hardware.
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: November 29, 2022
    Assignee: Blaize, Inc.
    Inventor: Satyaki Koneru
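    The staging of the three thread kinds can be illustrated with a sequential simulation; the stage layout and the cache dictionary are invented for the example, and a real GSP would run these threads concurrently.

```python
# Hypothetical sketch: per stage, prefetch the next stage's inputs, run the
# main work of the current stage, and invalidate the previous stage's data.

def run_stages(stages):
    cache, log = {}, []
    for i, stage in enumerate(stages):
        if i + 1 < len(stages):               # prefetch thread: next stage
            for addr in stages[i + 1]["inputs"]:
                cache[addr] = f"data@{addr}"
                log.append(f"prefetch {addr}")
        for addr in stage["inputs"]:          # main threads: current stage
            log.append(f"compute {addr}")
        if i > 0:                             # invalidate: previous stage
            for addr in stages[i - 1]["inputs"]:
                cache.pop(addr, None)
                log.append(f"invalidate {addr}")
    return log

print(run_stages([{"inputs": ["a"]}, {"inputs": ["b"]}]))
```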
  • Patent number: 11494217
    Abstract: This disclosure describes systems, devices, and techniques for migrating virtualized resources from outdated hosts during requested reboots of the virtualized resources, in order to update the outdated hosts. In an example method, a pending reboot of a virtualized resource occupying a first host can be identified. At least one component of the first host may be determined to be outdated. In response to identifying the pending reboot and determining that the at least one component is outdated, the virtualized resource may be migrated to a second host. The first host may update the at least one component.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: November 8, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Nikolay Krasilnikov, Alexey Gadalin, Rudresh Amin, John Edsel Santos
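    The reboot-triggered migration can be sketched as a simple decision; the host dictionaries and version fields are hypothetical.

```python
# Hypothetical sketch: when a reboot of a virtualized resource is pending
# and its host is outdated, migrate first so the host can be updated.

def handle_pending_reboot(vm, first_host, second_host):
    if first_host["component_version"] < first_host["latest_version"]:
        # Host is outdated: move the VM, then let the host update itself.
        first_host["vms"].remove(vm)
        second_host["vms"].append(vm)
        first_host["component_version"] = first_host["latest_version"]
    return vm in second_host["vms"]

h1 = {"vms": ["vm-1"], "component_version": 3, "latest_version": 5}
h2 = {"vms": [], "component_version": 5, "latest_version": 5}
print(handle_pending_reboot("vm-1", h1, h2))  # True
```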
  • Patent number: 11487574
    Abstract: This disclosure generally relates to enabling a hypervisor of a host machine to provide virtual interrupts to select virtual processors or a set of virtual processors. More specifically, the present disclosure describes how interrupts may be provided to targeted virtual processors, regardless of where the virtual processors are currently executing. That is, when an interrupt is received, the interrupt may be delivered to a specified virtual processor regardless of which logical processor is currently hosting the virtual processor.
    Type: Grant
    Filed: January 19, 2018
    Date of Patent: November 1, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Aditya Bhandari, Bruce J. Sherwin, Jr., Xin David Zhang
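    The targeted-delivery idea, routing an interrupt to a virtual processor wherever it currently runs, can be sketched as a lookup; the hosting map and pending-queue shapes are assumptions.

```python
# Hypothetical sketch: deliver an interrupt to a target virtual processor by
# looking up which logical processor currently hosts it.

def deliver_interrupt(vector, target_vcpu, hosting_map, pending):
    """hosting_map: vcpu -> logical processor currently running it."""
    lp = hosting_map[target_vcpu]          # wherever the vCPU is right now
    pending.setdefault(lp, []).append((target_vcpu, vector))
    return lp

hosting = {"vcpu0": "lp2", "vcpu1": "lp0"}
queues = {}
print(deliver_interrupt(0x30, "vcpu1", hosting, queues))  # lp0
```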
  • Patent number: 11487589
    Abstract: Systems and methods are provided for implementing a self-adaptive batch dataset partitioning control process which is utilized in conjunction with a distributed deep learning model training process to optimize load balancing among a set of accelerator resources. An iterative batch size tuning process is configured to determine an optimal job partition ratio for partitioning mini-batch datasets into sub-batch datasets for processing by a set of hybrid accelerator resources, wherein the sub-batch datasets are partitioned into optimal batch sizes for processing by respective accelerator resources to minimize a time for completing the deep learning model training process.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: November 1, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Wei Cui, Sanping Li, Kun Wang
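    The load-balancing goal of the partition ratio, all accelerators finishing at roughly the same time, can be sketched by splitting a mini-batch in proportion to throughput. The throughput numbers are invented; the patent's iterative tuning loop is not reproduced here.

```python
# Hypothetical sketch: partition a mini-batch across hybrid accelerators in
# proportion to measured throughput, so all finish at about the same time.

def partition_batch(batch_size, throughputs):
    """throughputs: samples/sec per accelerator. Returns sub-batch sizes."""
    total = sum(throughputs)
    sizes = [int(batch_size * t / total) for t in throughputs]
    # Give any rounding remainder to the fastest accelerator.
    sizes[throughputs.index(max(throughputs))] += batch_size - sum(sizes)
    return sizes

# A fast GPU, a slower GPU, and a CPU sharing a 256-sample mini-batch.
print(partition_batch(256, [300.0, 150.0, 50.0]))  # [155, 76, 25]
```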
  • Patent number: 11481237
    Abstract: A system for executing a software program comprises: a display device for displaying a web based GUI of the software program; and a hardware processor adapted for executing in a web browser a code comprising: executing, in a worker thread that is not a primary thread executing code implementing the web based GUI, a client instruction identified in the primary thread for background processing; while the worker thread executes: displaying in a graphical object of the web based GUI data retrieved from a data structure associated with an outcome of executing the client instruction, where the data structure contains temporary data; and modifying another graphical object of the web based GUI in response to a user instruction received by a user selecting a selectable object of the web based GUI; and modifying the graphical object of the web based GUI when the contents of the data structure are modified.
    Type: Grant
    Filed: December 30, 2021
    Date of Patent: October 25, 2022
    Assignee: monday.com Ltd.
    Inventors: Orr Gottlieb, Moshe Zemah, Omer Doron
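    The worker-thread pattern, background work proceeding while the primary thread stays responsive, can be illustrated with Python threading standing in for a browser worker; the instruction string and shared structure are invented.

```python
# Hypothetical sketch (Python threading standing in for a browser worker
# thread): a background worker fills a shared data structure while the
# primary thread keeps handling user input, then the display refreshes.

import threading
import time

shared = {"result": None}   # temporary data tied to the instruction's outcome

def worker(instruction):
    time.sleep(0.01)        # simulated background processing
    shared["result"] = f"done: {instruction}"

t = threading.Thread(target=worker, args=("fetch board",))
t.start()
print("primary thread still responsive")  # GUI keeps taking user input
t.join()
print(shared["result"])  # done: fetch board
```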
  • Patent number: 11474860
    Abstract: Systems, methods, and other embodiments associated with branch prediction in workflows are described. In one embodiment, a method includes inputting a workflow and serially progressing through the workflow in a flow sequence and in response to the flow sequence encountering a first decision element in the workflow that includes a plurality of branch paths: (i) executing a prediction that predicts a resulting path of the first decision element to predict a first user interface from a plurality of user interfaces that may be encountered subsequently in the flow sequence as part of a first terminal element; and (ii) pre-building the first user interface that is predicted prior to encountering the first terminal element. In response to the flow sequence reaching the first terminal element, displaying the pre-built first user interface on a display device.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: October 18, 2022
    Assignee: Oracle International Corporation
    Inventors: Terrence A. Moltzan, Zachary M. Connelly, Jens O. Lundell, Aaron M. Schubert
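    Predict-then-prebuild can be sketched with a frequency-based predictor; the branch names, the predictor, and the UI builder are all hypothetical.

```python
# Hypothetical sketch: predict the branch a decision element will take and
# pre-build that terminal element's user interface before it is reached.

def predict_branch(history):
    """Predict the most frequently taken branch path."""
    return max(set(history), key=history.count)

def run_workflow(decision_history, build_ui):
    predicted = predict_branch(decision_history)
    prebuilt = {predicted: build_ui(predicted)}   # built ahead of time
    taken = decision_history[-1]                  # the path actually taken
    ui = prebuilt.get(taken) or build_ui(taken)   # on a hit, display instantly
    return taken, ui, taken in prebuilt

taken, ui, hit = run_workflow(["approve", "approve", "reject", "approve"],
                              lambda p: f"<ui for {p}>")
print(taken, hit)  # approve True
```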
  • Patent number: 11474831
    Abstract: The present disclosure relates to application startup control methods and control devices. One example method includes receiving information that is sent by a first application and that is used to trigger startup of a second application, determining, based on at least one of the information of the first application and a currently available resource amount of a system, whether to restrict the startup of the second application, where the information of the first application is used to indicate an importance degree of the first application in the system, and restricting the startup of the second application in response to determining to restrict the startup of the second application.
    Type: Grant
    Filed: April 18, 2019
    Date of Patent: October 18, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Huifeng Hu, Jiechun Li, Xiaodong Su
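    The startup-restriction decision can be sketched as a check on the first application's importance and the system's available resources; the thresholds and units are invented.

```python
# Hypothetical sketch: restrict startup of a second application based on
# the first application's importance and the currently available resources.

def should_restrict(first_app_importance, available_memory_mb,
                    importance_threshold=5, memory_threshold_mb=512):
    # A low-importance caller or a resource-starved system restricts startup.
    if first_app_importance < importance_threshold:
        return True
    if available_memory_mb < memory_threshold_mb:
        return True
    return False

print(should_restrict(first_app_importance=8, available_memory_mb=256))  # True
```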
  • Patent number: 11467883
    Abstract: A system and method of reserving resources in a compute environment are disclosed. The method embodiment comprises receiving a request for resources within a computer environment, determining at least one completion time associated with at least one resource type required by the request, and reserving resources within the computer environment based on the determination of at least the completion time. A scaled wall clock time on a per resource basis may also be used to determine what resources to reserve. The system may determine whether to perform a start time analysis or a completion time analysis or a hybrid analysis in the process of generating a co-allocation map between a first type of resource and a second type of resource in preparation for reserving resources according to the generated co-allocation map.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: October 11, 2022
    Assignee: III Holdings 12, LLC
    Inventor: David Brian Jackson
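    A completion-time analysis with a scaled wall-clock time can be sketched as follows; the resource types, rates, and the 1.5x scale factor are assumptions for the example.

```python
# Hypothetical sketch of a completion-time analysis: compute when each
# resource type would complete, then reserve through the latest of them so
# both resource types are co-allocated.

def completion_time(start, amount, rate, scale=1.0):
    """Scaled wall-clock time on a per-resource basis."""
    return start + (amount / rate) * scale

def co_allocation(start, cpu_req, cpu_rate, disk_req, disk_rate):
    cpu_done = completion_time(start, cpu_req, cpu_rate)
    disk_done = completion_time(start, disk_req, disk_rate, scale=1.5)
    # The reservation must cover the slower resource type.
    return max(cpu_done, disk_done)

print(co_allocation(start=0.0, cpu_req=100, cpu_rate=10,
                    disk_req=60, disk_rate=6))  # 15.0
```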
  • Patent number: 11461122
    Abstract: A database can be instantly cloned from a source device to a target device by a cluster mapped to a database to be cloned. Nodes of the cluster are mapped over channels to directories of the database. Scripts are generated from one or more templates that specify the order and values to be executed to perform a database job, such as cloning the database to the target device using the mappings. To clone the database, a template can be executed that generates and populates scripts, which can be executed on the target device to provide a functioning cloned database using the mapped cluster.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: October 4, 2022
    Assignee: Rubrik, Inc.
    Inventors: Snehal Khandkar, Udbhav Prasad, Ganesh Karuppur Rajagopalan, Yongbing Eric Guo
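    Template-driven script generation over the node/channel/directory mappings can be sketched as below; the template text and command names are entirely hypothetical, not Rubrik's actual scripts.

```python
# Hypothetical sketch: generate clone scripts from a template by filling in
# the node-to-directory channel mappings of the mapped cluster.

from string import Template

SCRIPT_TEMPLATE = Template(
    "mount $channel $node:$directory\nrestore --source $source --to $node\n")

def generate_clone_scripts(source_db, mappings):
    """mappings: list of (node, channel, directory) for the cluster."""
    return [SCRIPT_TEMPLATE.substitute(source=source_db, node=n,
                                       channel=c, directory=d)
            for n, c, d in mappings]

scripts = generate_clone_scripts(
    "prod-db", [("node1", "ch0", "/data/ora1"), ("node2", "ch1", "/data/ora2")])
print(scripts[0])
```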
  • Patent number: 11449364
    Abstract: A multicore processor is provided. In order to select one of the multiple cores in such a multicore processor, an execution time of tasks which are performed multiple times is determined. Based on the determined execution time on the individual cores, an appropriate core for further executions of a task is selected. The present disclosure further provides a code generator and code generating method for providing appropriate machine code for the multicore processor.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: September 20, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Mikhail Petrovich Levin, Alexander Nikolaevich Filippov, Youliang Yan
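    The core-selection rule can be sketched as picking the core with the best observed average; the timing data is invented and averaging is an assumed aggregation.

```python
# Hypothetical sketch: time a repeated task on each core, then pin further
# executions of that task to the core where it ran fastest on average.

def select_core(timings):
    """timings: {core_id: [observed execution times of the task]}."""
    averages = {core: sum(ts) / len(ts) for core, ts in timings.items()}
    return min(averages, key=averages.get)

observed = {0: [4.1, 3.9], 1: [2.2, 2.4], 2: [5.0, 4.8]}
print(select_core(observed))  # 1
```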
  • Patent number: 11436044
    Abstract: A system and method for asynchronous, inter-run coordination, including: executing a set of runs; writing the run outputs to a shared coordination storage (e.g., a channel), and optionally determining whether a run should be suspended or restarted. When the run should be suspended, the run is suspended and an identifier for the run is stored in a run queue associated with the coordination storage. When a run should be restarted, the run is identified using the run identifier from the run queue and restarted using a value from the coordination storage.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: September 6, 2022
    Assignee: Precise.ly, Inc.
    Inventor: Aneil Mallavarapu
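    The channel-with-run-queue mechanism can be sketched as follows; the `Channel` class and suspend/restart return values are illustrative assumptions, not the patented design.

```python
# Hypothetical sketch: runs write to a shared channel; a run that reads an
# empty channel is suspended into the channel's run queue, and a later
# write restarts the oldest suspended run with the written value.

from collections import deque

class Channel:
    def __init__(self):
        self.values = deque()     # shared coordination storage
        self.run_queue = deque()  # identifiers of suspended runs

    def write(self, value):
        self.values.append(value)
        if self.run_queue:        # restart the oldest suspended run
            run_id = self.run_queue.popleft()
            return run_id, self.values.popleft()
        return None

    def read(self, run_id):
        if self.values:
            return "resumed", self.values.popleft()
        self.run_queue.append(run_id)  # suspend: remember whom to restart
        return "suspended", None

ch = Channel()
print(ch.read("run-A"))       # ('suspended', None)
print(ch.write("result-1"))   # ('run-A', 'result-1')
```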