Priority Scheduling Patents (Class 718/103)
  • Patent number: 11573831
    Abstract: Embodiments for optimizing resource usage in a distributed computing environment. Resource usage of each task in a set of running tasks associated with a job is monitored to collect resource usage information corresponding to each respective task. A resource unit size of at least one resource allocated to respective tasks in the set of running tasks is adjusted based on the resource usage information to improve overall resource usage in the distributed computing environment.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: February 7, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Xiao Jie Li, Zhimin Lin, Jinming Lv, Guang Han Sui, Hao Zhou
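    A minimal Python sketch of the usage-driven resizing described above, assuming an in-memory model of tasks; the Task fields, the 20% headroom, and the 256 MB step are illustrative assumptions, not details from the patent.

      from dataclasses import dataclass

      @dataclass
      class Task:
          name: str
          allocated_mb: int       # current resource unit size (memory used for illustration)
          observed_peak_mb: int   # collected by the monitoring step

      def adjust_resource_units(tasks, headroom=1.2, step_mb=256):
          # Grow or shrink each task's allocation toward its observed usage plus headroom.
          for t in tasks:
              target = int(t.observed_peak_mb * headroom)
              if t.allocated_mb - target >= step_mb:      # over-provisioned: shrink
                  t.allocated_mb -= step_mb
              elif target - t.allocated_mb >= step_mb:    # under-provisioned: grow
                  t.allocated_mb += step_mb
          return tasks

      tasks = [Task("map-1", 2048, 900), Task("reduce-1", 1024, 1400)]
      for t in adjust_resource_units(tasks):
          print(t.name, t.allocated_mb)   # map-1 1792, reduce-1 1280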
  • Patent number: 11567899
    Abstract: Example distributed storage systems, delete managers, and methods provide for managing dependent delete operations among data stores. Dependent data operation entries and corresponding dependency sets may be identified in an operations log. Dependent data operations may be identified in each shard and data operation entries. A delete process for the data objects in the dependency set may be delayed until the delete process for the dependent data object completes.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: January 31, 2023
    Assignee: Western Digital Technologies, Inc.
    Inventors: Frederik De Schrijver, Thomas Demoor, Carl D'Halluin
  • Patent number: 11570235
    Abstract: A method comprising discovering workload attributes and identifying dependencies, receiving utilization performance measurements including memory utilization measurements of at least a subset of workloads, grouping workloads based on the workload attributes, the dependencies, and the utilization performance measurements into affinity groups, determining at least one representative synthetic workload for each affinity group, each representative synthetic workload including a time slice of a predetermined period of time when there are maximum performance values for any number of utilization performance measurements among virtual machines of that particular affinity group, determining at least one cloud service provider (CSP)'s cloud services based on performance of the representative synthetic workloads, and generating a report for at least one of the representative synthetic workloads, the report identifying the at least one of the representative synthetic workloads and the at least one CSP's cloud services inclu
    Type: Grant
    Filed: August 15, 2021
    Date of Patent: January 31, 2023
    Assignee: Virtual Instruments Worldwide, Inc.
    Inventors: Rick Haggart, Rangaswamy Jagannathan, Michael Bello, Ricardo A. Negrete, Elizaveta Tavastcherna, Vitoo Suwannakinthorn
  • Patent number: 11558340
    Abstract: Systems and methods for providing an online platform that enables an organization to provide information to interested individuals are described. The organization requests individuals to contact elected officials to express support, rejections or comments for specific issues. The online platform determines an advocate's elected official(s) and facilitates a communication connection between the advocate and an elected official(s). Geocoding is performed using the individual's street address and zip code to obtain geographical coordinates, and the coordinates are geomatched to district matching databases to determine the individual's elected officials. The individual selects a preferred method of connecting, and the platform enables and facilitates the connection.
    Type: Grant
    Filed: July 1, 2021
    Date of Patent: January 17, 2023
    Assignee: Phone2Action, Inc.
    Inventors: Patrick Stoddart, Jebidiah Ory, Ximena Hartsock
  • Patent number: 11550383
    Abstract: One example method includes performing, in an edge device that includes a power source, operations including monitoring a running process and obtaining, based on the monitoring, power consumption information associated with the running process, adjusting, based on the power consumption information, a priority of the running process, and providing, to an entity, the power consumption information and/or information concerning the priority of the running process.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: January 10, 2023
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: James R. King, Amy Seibel
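    A hedged sketch of the priority-adjustment idea, mapping a measured power draw to a POSIX niceness change; the wattage thresholds and the use of os.setpriority are assumptions for illustration (and raising priority below 0 normally requires elevated privileges).

      import os

      def nice_delta_from_power(watts, high=5.0, low=1.0):
          # Heavy consumers get a higher nice value (lower priority); light ones get a boost.
          if watts >= high:
              return 10
          if watts <= low:
              return -5
          return 0

      def adjust_process_priority(pid, watts):
          current = os.getpriority(os.PRIO_PROCESS, pid)
          new = max(-20, min(19, current + nice_delta_from_power(watts)))
          os.setpriority(os.PRIO_PROCESS, pid, new)
          return new

      # Example (own process; a measured draw of 6.2 W would lower its priority):
      # adjust_process_priority(os.getpid(), 6.2)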
  • Patent number: 11550600
    Abstract: Embodiments are generally directed to a system and method for adapting an executable object to a processing unit. An embodiment of a method to adapt an executable object from a first processing unit to a second processing unit comprises: adapting the executable object, optimized for the first processing unit of a first architecture, to the second processing unit of a second architecture, wherein the second architecture is different from the first architecture, wherein the executable object is adapted to perform on the second processing unit based on a plurality of performance metrics collected while the executable object is performed on the first processing unit and the second processing unit.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: January 10, 2023
    Assignee: INTEL CORPORATION
    Inventors: Li Xu, Haihao Xiang, Feng Chen, Travis Schluessler, Yuheng Zhang, Sen Lin
  • Patent number: 11550626
    Abstract: An information processing apparatus includes a memory and a processor coupled to the memory and configured to generate one or more job groups by grouping multiple jobs of execution targets in descending order of priority, and perform a control for scheduling execution timings regarding the multiple jobs such that scheduling of respective jobs included in a specific job group including a job having a higher priority is implemented by priority over scheduling of respective jobs included in other job groups. The processor performs the control for scheduling the execution timings of the respective jobs included in the specific job group such that an execution completion time of all the jobs included in the specific job group satisfies a predetermined condition.
    Type: Grant
    Filed: October 19, 2020
    Date of Patent: January 10, 2023
    Assignee: FUJITSU LIMITED
    Inventors: Ryuichi Sekizawa, Shigeto Suzuki
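    A small Python sketch of the grouping-then-scheduling idea above, assuming numeric priorities (higher is more urgent) and a deadline used only as a tie-breaker inside a group; both conventions are illustrative, not the patent's predetermined condition.

      def group_jobs(jobs, group_size):
          # Group jobs in descending order of priority.
          ordered = sorted(jobs, key=lambda j: j["priority"], reverse=True)
          return [ordered[i:i + group_size] for i in range(0, len(ordered), group_size)]

      def schedule(groups):
          # The highest-priority group is scheduled in full before any later group.
          order = []
          for group in groups:
              for job in sorted(group, key=lambda j: j["deadline"]):
                  order.append(job["name"])
          return order

      jobs = [
          {"name": "a", "priority": 9, "deadline": 20},
          {"name": "b", "priority": 9, "deadline": 5},
          {"name": "c", "priority": 1, "deadline": 1},
      ]
      print(schedule(group_jobs(jobs, group_size=2)))   # ['b', 'a', 'c']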
  • Patent number: 11526504
    Abstract: An improved data intake and query system that can perform and display ingest-time and search-time field extraction, redaction, copy, and/or categorization is described herein. As described herein, ingest-time field extraction, redaction, copy, and/or categorization may refer to field or field value extraction, redaction, copy, and/or categorization that is performed by a log observer system of the data intake and query system on raw machine data as the raw machine data is ingested or received from a publisher. As described herein, search-time field extraction, redaction, copy, and/or categorization may refer to field or field value extraction, redaction, copy, and/or categorization that is performed by the log observer system and/or other components of the improved data intake and query system on historical raw machine data that has already been ingested and indexed by the improved data intake and query system.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: December 13, 2022
    Assignee: Splunk Inc.
    Inventors: Amin Moshgabadi, Baibhav Gautam, Hema Krishnamurthy Mohan, Joshua Vertes
  • Patent number: 11507419
    Abstract: A task scheduling method comprises the steps of: in response to the reception of a request for processing a plurality of task sets, creating a current to-be-scheduled task queue in a task processing system based on priorities of the plurality of task sets and tasks in the plurality of task sets, where a plurality of to-be-scheduled tasks in the current to-be-scheduled task queue are scheduled in the same round of scheduling; allocating computing resources used for scheduling the plurality of to-be-scheduled tasks; and enabling the plurality of to-be-scheduled tasks in the current to-be-scheduled task queue to be scheduled by using the computing resources. In this manner, a plurality of tasks with different priorities and quotas can be scheduled according to SLA levels of users, and the efficiency and flexibility of parallel services of cloud computing deep learning models are improved by using a run-time load-balancing scheduling solution.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: November 22, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Jin Li, Jinpeng Liu, Wuichak Wong
  • Patent number: 11510066
    Abstract: The present technology allows coordination of channels of private wireless networks utilizing shared licensed and unlicensed spectrum. Wireless network operators in an enterprise location register to participate in a consortium and register licensed, shared, and unlicensed spectrum resources to be shared with other members of the consortium. The wireless network operators request an allocation of spectrum resources from the consortium. The consortium generates a radio resource management (“RRM”) plan for shared use of the licensed, shared, and unlicensed spectrum resources. The consortium combines the allocated licensed, shared, and unlicensed spectrum from each of the wireless network operators to meet the target RRM plan. The consortium monitors spectrum utilization to dynamically update the RRM. The consortium monitors spectrum utilization in real time to determine how closely the RRM plan matches the resources allocated to each wireless network operator.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: November 22, 2022
    Assignee: Cisco Technology, Inc.
    Inventors: Malcolm Smith, Jerome Henry, John Martin Graybeal, Vishal Satyendra Desai
  • Patent number: 11507289
    Abstract: A storage device includes a semiconductor memory device including memory blocks, planes which include the memory blocks and memory dies in which the planes are included; and a controller configured to store user data and metadata determined based on a command received from a host, in super memory blocks each including some of the memory blocks. The controller includes a segment queuing circuit configured to queue segments of the user data or the metadata to N (N is a natural number) virtual die queues according to a striping scheme; and a segment storage circuit configured to store the queued segments of the user data or the metadata in a super memory block among the super memory blocks, wherein the queued segments of the user data or the metadata are stored in the memory blocks included in the super memory block, according to a striping scheme.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: November 22, 2022
    Assignee: SK hynix Inc.
    Inventors: Dong-Ham Yim, Young-Guen Choi
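    A minimal sketch of striping queued segments across N virtual die queues; round-robin placement is one common striping scheme and is an assumption here, not necessarily the patent's.

      from collections import deque

      def stripe_segments(segments, n_queues):
          # Place consecutive segments on different virtual die queues (round-robin striping).
          queues = [deque() for _ in range(n_queues)]
          for i, seg in enumerate(segments):
              queues[i % n_queues].append(seg)
          return queues

      for idx, q in enumerate(stripe_segments([f"seg{i}" for i in range(10)], n_queues=4)):
          print(f"die-queue {idx}: {list(q)}")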
  • Patent number: 11500700
    Abstract: A method, system, and computer program product for implementing indexes in a dispersed storage network (dsNet) are provided. The method accesses a work queue containing a set of work items as a set of key-value pairs. The key-value pairs are tuples including a work identifier and a work lease timestamp. The method selects a first work identifier and a first lease timestamp for a new work item. The set of work items and the new work item are ordered according to a priority scheme to generate a modified work queue. Based on the modified work queue, the method transmits a work request to a plurality of data source units. The work request includes a hash parameter and a bit parameter. The hash parameter is associated with a key-value pair of the modified work queue. The bit parameter indicates a number of bits of the hash parameter to consider.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: November 15, 2022
    Assignee: International Business Machines Corporation
    Inventors: Thomas Dubucq, Gregory R. Dhuse
  • Patent number: 11487724
    Abstract: The embodiments provide a system and method for continuously updating a target repository to include both the latest data and corrected historical data. The system includes a data manager and at least two staging repositories. Each time the system retrieves the latest data from the source repository, it also retrieves a portion of historical data. Both the latest data and the historical data are transformed, and the historical portion of the transformed data is compared with corresponding data from the target repository to determine if there are any defects in the data from the target repository. The target repository is automatically updated if a defect is detected.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: November 1, 2022
    Assignee: United Services Automobile Association (USAA)
    Inventors: Eric Martin Gertonson, Robert Hugh Newman, II, Christopher John Hohimer, Jonathan Allen Meadows, Brett Justin Moan, Eric Overstreet
  • Patent number: 11489786
    Abstract: A quality of service (QoS) management system and guarantee are presented. The QoS management system can be used for end-to-end data. More specifically, and without limitation, the invention relates to the management of traffic and priorities in a queue and to grouping transactions in a queue, providing solutions to queue starvation and transmission latency.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: November 1, 2022
    Assignee: ARTERIS, INC.
    Inventors: Michael Frank, Mohammed Khaleeluddin
  • Patent number: 11487562
    Abstract: A network-based virtual computing resource provider may offer virtual compute instances that implement rolling resource credits for scheduling virtual computing resources. Work requests for a virtual compute instance may be received at a virtualization manager. A resource credit balance may be determined for the virtual compute instance. The resource credit balance may accumulate resource credits in rolling fashion, carrying over unused credits from previous time periods. Resource credits may then be applied when generating scheduling instructions to provide to a physical resource to perform the work requests, such as a physical CPU in order to increase the utilization of the resource according to the number of credits applied. Applied resource credits may then be deducted from the credit balance.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: November 1, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: John Merrill Phillips, William John Earl, Deepak Singh
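    A toy model of a rolling credit balance in the spirit of the abstract: credits accrue each period, unused credits carry over up to a cap, and work requests spend them; the accrual rate and cap values are placeholders, not values from the patent.

      class CreditBalance:
          def __init__(self, earn_per_period=6, cap=144):
              self.earn = earn_per_period
              self.cap = cap
              self.balance = 0.0

          def tick(self):
              # Called once per accounting period; unused credits roll over up to the cap.
              self.balance = min(self.cap, self.balance + self.earn)

          def spend(self, requested):
              # Returns how many credits are actually granted toward scheduling work.
              granted = min(requested, self.balance)
              self.balance -= granted
              return granted

      b = CreditBalance()
      for _ in range(10):
          b.tick()            # instance sits mostly idle for 10 periods
      print(b.spend(40))      # 40 -- a burst fully covered by accumulated credits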
  • Patent number: 11481245
    Abstract: A system for determining a dependency task tree includes an interface and a processor. The interface is configured to receive a task list. The task list is associated with compiling, testing, packaging, and/or deploying a program. The processor is configured to determine a dependency task tree, which includes all tasks in the task list and all prerequisite tasks for each task in the task list, and to provide the dependency task tree. The interface is configured to receive the dependency task tree. The processor is configured to determine a set of tasks such that a task of the set of tasks does not depend on any other task; add the set of tasks to a task queue; add a dependent task to the task queue in response to determining that all dependencies of the dependent task are completed; and continue executing tasks from the task queue until all tasks in the dependency task tree are completed.
    Type: Grant
    Filed: July 10, 2020
    Date of Patent: October 25, 2022
    Assignee: Workday, Inc.
    Inventor: Brian Oliver
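    A compact sketch of executing a dependency task tree the way the abstract outlines: tasks with no unfinished prerequisites enter the queue, and each completion releases its dependents (Kahn-style topological order); the dict-of-prerequisites input format is an assumption.

      from collections import deque

      def execute(tasks):
          # tasks: {task_name: set(of prerequisite task names)}
          pending = {t: set(deps) for t, deps in tasks.items()}
          dependents = {t: set() for t in tasks}
          for t, deps in tasks.items():
              for d in deps:
                  dependents[d].add(t)
          queue = deque(t for t, deps in pending.items() if not deps)   # no dependencies
          order = []
          while queue:
              t = queue.popleft()
              order.append(t)                        # "execute" the task
              for child in dependents[t]:
                  pending[child].discard(t)
                  if not pending[child]:             # all dependencies completed
                      queue.append(child)
          if len(order) != len(tasks):
              raise ValueError("cycle in dependency task tree")
          return order

      print(execute({"compile": set(), "test": {"compile"},
                     "package": {"test"}, "deploy": {"package"}}))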
  • Patent number: 11481250
    Abstract: A first workgroup is preempted in response to threads in the first workgroup executing a first wait instruction including a first value of a signal and a first hint indicating a type of modification for the signal. The first workgroup is scheduled for execution on a processor core based on a first context after preemption in response to the signal having the first value. A second workgroup is scheduled for execution on the processor core based on a second context in response to preempting the first workgroup and in response to the signal having a second value. A third context is prefetched into registers of the processor core based on the first hint and the second value. The first context is stored in a first portion of the registers and the second context is prefetched into a second portion of the registers prior to preempting the first workgroup.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: October 25, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Alexandru Dutu, Matthew David Sinclair, Bradford Beckmann, David A. Wood
  • Patent number: 11467846
    Abstract: Methods and systems related to the efficient execution of complex computations by a multicore processor and the movement of data among the various processing cores in the multicore processor are disclosed. A multicore processor stack for the multicore processor can include a computation layer, for conducting computations using the processing cores in the multicore processor, with executable instructions for processing pipelines in the processing cores. The multicore processor stack can also include a network-on-chip layer, for connecting the processing cores in the multicore processor, with executable instructions for routers and network interface units in the multicore processor. The computation layer and the network-on-chip layer can be logically isolated by a network-on-chip overlay layer.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: October 11, 2022
    Assignee: Tenstorrent Inc.
    Inventors: Davor Capalija, Ivan Matosevic, Jasmina Vasiljevic, Utku Aydonat, Andrew Lewycky, S. Alexander Chin, Ljubisa Bajic
  • Patent number: 11461132
    Abstract: A system for managing computational tasks in a queuing dataset includes at least one processor and a scheduler executed by the at least one processor. The scheduler is configured to simultaneously and circularly change an association of each of a plurality of computational task bins with a respective one of a plurality of time based priorities ordered in a fixed ascending order; receive a plurality of computational tasks; and allocate each of the plurality of computational tasks to one of the plurality of computational task bins according to a respective time constraint of the respective computational task and a current association of the plurality of computational task bins with the plurality of time based priorities. The scheduler is further configured to empty the computational task bin currently associated with the highest time based priority by sequentially outputting the computational tasks thereof.
    Type: Grant
    Filed: April 13, 2020
    Date of Patent: October 4, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Ayelet Wald, Dan Touitou, Michael Naaman, Alexander Kravtsov, Michael Charny, Max Komm
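    A simplified sketch of bins whose priority association rotates each tick, in the spirit of the abstract (and of classic timing wheels); the fixed bin count and the deadline-to-bin mapping are illustrative assumptions.

      from collections import deque

      class CircularBins:
          def __init__(self, n_bins):
              self.bins = [deque() for _ in range(n_bins)]
              self.head = 0               # index of the bin currently holding the highest priority

          def submit(self, task, ticks_until_deadline):
              # Allocate the task to a bin according to its time constraint.
              slot = (self.head + min(ticks_until_deadline, len(self.bins) - 1)) % len(self.bins)
              self.bins[slot].append(task)

          def tick(self):
              # Empty the most urgent bin, then rotate the priority association.
              drained = list(self.bins[self.head])
              self.bins[self.head].clear()
              self.head = (self.head + 1) % len(self.bins)
              return drained

      w = CircularBins(4)
      w.submit("t1", 0); w.submit("t2", 2); w.submit("t3", 0)
      print(w.tick())   # ['t1', 't3']
      print(w.tick())   # []
      print(w.tick())   # ['t2']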
  • Patent number: 11449359
    Abstract: Systems and methods are configured to perform prioritized processing of a plurality of processing objects under a time constraint. In various embodiments, a priority policy that includes deterministic prioritization rules, probabilistic prioritization rules, and a priority determination machine learning model is applied to the objects to determine high and low priority subsets. Here, the subsets are determined using the deterministic prioritization rules and a probabilistic ordering of the low priority subset is determined using the probabilistic prioritization rules and the priority determination machine learning model. In particular embodiments, the ordering is accomplished by determining a hybrid priority score for each object in the low priority subset based on a rule-based priority score and a machine-learning-based priority score.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: September 20, 2022
    Assignee: Optum Services (Ireland) Limited
    Inventors: David T. Cleere, Amanda McFadden, Barry A. Friel, William A. Dunphy, Christopher A. McLaughlin
  • Patent number: 11429452
    Abstract: This disclosure includes an improvement to hashing methods, which can help achieve faster load balancing of computing resources (e.g., processors, storage systems, web servers or other computer systems, etc.). This improvement may be particularly beneficial when a quantity of the available resources changes. Such hashing methods may include assigning a data object associated with a key to a particular computing resource of the available computing resources by using two auxiliary functions that work together to uniformly distribute data objects across available computing resources and reduce an amount of time to assign the data object to the particular computing resource.
    Type: Grant
    Filed: April 16, 2020
    Date of Patent: August 30, 2022
    Assignee: PayPal, Inc.
    Inventor: Eric Leu
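    The abstract does not disclose its two auxiliary functions, so as a stand-in here is a sketch of a well-known technique with the same goal (stable assignment with minimal remapping when the resource set changes): rendezvous, or highest-random-weight, hashing. This is not the patent's method.

      import hashlib

      def _weight(key, node):
          # Deterministic pseudo-random weight for a (key, node) pair.
          return int(hashlib.sha256(f"{key}:{node}".encode()).hexdigest(), 16)

      def assign(key, nodes):
          # The node with the highest weight wins; adding or removing a node
          # only remaps roughly 1/N of the keys.
          return max(nodes, key=lambda n: _weight(key, n))

      nodes = ["web-1", "web-2", "web-3"]
      print(assign("session-42", nodes))
      print(assign("session-42", nodes + ["web-4"]))   # most keys keep their old node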
  • Patent number: 11409526
    Abstract: A system and method which allows the basic checkpoint-reverse-mode AD strategy (of recursively decomposing the computation to reduce storage requirements of reverse-mode AD) to be applied to arbitrary programs: not just programs consisting of loops, but programs with arbitrarily complex control flow. The method comprises (a) transforming the program into a formalism that allows convenient manipulation by formal tools, and (b) introducing a set of operators to allow computations to be decomposed by running them for a given period of time then pausing them, while treating the paused program as a value subject to manipulation.
    Type: Grant
    Filed: September 13, 2017
    Date of Patent: August 9, 2022
    Assignee: Purdue Research Foundation
    Inventors: Jeffrey Mark Siskind, Barak Avrum Pearlmutter
  • Patent number: 11403190
    Abstract: Techniques are provided for dynamic snapshot scheduling. In an example, a dynamic snapshot scheduler can analyze historical data about storage system resources. The dynamic snapshot scheduler can use this historical data to predict how the storage system resources will be used in the future. Based on this prediction, the dynamic snapshot scheduler can schedule snapshot activities for one or more times that are relatively unlikely to experience system resource contention. The dynamic snapshot scheduler can then initiate snapshot activities at those scheduled times.
    Type: Grant
    Filed: October 5, 2020
    Date of Patent: August 2, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Deepak Nagarajegowda, Parminder Singh Sethi
  • Patent number: 11397612
    Abstract: Embodiments may relate to an electronic device that includes a processor communicatively coupled with a hardware accelerator. The processor may be configured to identify, based on an indication of a priority level in a task control block (TCB), a location at which the TCB should be inserted in a queue of TCBs. The hardware accelerator may perform jobs related to the queue of TCBs in an order related to the order of TCBs within the queue. Other embodiments may be described or claimed.
    Type: Grant
    Filed: October 23, 2019
    Date of Patent: July 26, 2022
    Assignee: ANALOG DEVICES INTERNATIONAL UNLIMITED COMPANY
    Inventors: Abhijit Giri, Rajib Sarkar
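    A sketch of priority-ordered insertion into a queue of task control blocks; the lower-value-first convention and the bisect-based list are illustrative choices, not the patent's data structure.

      import bisect
      from dataclasses import dataclass, field

      @dataclass(order=True)
      class TCB:
          priority: int                      # lower value = serviced sooner (assumption)
          job: str = field(compare=False)

      class TCBQueue:
          def __init__(self):
              self._q = []

          def insert(self, tcb):
              # Insert at the position implied by the TCB's priority level.
              bisect.insort(self._q, tcb)

          def pop_next(self):
              # The accelerator drains the queue front-first.
              return self._q.pop(0) if self._q else None

      q = TCBQueue()
      q.insert(TCB(3, "fir-filter"))
      q.insert(TCB(1, "fft"))
      q.insert(TCB(2, "iir-filter"))
      print([q.pop_next().job for _ in range(3)])   # ['fft', 'iir-filter', 'fir-filter']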
  • Patent number: 11392273
    Abstract: An example embodiment may involve receiving, by a server device disposed within a remote network management platform, a request for a graphical representation of capabilities provided by a set of applications configured to execute on computing devices disposed within a managed network, and obtaining, by the server device, information regarding the capabilities provided by the set of applications. The embodiment may further involve transmitting, by the server device and to the client device, a representation of a graphical user interface that includes a first portion populated by representations of the capabilities with capability scores that are color-coded to represent how well their respective capabilities are serviced by the applications. The graphical user interface may also include a second portion that is configurable to display counts of the capability scores with each color coding, or a specific capability of the capabilities mapped to applications that support the specific capability.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: July 19, 2022
    Assignee: ServiceNow, Inc.
    Inventors: Shankar Janardhan Kattamanchi, Praveen Minnikaran Damodaran, Nitin Lahanu Hase, Yogesh Deepak Devatraj, Krishna Chaitanya Durgasi, Sharath Chandra Lagisetty, Krishna Chaitanya Kagitala
  • Patent number: 11392422
    Abstract: The present application relates to executing a containerized application in a nested manner on two separate container orchestration services. For example, a user may submit a request to a container orchestration service to execute a containerized application, and in response, instead of identifying one of the existing compute instances belonging to the user and executing the containerized application on the identified compute instance, the container orchestration service may generate and submit a request to a serverless container management service that can not only acquire compute resources on behalf of the container orchestration service but also manage the compute resources such that the container orchestration service (or the original requesting user) does not need to manage scaling, monitoring, patching, and security of the compute resources.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: July 19, 2022
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Onur Filiz, Archana Srikanta, Venkata Satya Shyam Jeedigunta, Micah William Hausler, Sri Saran Balaji Vellore Rajakumar, Eswar Chander Balasubramanian, Anirudh Balachandra Aithal
  • Patent number: 11372857
    Abstract: Systems and methods are provided for receiving an input comprising one or more attributes, selecting a subset of query options from a list of query options relevant to the attributes of the input, and based on query optimization results from an audit of previous queries, determining a priority order to execute each query in the set of queries based on the query optimization results, and executing each query in the priority order to generate a candidate list. For each candidate in the list of candidates, systems and methods are provided for selecting a subset of available workflows based on relevance to the candidate and based on workflow optimization results, determining an order in which the selected subset of workflows is to be executed, and executing the selected subset of workflows in the determined order to generate a match score indicating the probability that the candidate matches the input.
    Type: Grant
    Filed: October 29, 2020
    Date of Patent: June 28, 2022
    Assignee: SAP SE
    Inventors: Quincy Milton, Henry Tsai, Uma Kale, Adam Horacek, Justin Dority, Phillip DuLion, Ian Kelley, Michael Lentz, Ryan Skorupski, Aditi Godbole, Haizhen Zhang
  • Patent number: 11373158
    Abstract: In some embodiments, a transaction-related communication system includes one or more receiving modules configured for receiving a first item of inventory transaction information from a customer-facing interface, and receiving a second item of inventory transaction information from a merchant-facing point-of-sale interface. In some embodiments, the transaction-related communication system includes an inventory coordination module configured for rendering in a common internal format the first item of inventory transaction information from the customer-facing interface, and rendering in the common internal format the second item of inventory transaction information from the merchant-facing point-of-sale interface.
    Type: Grant
    Filed: July 24, 2015
    Date of Patent: June 28, 2022
    Assignee: WORLDPAY US, INC.
    Inventors: Nish Modi, George Cowsar, Oleksii Skutarenko
  • Patent number: 11372682
    Abstract: Example embodiments of the present invention provide a method, a system, and a computer program product for managing tasks in a system. The method comprises running a first task on a system, wherein the first task has a first priority of execution time and the execution of which first task locks a resource on the system, and running a second task on the system, wherein the second task has a second priority of execution time earlier than the first priority of execution time of the first task and the execution of which second task requires the resource on the system locked by the first task. The system then may promote the first task having the later first priority of execution time to a new priority of execution time at least as early as the second priority of execution time of the second task and resume execution of the first task having the later first priority of execution time.
    Type: Grant
    Filed: March 11, 2020
    Date of Patent: June 28, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Alexandr Veprinsky, Felix Shvaiger, Anton Kucherov, Arieh Don
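    A minimal illustration of the promotion step described above: when a task with a later priority of execution time holds a resource needed by an earlier one, its priority is raised at least to the waiter's (the classic priority-inheritance remedy for priority inversion); the deadline-as-priority representation is an assumption.

      from dataclasses import dataclass

      @dataclass
      class Task:
          name: str
          deadline: float        # earlier deadline = earlier priority of execution time
          holds_lock: bool = False

      def promote_on_contention(holder: Task, waiter: Task) -> None:
          # Promote the lock holder so it runs at least as early as the waiting task.
          if holder.holds_lock and waiter.deadline < holder.deadline:
              holder.deadline = waiter.deadline

      low = Task("background-flush", deadline=50.0, holds_lock=True)
      high = Task("latency-critical-read", deadline=5.0)
      promote_on_contention(low, high)
      print(low.deadline)   # 5.0 -- the holder now runs no later than the waiter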
  • Patent number: 11366769
    Abstract: Enabling peripheral device messaging via application portals in processor-based devices is disclosed herein. In one embodiment, a processor-based device comprises a processing element (PE) including an application portal configured to logically operate as a message store, and that is exposed as an application portal address within an address space visible to a peripheral device that is communicatively coupled to the processor-based device. Upon receiving a message directed to the application portal address from the peripheral device, an application portal control circuit enqueues the message in the application portal. In some embodiments, the PE may further provide a dequeue instruction that may be executed as part of the application, and that results in a top element of the application portal being dequeued and transmitted to the application.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: June 21, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Artur Klauser, Jason S. Wohlgemuth, Abolade Gbadegesin, Gagan Gupta, Soheil Ebadian, Thomas Philip Speier, Derek Chiou
  • Patent number: 11366692
    Abstract: Tasks of a group are respectively assigned to devices for execution. For each task, a completion time is determined based on an associated cluster of the device to which the task has been assigned for execution. If the completion time of a task exceeds an execution window of the device to which the task has been assigned, the task is removed from the group. The tasks remaining in the group are executed on the devices to which the tasks have been assigned for execution.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: June 21, 2022
    Assignee: MICRO FOCUS LLC
    Inventors: Krishna Mahadevan Ramakrishnan, Venkatesh Ramteke, Shiva Prakash Sm
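    A short sketch of the pruning rule above: any task whose estimated completion time exceeds the execution window of its assigned device is removed from the group before execution; the dict layout is illustrative.

      def prune_group(assignments):
          # assignments: list of dicts with the task, its estimated completion time,
          # and the execution window of the device it was assigned to.
          kept, removed = [], []
          for a in assignments:
              (kept if a["completion"] <= a["window"] else removed).append(a["task"])
          return kept, removed

      kept, removed = prune_group([
          {"task": "backup-db", "completion": 40, "window": 60},
          {"task": "reindex",   "completion": 90, "window": 60},
      ])
      print(kept, removed)   # ['backup-db'] ['reindex']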
  • Patent number: 11366601
    Abstract: An apparatus comprises at least one processing device comprising a processor coupled to a memory; the at least one processing device being configured to: obtain a set of rebuild rate parameters for a given storage device from a storage array comprising a plurality of storage devices; and dynamically regulate a rebuild rate associated with a rebuild process for the given storage device based on the set of rebuild rate parameters obtained from the storage array for the given storage device. For example, the set of rebuild rate parameters include a rebuild capacity parameter and a rebuild time parameter.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: June 21, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Vamsi K. Vankamamidi, Shuyu Lee, Kurt W. Everson, Pavan Kumar Vutukuri, Andrew P. Kubicki
  • Patent number: 11349729
    Abstract: An enhancement device (10, 116) for enhancing service requests (120) and a method of allocating network resources to a network service in a communication network is provided. The communication network comprises network resources capable of providing a network service specified in a service request issued by a client. The service request (120) comprises a direct part (121) and an indirect part (122), while the indirect part comprises at least one allocation condition.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: May 31, 2022
    Assignees: KONINKLIJKE KPN N.V., IMEC VZW, UNIVERSITEIT GENT
    Inventors: Wouter Tavernier, Didier Colle
  • Patent number: 11347547
    Abstract: Systems and methods including one or more processors and one or more non-transitory storage devices storing computing instructions configured to run on the one or more processors and perform acts of receiving one or more processing requests; assigning each respective processing request of the one or more processing requests to a respective queue of one or more queues; assigning each respective queue of the one or more queues to a respective processing node of one or more processing nodes; calculating a respective processing request backlog for each respective processing node of the one or more processing nodes; and limiting a processing rate of the respective processing node for processing requests of the one or more processing requests of the respective queue based on the respective processing request backlog for the respective processing node. Other embodiments are disclosed herein.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: May 31, 2022
    Assignee: WALMART APOLLO, LLC
    Inventor: Menkae Jeng
  • Patent number: 11347544
    Abstract: In one embodiment, a method includes generating one or more queues by an application executing on a client system, wherein each queue is associated with one or more declarative attributes, wherein each declarative attribute declares a processing requirement or a processing preference, generating one or more work items to be processed, for each of the one or more work items enqueuing the work item into a selected one of the one or more queues based on the one or more declarative attributes associated with the selected queue, and providing the one or more queues to a scheduler of an operating system of the client system, wherein the scheduler is configured to schedule each of the one or more work items for processing based on one or more policies and the one or more declarative attributes of the selected queue for that work item.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: May 31, 2022
    Assignee: Facebook Technologies, LLC.
    Inventors: Vadim Victor Spivak, Bernhard Poess
  • Patent number: 11347552
    Abstract: Techniques for allocating resources in a system may include: monitoring, using a first proportional-integral-derivative (PID) controller, a size of a pool of free shared resources of a first type; responsive to determining the size of the pool of the free shared resources is at least a minimum threshold, providing the size of the pool of free shared resources as an input to a second PID controller; monitoring, using the second PID controller, a total amount of resources of the first type that are available; determining, using the second PID controller and in accordance with one or more resource policies for one or more applications, a deallocation rate or amount; deallocating, using the second PID controller and in accordance with the deallocation rate or amount, resources of the first type; and allocating at least a first of the deallocated resources for use by one of the applications.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: May 31, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Jonathan I. Krasner, Chakib Ouarraoui
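    A sketch using a single textbook discrete PID controller to regulate a free-resource pool toward a setpoint; the patent describes a cascade of two PID controllers and per-application policies, which this toy loop does not attempt to reproduce, and all gains are placeholders.

      class PID:
          # Textbook discrete PID controller; gains are arbitrary placeholders.
          def __init__(self, kp, ki, kd, setpoint):
              self.kp, self.ki, self.kd = kp, ki, kd
              self.setpoint = setpoint
              self._integral = 0.0
              self._prev_error = 0.0

          def update(self, measured, dt=1.0):
              error = self.setpoint - measured
              self._integral += error * dt
              derivative = (error - self._prev_error) / dt
              self._prev_error = error
              return self.kp * error + self.ki * self._integral + self.kd * derivative

      pool = PID(kp=0.6, ki=0.05, kd=0.1, setpoint=100)   # desired free resource units
      free_units = 40.0
      for _ in range(5):
          correction = pool.update(free_units)
          dealloc = max(0.0, correction)      # only reclaim resources, never a negative rate
          free_units += dealloc * 0.5         # toy model of reclaimed resources returning to the pool
          print(round(free_units, 1))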
  • Patent number: 11340948
    Abstract: A method for controlling a transactional processing system having transactions that include multiple tasks, a throughput limit, and a transaction processing time limit includes allocating a plurality of threads to be used by multiple tasks to achieve a throughput approximating the throughput limit. The method assigns the multiple tasks to the plurality of threads and assigns respectively different processing delays to the plurality of threads. The processing delays span an interval less than the transaction processing time limit. The method processes the multiple tasks within the transaction processing time limit by executing the plurality of threads at times determined by the respective processing delays.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: May 24, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jovin Vasanth Kumar Deva Sahayam Arul Raj, Avinash G. Pillai, Apsara Karen Selvanayagam, Jinghua Chen
  • Patent number: 11334279
    Abstract: Example distributed storage systems, controller nodes, and methods provide hierarchical blacklisting of storage system components in response to failed storage requests. Storage elements are accessible through hierarchical storage paths traversing multiple system components. Blacklisted components are aggregated and evaluated against a hierarchy threshold at each level of the hierarchy and all components below the component are blacklisted if the hierarchy threshold is met. Blacklisted components are avoided during subsequent storage requests.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: May 17, 2022
    Assignee: Western Digital Technologies, Inc.
    Inventors: Stijn Devriendt, Lien Boelaert, Arne De Coninck, Sam De Roeck
  • Patent number: 11336521
    Abstract: An acceleration resource scheduling method includes: receiving an acceleration instruction sent by the virtual machine, where the acceleration instruction includes to-be-accelerated data; determining a virtual accelerator allocated to the virtual machine; determining, based on the virtual accelerator, a network accelerator that is to process the acceleration instruction, and sending the acceleration instruction to the network accelerator, so that the network accelerator sends the acceleration instruction to a physical accelerator that is to process the acceleration instruction; receiving a computing result that is returned after the physical accelerator performs acceleration computing on the to-be-accelerated data by using the physical acceleration resource; and sending the computing result to the virtual machine.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: May 17, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Xiaolin Jia, Junjie Wang
  • Patent number: 11330047
    Abstract: Work-load management in a client-server infrastructure includes setting request information in accordance with request semantics corresponding to a type of request from a client. The request semantics include different request-types provided with different priorities during processing. Within a server, requests with high priority are included in a standard request processing queue. Further, requests with low priority are excluded from the standard request processing queue when server workload of the server exceeds a predetermined first threshold value.
    Type: Grant
    Filed: January 4, 2017
    Date of Patent: May 10, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Matthias Falkenberg, Andreas Nauerz, Sascha Sambale, Sven Ole Stueven
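    A minimal sketch of the admission rule above: high-priority requests always enter the standard processing queue, while low-priority requests are excluded once server load exceeds a threshold; the load metric and threshold value are placeholders.

      import queue

      HIGH, LOW = "high", "low"

      def admit(request_priority, current_load, threshold, standard_queue: queue.Queue):
          # Shed low-priority work when the server is overloaded; always keep high priority.
          if request_priority == LOW and current_load > threshold:
              return False                       # excluded from the standard queue
          standard_queue.put(request_priority)
          return True

      q = queue.Queue()
      print(admit(HIGH, current_load=0.95, threshold=0.8, standard_queue=q))  # True
      print(admit(LOW,  current_load=0.95, threshold=0.8, standard_queue=q))  # False
      print(admit(LOW,  current_load=0.50, threshold=0.8, standard_queue=q))  # True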
  • Patent number: 11322033
    Abstract: Method, apparatus, and computer program product are provided for assessing road surface condition. In some embodiments, candidate locations each forecast to have a dangerous road surface condition are determined, an optimized flight path is determined comprising a sequence of sites corresponding to the candidate locations, dispatch is made to a first site within the sequence, and a road surface condition at the first site is assessed using an onboard sensor (e.g., spectroradiometer). In some embodiments, a check for new information is performed before dispatch is made to a second site. In some embodiments, the candidate locations are determined using both a model forecast and data-mined locations considered hazardous. In some embodiments, the optimized flight path is determined using TSP optimization constrained by available flight time and prioritized by frequency of historical incident and severity of forecast road surface condition.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: May 3, 2022
    Assignee: International Business Machines Corporation
    Inventors: Eli M. Dow, Campbell D. Watson, Guillaume A. R. Auger, Michael E. Henderson
  • Patent number: 11323339
    Abstract: An example computing device is configured to receive, from a customer device, an indication of a plurality of resources and an indication of a plurality of customer services, each of the plurality of customer services being associated with a corresponding at least one requirement and a corresponding at least one constraint. The computing device is configured to automatically determine, for each requirement and each constraint, whether the requirement or the constraint can only be satisfied by a particular resource of the plurality of resources, and allocate, based on the determining, at least one resource of the plurality of resources to at least one customer service of the plurality of customer services. The example computing device is configured to provide, to the customer device and subsequent to the determining for every requirement and for every constraint, information to enable the customer device to provision the at least one customer service.
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: May 3, 2022
    Assignee: Juniper Networks, Inc.
    Inventors: Gregory A. Sidebottom, Kireeti Kompella
  • Patent number: 11321263
    Abstract: An apparatus includes a first port set that includes an input port and an output port. The apparatus further includes a plurality of second port sets. Each of the second port sets includes an input port coupled to the output port of the first port set and an output port coupled to the input port of the first port set. The plurality of second port sets are to each communicate at a first maximum bandwidth and the first port set is to communicate at a second maximum bandwidth that is higher than the first maximum bandwidth.
    Type: Grant
    Filed: December 17, 2014
    Date of Patent: May 3, 2022
    Assignee: Intel Corporation
    Inventors: Himanshu Kaul, Mark A. Anders, Gregory K. Chen
  • Patent number: 11316952
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for limiting load on host servers that implement a social messaging platform. An example user device sends, to a platform comprising a plurality of host servers, a first request. The request is directed to a first endpoint. The user device receives, in response to the first request, a first error that indicates that the first request was not processed. The user device determines a back off time and places subsequent requests to the platform that are initiated before the back off time elapses and that are directed to the first endpoint in a back off queue in an order in which the subsequent requests are initiated. The user device sends, to the platform, the requests in the back off queue after the back off time has elapsed, until the back off queue is empty.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: April 26, 2022
    Assignee: Twitter, Inc.
    Inventor: Nolan Daniel O'Brien
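    A client-side sketch of the back-off behaviour described above: after an error for an endpoint, later requests to that endpoint are held in a FIFO queue and flushed in order once the back-off time elapses; the fixed 5-second back-off and the send callback signature are assumptions.

      import time
      from collections import deque

      class BackoffClient:
          def __init__(self, send, backoff_seconds=5.0):
              self.send = send                   # callable(endpoint, payload) -> True on success
              self.backoff_seconds = backoff_seconds
              self.backoff_until = {}            # endpoint -> absolute monotonic time
              self.queues = {}                   # endpoint -> deque of held payloads

          def request(self, endpoint, payload):
              now = time.monotonic()
              if now < self.backoff_until.get(endpoint, 0.0):
                  # Still backing off: hold the request in order of initiation.
                  self.queues.setdefault(endpoint, deque()).append(payload)
              elif not self.send(endpoint, payload):
                  # Error response: start backing off and keep the request for later.
                  self.backoff_until[endpoint] = now + self.backoff_seconds
                  self.queues.setdefault(endpoint, deque()).append(payload)

          def flush(self, endpoint):
              # Call once the back-off time has elapsed; drains the queue front-first.
              q = self.queues.get(endpoint, deque())
              while q and time.monotonic() >= self.backoff_until.get(endpoint, 0.0):
                  if self.send(endpoint, q[0]):
                      q.popleft()
                  else:
                      self.backoff_until[endpoint] = time.monotonic() + self.backoff_seconds
                      break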
  • Patent number: 11307988
    Abstract: A device includes a memory bank. The memory bank includes data portions of a first way group. The data portions of the first way group include a data portion of a first way of the first way group and a data portion of a second way of the first way group. The memory bank further includes data portions of a second way group. The device further includes a configuration register and a controller configured to individually allocate, based on one or more settings in the configuration register, the first way and the second way to one of an addressable memory space and a data cache.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: April 19, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Kai Chirca, Matthew David Pierson
  • Patent number: 11301340
    Abstract: Distributed processors and methods for compiling code for execution by distributed processors are disclosed. In one implementation, a distributed processor may include a substrate; a memory array disposed on the substrate; and a processing array disposed on the substrate. The memory array may include a plurality of discrete memory banks, and the processing array may include a plurality of processor subunits, each one of the processor subunits being associated with a corresponding, dedicated one of the plurality of discrete memory banks. The distributed processor may further include a first plurality of buses, each connecting one of the plurality of processor subunits to its corresponding, dedicated memory bank, and a second plurality of buses, each connecting one of the plurality of processor subunits to another of the plurality of processor subunits.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: April 12, 2022
    Assignee: NeuroBlade Ltd.
    Inventors: Elad Sity, Eliad Hillel
  • Patent number: 11301293
    Abstract: A job scheduler system includes one or more hardware processors, a memory including a job group queue stored in the memory, and a job scheduler engine configured to create a first job group in the job group queue, the first job group includes a generation counter having an initial value, receive a first request to steal the first job group, determine a state of the first job group based at least in part on the generation counter, the state indicating that the first job group is available to steal, based on the determining the state of the first job group, atomically increment the generation counter, thereby making the first job group unavailable for stealing, and alter an execution order of the first job group ahead of at least one other job group in the job group queue.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: April 12, 2022
    Assignee: Unity IPR ApS
    Inventor: Benoit Sevigny
  • Patent number: 11301445
    Abstract: A graph-based program specification includes: a plurality of components, each corresponding to a processing task and including one or more ports for sending or receiving one or more data elements; and one or more links, each connecting an output port of an upstream component of the plurality of components to an input port of a downstream component of the plurality of components. Prepared code is generated representing subsets of the plurality of components, including: identifying a plurality of subset boundaries between components in different subsets based at least in part on characteristics of linked components; forming the subsets based on the identified subset boundaries; and generating prepared code for each formed subset that when used for execution by a runtime system causes processing tasks corresponding to the components in that formed subset to be performed according to information embedded in the prepared code for that formed subset.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: April 12, 2022
    Assignee: Ab Initio Technology LLC
    Inventors: Craig W. Stanfill, Richard Shapiro, Stephen A. Kukolich
  • Patent number: 11294821
    Abstract: A write-back cache device of an embodiment includes a first storage device capable of storing n pieces of unit data in each of a plurality of cache lines, a second storage device configured to store state instruction data in each of the plurality of cache lines, and a cache controller configured to control inputting to and outputting from the first and second storage devices. The state instruction data has a first value when data in a cache line is not different from data in a main memory, has a second value when two or more pieces of unit data are different from data in the main memory, or has a third value when only one piece of unit data is different from data in the main memory.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: April 5, 2022
    Assignees: KABUSHIKI KAISHA TOSHIBA, TOSHIBA ELECTRONIC DEVICES & STORAGE CORPORATION
    Inventor: Nobuaki Sakamoto
  • Patent number: 11288569
    Abstract: A vehicular driving assistance system includes an exterior viewing camera disposed at a vehicle and an ECU disposed at the vehicle for processing captured image data to detect an object exterior of the vehicle. The ECU performs processing tasks for multiple vehicle systems, including at least (i) a headlamp control system, (ii) a collision avoidance system and (iii) a lane departure warning system. Responsive to determination at the ECU that one of the multiple vehicle systems requires safety critical processing, (i) processing for that vehicle system is determined at the ECU to be a higher priority task, (ii) the ECU performs safety critical processing for that higher priority task and (iii) lower priority processing tasks are shifted from the ECU to other processors within the vehicle so that the ECU maximizes safety critical processing for that higher priority task.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: March 29, 2022
    Assignee: MAGNA ELECTRONICS INC.
    Inventor: John Lu