Priority Scheduling Patents (Class 718/103)
-
Patent number: 11392422
Abstract: The present application relates to executing a containerized application in a nested manner on two separate container orchestration services. For example, a user may submit a request to a container orchestration service to execute a containerized application, and in response, instead of identifying one of the existing compute instances belonging to the user and executing the containerized application on the identified compute instance, the container orchestration service may generate and submit a request to a serverless container management service that can not only acquire compute resources on behalf of the container orchestration service but also manage the compute resources such that the container orchestration service (or the original requesting user) does not need to manage scaling, monitoring, patching, and security of the compute resources.
Type: Grant
Filed: November 27, 2019
Date of Patent: July 19, 2022
Assignee: AMAZON TECHNOLOGIES, INC.
Inventors: Onur Filiz, Archana Srikanta, Venkata Satya Shyam Jeedigunta, Micah William Hausler, Sri Saran Balaji Vellore Rajakumar, Eswar Chander Balasubramanian, Anirudh Balachandra Aithal
-
Patent number: 11392273
Abstract: An example embodiment may involve receiving, by a server device disposed within a remote network management platform, a request for a graphical representation of capabilities provided by a set of applications configured to execute on computing devices disposed within a managed network, and obtaining, by the server device, information regarding the capabilities provided by the set of applications. The embodiment may further involve transmitting, by the server device and to the client device, a representation of a graphical user interface that includes a first portion populated by representations of the capabilities with capability scores that are color-coded to represent how well their respective capabilities are serviced by the applications. The graphical user interface may also include a second portion that is configurable to display counts of the capability scores with each color coding, or a specific capability of the capabilities mapped to applications that support the specific capability.
Type: Grant
Filed: September 4, 2020
Date of Patent: July 19, 2022
Assignee: ServiceNow, Inc.
Inventors: Shankar Janardhan Kattamanchi, Praveen Minnikaran Damodaran, Nitin Lahanu Hase, Yogesh Deepak Devatraj, Krishna Chaitanya Durgasi, Sharath Chandra Lagisetty, Krishna Chaitanya Kagitala
-
Patent number: 11373158
Abstract: In some embodiments, a transaction-related communication system includes one or more receiving modules configured for receiving a first item of inventory transaction information from a customer-facing interface, and receiving a second item of inventory transaction information from a merchant-facing point-of-sale interface. In some embodiments, the transaction-related communication system includes an inventory coordination module configured for rendering in a common internal format the first item of inventory transaction information from the customer-facing interface, and rendering in the common internal format the second item of inventory transaction information from the merchant-facing point-of-sale interface.
Type: Grant
Filed: July 24, 2015
Date of Patent: June 28, 2022
Assignee: WORLDPAY US, INC.
Inventors: Nish Modi, George Cowsar, Oleksii Skutarenko
-
Patent number: 11372857
Abstract: Systems and methods are provided for receiving an input comprising one or more attributes, selecting a subset of query options from a list of query options relevant to the attributes of the input, and based on query optimization results from an audit of previous queries, determining a priority order to execute each query in the set of queries based on the query optimization results, and executing each query in the priority order to generate a candidate list. For each candidate in the list of candidates, systems and methods are provided for selecting a subset of available workflows based on relevance to the candidate and based on workflow optimization results, determining an order in which the selected subset of workflows is to be executed, and executing the selected subset of workflows in the determined order to generate a match score indicating the probability that the candidate matches the input.
Type: Grant
Filed: October 29, 2020
Date of Patent: June 28, 2022
Assignee: SAP SE
Inventors: Quincy Milton, Henry Tsai, Uma Kale, Adam Horacek, Justin Dority, Phillip DuLion, Ian Kelley, Michael Lentz, Ryan Skorupski, Aditi Godbole, Haizhen Zhang
-
Patent number: 11372682
Abstract: Example embodiments of the present invention provide a method, a system, and a computer program product for managing tasks in a system. The method comprises running a first task on a system, wherein the first task has a first priority of execution time and the execution of which first task locks a resource on the system, and running a second task on the system, wherein the second task has a second priority of execution time earlier than the first priority of execution time of the first task and the execution of which second task requires the resource on the system locked by the first task. The system then may promote the first task having the later first priority of execution time to a new priority of execution time at least as early as the second priority of execution time of the second task and resume execution of the first task having the later first priority of execution time.
Type: Grant
Filed: March 11, 2020
Date of Patent: June 28, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Alexandr Veprinsky, Felix Shvaiger, Anton Kucherov, Arieh Don
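The mechanism this abstract describes, promoting a lock-holding task to at least the priority of a blocked higher-priority task, is the classic priority-inheritance pattern. A minimal sketch of that idea (class and method names are illustrative, not taken from the patent):

```python
class Task:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority  # lower number = earlier execution time

class Lock:
    """Non-blocking lock that applies priority inheritance on contention."""
    def __init__(self):
        self.holder = None

    def acquire(self, task):
        if self.holder is None:
            self.holder = task
            return True
        # Contention: promote the holder to at least the waiter's priority
        # so the holder can run, finish, and release the resource sooner.
        if task.priority < self.holder.priority:
            self.holder.priority = task.priority
        return False

lock = Lock()
low = Task("low", priority=10)
high = Task("high", priority=1)
lock.acquire(low)    # low-priority task holds the resource
lock.acquire(high)   # high-priority task blocks; holder is promoted
print(low.priority)  # -> 1
```

A production implementation would also demote the holder back to its base priority on release; this sketch shows only the promotion step the abstract focuses on.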
-
Patent number: 11366769
Abstract: Enabling peripheral device messaging via application portals in processor-based devices is disclosed herein. In one embodiment, a processor-based device comprises a processing element (PE) including an application portal configured to logically operate as a message store, and that is exposed as an application portal address within an address space visible to a peripheral device that is communicatively coupled to the processor-based device. Upon receiving a message directed to the application portal address from the peripheral device, an application portal control circuit enqueues the message in the application portal. In some embodiments, the PE may further provide a dequeue instruction that may be executed as part of the application, and that results in a top element of the application portal being dequeued and transmitted to the application.
Type: Grant
Filed: February 25, 2021
Date of Patent: June 21, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Artur Klauser, Jason S. Wohlgemuth, Abolade Gbadegesin, Gagan Gupta, Soheil Ebadian, Thomas Philip Speier, Derek Chiou
-
Patent number: 11366692
Abstract: Tasks of a group are respectively assigned to devices for execution. For each task, a completion time is determined based on an associated cluster of the device to which the task has been assigned for execution. If the completion time of a task exceeds an execution window of the device to which the task has been assigned, the task is removed from the group. The tasks remaining in the group are executed on the devices to which the tasks have been assigned for execution.
Type: Grant
Filed: October 25, 2019
Date of Patent: June 21, 2022
Assignee: MICRO FOCUS LLC
Inventors: Krishna Mahadevan Ramakrishnan, Venkatesh Ramteke, Shiva Prakash Sm
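The filtering step above can be sketched as a simple pass over the group: keep a task only if its estimated completion time fits the execution window of its assigned device. All names and numbers below are invented for illustration:

```python
def filter_group(tasks, completion_time, windows):
    """Keep only tasks whose estimated completion time fits the
    execution window of the device they are assigned to.

    tasks: list of (task_name, device) assignments
    completion_time: maps device -> estimated completion time (hours)
    windows: maps device -> execution window length (hours)
    """
    remaining = []
    for task, device in tasks:
        if completion_time[device] <= windows[device]:
            remaining.append((task, device))
    return remaining

tasks = [("t1", "dev-a"), ("t2", "dev-b"), ("t3", "dev-a")]
completion = {"dev-a": 2.0, "dev-b": 5.0}
windows = {"dev-a": 4.0, "dev-b": 3.0}
print(filter_group(tasks, completion, windows))
# -> [('t1', 'dev-a'), ('t3', 'dev-a')]
```

In the patent the completion estimate comes from the device's associated cluster; here it is simply a per-device lookup to keep the sketch self-contained.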
-
Patent number: 11366601
Abstract: An apparatus comprises at least one processing device comprising a processor coupled to a memory; the at least one processing device being configured to: obtain a set of rebuild rate parameters for a given storage device from a storage array comprising a plurality of storage devices; and dynamically regulate a rebuild rate associated with a rebuild process for the given storage device based on the set of rebuild rate parameters obtained from the storage array for the given storage device. For example, the set of rebuild rate parameters include a rebuild capacity parameter and a rebuild time parameter.
Type: Grant
Filed: June 22, 2020
Date of Patent: June 21, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Vamsi K. Vankamamidi, Shuyu Lee, Kurt W. Everson, Pavan Kumar Vutukuri, Andrew P. Kubicki
-
Patent number: 11347552
Abstract: Techniques for allocating resources in a system may include: monitoring, using a first proportional-integral-derivative (PID) controller, a size of a pool of free shared resources of a first type; responsive to determining the size of the pool of the free shared resources is at least a minimum threshold, providing the size of the pool of free shared resources as an input to a second PID controller; monitoring, using the second PID controller, a total amount of resources of the first type that are available; determining, using the second PID controller and in accordance with one or more resource policies for one or more applications, a deallocation rate or amount; deallocating, using the second PID controller and in accordance with the deallocation rate or amount, resources of the first type; and allocating at least a first of the deallocated resources for use by one of the applications.
Type: Grant
Filed: May 29, 2020
Date of Patent: May 31, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Jonathan I. Krasner, Chakib Ouarraoui
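The core building block here, a PID controller driving a measured pool size toward a setpoint, can be sketched as follows. The gains and setpoint are invented for illustration; the patent layers two such controllers plus per-application resource policies on top:

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured):
        error = self.setpoint - measured
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        # Positive output -> deallocate more resources into the free pool;
        # negative output -> the pool is above target and can shrink.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.5, ki=0.1, kd=0.2, setpoint=100)  # target free-pool size
correction = pid.update(80)  # pool currently holds 80 free resources
print(correction)  # -> 16.0  (0.5*20 + 0.1*20 + 0.2*20)
```

Mapping the controller output to an actual deallocation rate or amount is where the patent's resource policies come in; the sketch stops at the raw control signal.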
-
Patent number: 11347547
Abstract: Systems and methods including one or more processors and one or more non-transitory storage devices storing computing instructions configured to run on the one or more processors and perform acts of receiving one or more processing requests; assigning each respective processing request of the one or more processing requests to a respective queue of one or more queues; assigning each respective queue of the one or more queues to a respective processing node of one or more processing nodes; calculating a respective processing request backlog for each respective processing node of the one or more processing nodes; and limiting a processing rate of the respective processing node for processing requests of the one or more processing requests of the respective queue based on the respective processing request backlog for the respective processing node. Other embodiments are disclosed herein.
Type: Grant
Filed: March 9, 2020
Date of Patent: May 31, 2022
Assignee: WALMART APOLLO, LLC
Inventor: Menkae Jeng
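The last step, limiting a node's processing rate based on its backlog, can be sketched as a simple scaling rule. The threshold, rates, and scaling function below are invented; the patent does not prescribe a particular formula:

```python
def throttled_rate(base_rate, backlog, backlog_limit, min_rate=1):
    """Limit a node's processing rate in proportion to its request backlog.

    Returns base_rate while the backlog is within the limit, then scales
    the rate down as the backlog grows (never below min_rate).
    """
    if backlog <= backlog_limit:
        return base_rate
    scaled = int(base_rate * backlog_limit / backlog)
    return max(min_rate, scaled)

print(throttled_rate(base_rate=100, backlog=50, backlog_limit=100))      # -> 100
print(throttled_rate(base_rate=100, backlog=200, backlog_limit=100))     # -> 50
print(throttled_rate(base_rate=100, backlog=100000, backlog_limit=100))  # -> 1
```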
-
Patent number: 11347544
Abstract: In one embodiment, a method includes generating one or more queues by an application executing on a client system, wherein each queue is associated with one or more declarative attributes, wherein each declarative attribute declares a processing requirement or a processing preference, generating one or more work items to be processed, for each of the one or more work items enqueuing the work item into a selected one of the one or more queues based on the one or more declarative attributes associated with the selected queue, and providing the one or more queues to a scheduler of an operating system of the client system, wherein the scheduler is configured to schedule each of the one or more work items for processing based on one or more policies and the one or more declarative attributes of the selected queue for that work item.
Type: Grant
Filed: September 26, 2019
Date of Patent: May 31, 2022
Assignee: Facebook Technologies, LLC
Inventors: Vadim Victor Spivak, Bernhard Poess
-
Patent number: 11349729
Abstract: An enhancement device (10, 116) for enhancing service requests (120) and a method of allocating network resources to a network service in a communication network are provided. The communication network comprises network resources capable of providing a network service specified in a service request issued by a client. The service request (120) comprises a direct part (121) and an indirect part (122), while the indirect part comprises at least one allocation condition.
Type: Grant
Filed: December 30, 2016
Date of Patent: May 31, 2022
Assignees: KONINKLIJKE KPN N.V., IMEC VZW, UNIVERSITEIT GENT
Inventors: Wouter Tavernier, Didier Colle
-
Patent number: 11340948
Abstract: A method for controlling a transactional processing system having transactions that include multiple tasks, a throughput limit, and a transaction processing time limit includes allocating a plurality of threads to be used by multiple tasks to achieve a throughput approximating the throughput limit. The method assigns the multiple tasks to the plurality of threads and assigns respectively different processing delays to the plurality of threads. The processing delays span an interval less than the transaction processing time limit. The method processes the multiple tasks within the transaction processing time limit by executing the plurality of threads at times determined by the respective processing delays.
Type: Grant
Filed: September 20, 2019
Date of Patent: May 24, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jovin Vasanth Kumar Deva Sahayam Arul Raj, Avinash G. Pillai, Apsara Karen Selvanayagam, Jinghua Chen
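The delay-assignment step can be sketched as: spread thread start times evenly across the interval that remains after reserving each thread's task time, so every thread still finishes within the overall limit. The specific numbers are invented:

```python
def assign_delays(num_threads, time_limit, task_time):
    """Assign respectively different processing delays to threads.

    Delays span [0, time_limit - task_time], so each thread completes
    at delay + task_time <= time_limit.
    """
    span = time_limit - task_time
    if num_threads == 1:
        return [0.0]
    step = span / (num_threads - 1)
    return [i * step for i in range(num_threads)]

delays = assign_delays(num_threads=5, time_limit=10.0, task_time=2.0)
print(delays)  # -> [0.0, 2.0, 4.0, 6.0, 8.0]
# Every thread finishes by delay + task_time = at most 10.0
```

Staggering the starts like this smooths the load (approximating the throughput limit) instead of firing all threads at once.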
-
Patent number: 11336521
Abstract: An acceleration resource scheduling method includes: receiving an acceleration instruction sent by the virtual machine, where the acceleration instruction includes to-be-accelerated data; determining a virtual accelerator allocated to the virtual machine; determining, based on the virtual accelerator, a network accelerator that is to process the acceleration instruction, and sending the acceleration instruction to the network accelerator, so that the network accelerator sends the acceleration instruction to a physical accelerator that is to process the acceleration instruction; receiving a computing result that is returned after the physical accelerator performs acceleration computing on the to-be-accelerated data by using the physical acceleration resource; and sending the computing result to the virtual machine.
Type: Grant
Filed: May 13, 2020
Date of Patent: May 17, 2022
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Xiaolin Jia, Junjie Wang
-
Patent number: 11334279
Abstract: Example distributed storage systems, controller nodes, and methods provide hierarchical blacklisting of storage system components in response to failed storage requests. Storage elements are accessible through hierarchical storage paths traversing multiple system components. Blacklisted components are aggregated and evaluated against a hierarchy threshold at each level of the hierarchy and all components below the component are blacklisted if the hierarchy threshold is met. Blacklisted components are avoided during subsequent storage requests.
Type: Grant
Filed: November 14, 2019
Date of Patent: May 17, 2022
Assignee: Western Digital Technologies, Inc.
Inventors: Stijn Devriendt, Lien Boelaert, Arne De Coninck, Sam De Roeck
-
Patent number: 11330047
Abstract: Work-load management in a client-server infrastructure includes setting request information in accordance with request semantics corresponding to a type of request from a client. The request semantics include different request-types provided with different priorities during processing. Within a server, requests with high priority are included in a standard request processing queue. Further, requests with low priority are excluded from the standard request processing queue when server workload of the server exceeds a predetermined first threshold value.
Type: Grant
Filed: January 4, 2017
Date of Patent: May 10, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Matthias Falkenberg, Andreas Nauerz, Sascha Sambale, Sven Ole Stueven
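The admission rule described, always enqueue high-priority requests, but shed low-priority ones once server workload exceeds a threshold, can be sketched as (priority labels and workload values are invented):

```python
def admit(queue, request, priority, workload, threshold):
    """Admit a request to the standard processing queue.

    High-priority requests are always enqueued; low-priority requests
    are excluded once workload exceeds the threshold.
    """
    if priority == "high" or workload <= threshold:
        queue.append(request)
        return True
    return False

queue = []
admit(queue, "r1", "high", workload=0.9, threshold=0.7)  # admitted despite load
admit(queue, "r2", "low", workload=0.9, threshold=0.7)   # shed: over threshold
admit(queue, "r3", "low", workload=0.5, threshold=0.7)   # admitted: under threshold
print(queue)  # -> ['r1', 'r3']
```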
-
Patent number: 11322033
Abstract: Method, apparatus, and computer program product are provided for assessing road surface condition. In some embodiments, candidate locations each forecast to have a dangerous road surface condition are determined, an optimized flight path is determined comprising a sequence of sites corresponding to the candidate locations, dispatch is made to a first site within the sequence, and a road surface condition at the first site is assessed using an onboard sensor (e.g., spectroradiometer). In some embodiments, a check for new information is performed before dispatch is made to a second site. In some embodiments, the candidate locations are determined using both a model forecast and data-mined locations considered hazardous. In some embodiments, the optimized flight path is determined using TSP optimization constrained by available flight time and prioritized by frequency of historical incident and severity of forecast road surface condition.
Type: Grant
Filed: August 27, 2019
Date of Patent: May 3, 2022
Assignee: International Business Machines Corporation
Inventors: Eli M. Dow, Campbell D. Watson, Guillaume A. R. Auger, Michael E. Henderson
-
Patent number: 11321263
Abstract: An apparatus includes a first port set that includes an input port and an output port. The apparatus further includes a plurality of second port sets. Each of the second port sets includes an input port coupled to the output port of the first port set and an output port coupled to the input port of the first port set. The plurality of second port sets are to each communicate at a first maximum bandwidth and the first port set is to communicate at a second maximum bandwidth that is higher than the first maximum bandwidth.
Type: Grant
Filed: December 17, 2014
Date of Patent: May 3, 2022
Assignee: Intel Corporation
Inventors: Himanshu Kaul, Mark A. Anders, Gregory K. Chen
-
Patent number: 11323339
Abstract: An example computing device is configured to receive, from a customer device, an indication of a plurality of resources and an indication of a plurality of customer services, each of the plurality of customer services being associated with a corresponding at least one requirement and a corresponding at least one constraint. The computing device is configured to automatically determine, for each requirement and each constraint, whether the requirement or the constraint can only be satisfied by a particular resource of the plurality of resources, and allocate, based on the determining, at least one resource of the plurality of resources to at least one customer service of the plurality of customer services. The example computing device is configured to provide, to the customer device and subsequent to the determining for every requirement and for every constraint, information to enable the customer device to provision the at least one customer service.
Type: Grant
Filed: August 27, 2021
Date of Patent: May 3, 2022
Assignee: Juniper Networks, Inc.
Inventors: Gregory A. Sidebottom, Kireeti Kompella
-
Patent number: 11316952
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for limiting load on host servers that implement a social messaging platform. An example user device sends, to a platform comprising a plurality of host servers, a first request. The request is directed to a first endpoint. The user device receives, in response to the first request, a first error that indicates that the first request was not processed. The user device determines a back off time and places subsequent requests to the platform that are initiated before the back off time elapses and that are directed to the first endpoint in a back off queue in an order in which the subsequent requests are initiated. The user device sends, to the platform, the requests in the back off queue after the back off time has elapsed, until the back off queue is empty.
Type: Grant
Filed: January 29, 2021
Date of Patent: April 26, 2022
Assignee: Twitter, Inc.
Inventor: Nolan Daniel O'Brien
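The client-side behavior, holding requests to a backed-off endpoint in an ordered queue until the back-off time elapses, can be sketched as below. The endpoint name, back-off duration, and flush-on-next-send behavior are illustrative assumptions, not details from the patent:

```python
import time

class BackoffClient:
    def __init__(self):
        self.backoff_until = {}  # endpoint -> monotonic time until which to hold
        self.backoff_queue = {}  # endpoint -> queued requests, in initiation order

    def handle_error(self, endpoint, backoff_seconds):
        """Called when the platform returns an error for this endpoint."""
        self.backoff_until[endpoint] = time.monotonic() + backoff_seconds
        self.backoff_queue.setdefault(endpoint, [])

    def send(self, endpoint, request):
        if time.monotonic() < self.backoff_until.get(endpoint, 0):
            # Still backing off: hold the request, preserving order.
            self.backoff_queue[endpoint].append(request)
            return "queued"
        # Back-off elapsed: flush queued requests first, then this one.
        batch = self.backoff_queue.pop(endpoint, [])
        batch.append(request)
        return batch

client = BackoffClient()
client.handle_error("/timeline", backoff_seconds=30)
print(client.send("/timeline", "req-1"))  # -> 'queued'
print(client.send("/timeline", "req-2"))  # -> 'queued'
```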
-
Patent number: 11307988
Abstract: A device includes a memory bank. The memory bank includes data portions of a first way group. The data portions of the first way group include a data portion of a first way of the first way group and a data portion of a second way of the first way group. The memory bank further includes data portions of a second way group. The device further includes a configuration register and a controller configured to individually allocate, based on one or more settings in the configuration register, the first way and the second way to one of an addressable memory space and a data cache.
Type: Grant
Filed: October 15, 2019
Date of Patent: April 19, 2022
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Kai Chirca, Matthew David Pierson
-
Patent number: 11301293
Abstract: A job scheduler system includes one or more hardware processors, a memory including a job group queue stored in the memory, and a job scheduler engine configured to create a first job group in the job group queue, the first job group including a generation counter having an initial value, receive a first request to steal the first job group, determine a state of the first job group based at least in part on the generation counter, the state indicating that the first job group is available to steal, based on the determining the state of the first job group, atomically increment the generation counter, thereby making the first job group unavailable for stealing, and alter an execution order of the first job group ahead of at least one other job group in the job group queue.
Type: Grant
Filed: January 16, 2020
Date of Patent: April 12, 2022
Assignee: Unity IPR ApS
Inventor: Benoit Sevigny
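The generation-counter handshake, where a steal succeeds only by bumping the counter, so a group can be claimed at most once, can be sketched as below. The even/odd encoding of "available" vs "claimed" is an invented simplification; a real scheduler would use an atomic compare-and-swap across threads:

```python
class JobGroup:
    def __init__(self, name):
        self.name = name
        self.generation = 0  # even = available to steal, odd = claimed

    def try_steal(self):
        """Claim the group if it is still available.

        Incrementing the generation makes the group unavailable to any
        later stealer; in practice this would be an atomic increment.
        """
        if self.generation % 2 == 0:
            self.generation += 1
            return True
        return False

group = JobGroup("physics-jobs")
print(group.try_steal())  # -> True: first stealer claims the group
print(group.try_steal())  # -> False: generation says it is already claimed
```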
-
Patent number: 11301340
Abstract: Distributed processors and methods for compiling code for execution by distributed processors are disclosed. In one implementation, a distributed processor may include a substrate; a memory array disposed on the substrate; and a processing array disposed on the substrate. The memory array may include a plurality of discrete memory banks, and the processing array may include a plurality of processor subunits, each one of the processor subunits being associated with a corresponding, dedicated one of the plurality of discrete memory banks. The distributed processor may further include a first plurality of buses, each connecting one of the plurality of processor subunits to its corresponding, dedicated memory bank, and a second plurality of buses, each connecting one of the plurality of processor subunits to another of the plurality of processor subunits.
Type: Grant
Filed: December 4, 2020
Date of Patent: April 12, 2022
Assignee: NeuroBlade Ltd.
Inventors: Elad Sity, Eliad Hillel
-
Patent number: 11301445
Abstract: A graph-based program specification includes: a plurality of components, each corresponding to a processing task and including one or more ports for sending or receiving one or more data elements; and one or more links, each connecting an output port of an upstream component of the plurality of components to an input port of a downstream component of the plurality of components. Prepared code is generated representing subsets of the plurality of components, including: identifying a plurality of subset boundaries between components in different subsets based at least in part on characteristics of linked components; forming the subsets based on the identified subset boundaries; and generating prepared code for each formed subset that when used for execution by a runtime system causes processing tasks corresponding to the components in that formed subset to be performed according to information embedded in the prepared code for that formed subset.
Type: Grant
Filed: December 3, 2019
Date of Patent: April 12, 2022
Assignee: Ab Initio Technology LLC
Inventors: Craig W. Stanfill, Richard Shapiro, Stephen A. Kukolich
-
Patent number: 11294821
Abstract: A write-back cache device of an embodiment includes a first storage device capable of storing n pieces of unit data in each of a plurality of cache lines, a second storage device configured to store state instruction data in each of the plurality of cache lines, and a cache controller configured to control inputting to and outputting from the first and second storage devices. The state instruction data has a first value when data in a cache line is not different from data in a main memory, has a second value when two or more pieces of unit data are different from data in the main memory, or has a third value when only one piece of unit data is different from data in the main memory.
Type: Grant
Filed: February 26, 2021
Date of Patent: April 5, 2022
Assignees: KABUSHIKI KAISHA TOSHIBA, TOSHIBA ELECTRONIC DEVICES & STORAGE CORPORATION
Inventor: Nobuaki Sakamoto
-
Patent number: 11287872
Abstract: Systems and methods for multi-thread power limiting via a shared limit estimate power consumed in a processing core on a thread-by-thread basis by counting how many power events occur in each thread. Power consumed by each thread is approximated based on the number of power events that have occurred. Power consumed by individual threads is compared to a shared power limit derived from a sum of the power consumed by all threads. Threads that are above the shared power limit are stalled while threads below the shared power limit are allowed to continue without throttling. In this fashion, the most power intensive threads are throttled to stay below the shared power limit while still maintaining performance.
Type: Grant
Filed: March 25, 2020
Date of Patent: March 29, 2022
Assignee: Qualcomm Incorporated
Inventors: Eric Wayne Mahurin, Vijay Kiran Kalyanam
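The throttling rule, approximate per-thread power from event counts, derive a shared limit from the total, and stall the threads above it, can be sketched as below. The per-event energy weight and the "average times headroom" limit are invented stand-ins for whatever derivation the hardware actually uses:

```python
def threads_to_stall(power_events, energy_per_event, headroom=1.0):
    """Decide which threads to stall under a shared power limit.

    power_events: maps thread id -> count of power events observed
    The shared limit here is the average per-thread power times a
    headroom factor; threads whose estimate exceeds it are stalled.
    """
    power = {t: n * energy_per_event for t, n in power_events.items()}
    shared_limit = headroom * sum(power.values()) / len(power)
    return sorted(t for t, p in power.items() if p > shared_limit)

events = {"t0": 120, "t1": 40, "t2": 20, "t3": 60}
print(threads_to_stall(events, energy_per_event=0.5))
# -> ['t0']  (average per-thread power is 30.0; only t0 exceeds it)
```

Stalling only the heaviest threads keeps aggregate power near the limit while the lighter threads keep running at full speed.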
-
Patent number: 11288569
Abstract: A vehicular driving assistance system includes an exterior viewing camera disposed at a vehicle and an ECU disposed at the vehicle for processing captured image data to detect an object exterior of the vehicle. The ECU performs processing tasks for multiple vehicle systems, including at least (i) a headlamp control system, (ii) a collision avoidance system and (iii) a lane departure warning system. Responsive to determination at the ECU that one of the multiple vehicle systems requires safety critical processing, (i) processing for that vehicle system is determined at the ECU to be a higher priority task, (ii) the ECU performs safety critical processing for that higher priority task and (iii) lower priority processing tasks are shifted from the ECU to other processors within the vehicle so that the ECU maximizes safety critical processing for that higher priority task.
Type: Grant
Filed: May 11, 2020
Date of Patent: March 29, 2022
Assignee: MAGNA ELECTRONICS INC.
Inventor: John Lu
-
Patent number: 11269746
Abstract: A method performed by a computing device having memory is provided. The method includes (a) detecting corruption in a first page description block (PDB) of a plurality of PDBs stored in sequence in the memory, each PDB storing a set of page descriptors (PDs) that point to pages of data sequentially stored in the memory that are part of a single transaction, PDBs that represent the same transaction being contiguous within the sequence; (b) searching for a second PDB of the plurality of PDBs, the second PDB satisfying the following criteria: (1) it is not corrupted, and (2) it represents a same transaction as the first PDB; and (c) reconstructing the first PDB using the second PDB. An apparatus, system, and computer program product for performing a similar method are also provided.
Type: Grant
Filed: January 22, 2021
Date of Patent: March 8, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Edward Zhao, Socheavy Heng, Sihang Xia, Xinlei Xu, Vamsi K. Vankamamidi
-
Patent number: 11256547
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, that facilitate efficient allocation of computing resources of a cloud computing environment to job requests. Methods include receiving multiple job requests and sorting these job requests into one or more categories that include job requests with a same or similar set of job attributes. Methods include allocating a first number of computing resources of the compute farm to one or more job requests in each category. Methods include determining an allocation rate at which the first number of computing resources are allocated to the one or more job requests in each category. Methods include determining a remaining number of job requests in each category and allocating a second number of computing resources of the compute farm to the remaining number of job requests in each category based on the allocation rate.
Type: Grant
Filed: May 13, 2020
Date of Patent: February 22, 2022
Assignee: Altair Engineering, Inc.
Inventor: Andrea Casotto
-
Patent number: 11249801
Abstract: A job scheduling system includes a primary job scheduler and a secondary scheduling gatekeeper. The primary job scheduler provides primary scheduling primitives. The primary job scheduler is configured to activate a first job on an activation date determined based on a primary scheduling definition of the first job, and execute a secondary scheduling gatekeeper to evaluate whether a target program associated with the first job is executed during the activation. The gatekeeper provides enhanced scheduling primitives that include scheduling primitives not in the primary scheduling primitives. The gatekeeper is configured to evaluate a secondary scheduling definition of the first job to determine whether the first job should continue to execute, and to return the enhanced scheduling result to the primary job scheduler. The secondary scheduling definition is configured using the set of enhanced scheduling primitives. The system causes the execution of the target program based on the result.
Type: Grant
Filed: April 10, 2020
Date of Patent: February 15, 2022
Assignee: MASTERCARD INTERNATIONAL INCORPORATED
Inventor: Gokulakrishnan Seshiah
-
Patent number: 11243808
Abstract: An information processing apparatus includes a memory and a processor. The memory stores a first queue in which a newly generated task is registered, and a second queue in which a thread in an executable state among threads assigned to the task is registered. The processor performs a process including: judging execution priority of a second task registered in the first queue and of a second thread registered in the second queue when execution of a first task by a first thread ends, retrieving, if it is judged that the second thread is to be executed first, the second thread from the second queue and executing a task, to which the second thread is assigned, by the second thread, and retrieving, if it is judged that the second task is to be executed first, the second task from the first queue and executing the second task by the first thread.
Type: Grant
Filed: March 3, 2020
Date of Patent: February 8, 2022
Assignee: FUJITSU LIMITED
Inventor: Munenori Maeda
-
Patent number: 11237864
Abstract: Methods and systems for improving the performance of a distributed job scheduler using job self-scheduling and job stealing are described. The distributed job scheduler may schedule jobs to be run among data storage nodes within a cluster. Each node in the cluster may make a localized decision regarding which jobs should be executed by the node by periodically polling candidate jobs from a table of candidate jobs stored using a distributed metadata store. Upon completion of a job, the job may self-schedule another instance of itself if the next instance of the job should be run before the next polling of candidate jobs by the node that ran the completed job. The node may attempt to steal one or more jobs from a second node within the cluster if a job queue length for a job queue associated with the node falls below a queue length threshold.
Type: Grant
Filed: February 6, 2018
Date of Patent: February 1, 2022
Assignee: Rubrik, Inc.
Inventor: Fabiano Botelho
-
Patent number: 11226838
Abstract: A method for managing AI components installed in containers is provided. The container-based component management method creates a container, installs at least one component selected from a plurality of components in the container, and manages the components installed in the container. Accordingly, the execution priorities of the AI components installed in the containers can be managed and operated, such that degradation of system performance and frequent error occurrence can be prevented.
Type: Grant
Filed: December 26, 2018
Date of Patent: January 18, 2022
Assignee: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
Inventors: Ki Man Jeon, Jae Gi Son
-
Patent number: 11210134
Abstract: A computer-implemented method for translating file system operations to object store operations may include steps to: receive a plurality of file system operations for operating on files in a file system; determine corresponding objects and object store operations in an object store for the files and the file system operations; determine an order of the object store operations based on the times at which the file system operations were received in the file system; determine dependencies of the object store operations, and assign the object store operations to a first queue based on the order and dependencies; determine the priority of the object store operations, and transfer an entry containing an object store operation with that priority from the first queue to a second queue; and execute the object store operations in parallel and asynchronously based on the organization of the object store operations in the first and second queues.
Type: Grant
Filed: December 27, 2016
Date of Patent: December 28, 2021
Assignee: Western Digital Technologies, Inc.
Inventors: Bruno Keymolen, Wim Michel Marcel De Wispelaere
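A sketch of the two-queue organization described in this abstract: operations sit in a first queue in arrival order, priority operations are transferred to a second queue, and workers drain the second queue first. Dependency tracking is elided, and the `priority` flag is an assumed marker.

```python
from collections import deque

def organize_ops(ops):
    """Place object store ops in a first queue in arrival order, then
    transfer ops flagged as priority into a second queue."""
    first, second = deque(ops), deque()
    for op in list(first):
        if op.get("priority"):
            first.remove(op)
            second.append(op)
    return first, second

def next_op(first, second):
    """Workers drain the priority (second) queue before the first."""
    if second:
        return second.popleft()
    return first.popleft() if first else None
```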
-
Patent number: 11194800
Abstract: Systems, methods, and computer-executable instructions for parallel searching in program synthesis. A task to synthesize in a domain-specific language (DSL) is received. The task is synthesized. Synthesizing the task includes generating sub-goals based on the task. The synthesized task includes a subset of the sub-goals. An estimated completion time for each of the sub-goals, expressed using the DSL, is determined. The sub-goals are scheduled based on the estimated completion time. Some of the sub-goals are scheduled to be executed in parallel. The sub-goals are solved based on the scheduling to synthesize the task in the DSL. The elapsed real time to complete synthesizing the task is reduced compared to scheduling the sub-goals in an order based on sub-goal generation.
Type: Grant
Filed: April 26, 2018
Date of Patent: December 7, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sumit Gulwani, Abhishek Udupa, Michael Vollmer
-
Patent number: 11194759
Abstract: A priority queue including an order of local data relocation operations to be performed by a plurality of solid-state storage devices is maintained. An indication of a new local data relocation operation is received from a solid-state storage device of the plurality of solid-state storage devices for data stored at the solid-state storage device, the indication including information associated with the data. The new local data relocation operation is inserted into a position in the order of the priority queue based on the information associated with the data.
Type: Grant
Filed: March 11, 2020
Date of Patent: December 7, 2021
Assignee: Pure Storage, Inc.
Inventors: Sankara Vaideeswaran, Hari Kannan, Gordon James Coleman
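A minimal sketch of such a cross-device relocation priority queue using Python's `heapq`. The ordering policy (older data relocates first) is an illustrative assumption, standing in for whatever "information associated with the data" the claims cover.

```python
import heapq
import itertools

_counter = itertools.count()  # tie-breaker so equal priorities stay FIFO
relocation_queue = []         # shared priority queue across all devices

def insert_relocation(device_id, block, data_age_days):
    """Insert a new local relocation operation at a position determined
    by information about the data (here: its age)."""
    priority = -data_age_days  # min-heap: oldest data pops first
    heapq.heappush(relocation_queue, (priority, next(_counter), device_id, block))

def next_relocation():
    """Pop the highest-priority relocation as (device_id, block)."""
    return heapq.heappop(relocation_queue)[2:] if relocation_queue else None
```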
-
Patent number: 11188348
Abstract: Methods, systems, and computer program products for hardware device selection in a computing environment are provided. Aspects include receiving, by a processor, a request to execute a programming code, wherein the processor is operating in a hybrid computing environment comprising a plurality of hardware devices. A performance model associated with the programming code is obtained by the processor. Runtime data associated with the programming code is obtained by the processor. The runtime data is fed into the performance model to determine an execution cost for executing the programming code on each of the plurality of hardware devices, and a target hardware device is selected from the plurality of hardware devices based on the execution costs.
Type: Grant
Filed: August 31, 2018
Date of Patent: November 30, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Artem Chikin, Ettore Tiotto, Jose N. Amaral, Karim Ali
-
Patent number: 11182205
Abstract: An apparatus includes multiple processors, a classifier and queue management logic. The classifier is configured to classify tasks, which are received for execution by the processors, into multiple processor queues, each processor queue associated with a single processor or thread, and configured to temporarily store task entries that represent the tasks, and to send the tasks for execution by the associated processors. The queue management logic is configured to set, based on queue-lengths of the queues, an affinity strictness measure that quantifies a strictness with which the tasks of a same classified queue are to be processed by a same processor, and to assign the task entries to the queues while complying with the affinity strictness measure.
Type: Grant
Filed: January 2, 2019
Date of Patent: November 23, 2021
Assignee: MELLANOX TECHNOLOGIES, LTD.
Inventors: Amir Rosen, Tsofia Eshel
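A sketch of a queue-length-driven affinity strictness measure as described here: short queues keep strict affinity, long queues relax it so tasks can migrate to less-loaded processors. The linear mapping, the 0.5 cutoff, and the flow-to-queue hash are all illustrative assumptions.

```python
def affinity_strictness(queue_lengths, max_len=16):
    """Map current queue lengths to a strictness in [0, 1]: 1.0 means
    strict per-flow affinity, 0.0 means free migration."""
    longest = max(queue_lengths, default=0)
    return max(0.0, 1.0 - longest / max_len)

def assign(task_flow, queues, strictness):
    """Assign a task entry to its affine ('home') queue when strictness
    is high, otherwise to the shortest queue."""
    home = task_flow % len(queues)
    if strictness >= 0.5:
        return home
    return min(range(len(queues)), key=lambda i: len(queues[i]))
```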
-
Patent number: 11182108
Abstract: Embodiments of the present disclosure relate to a memory system, a memory controller, and an operation method. The present disclosure may divide user data and map data corresponding to the user data into data segments, may input the data segments into N virtual die queues, and may program them into a memory device, wherein a user data segment input into a virtual die queue is programmed according to two program schemes, thereby quickly programming the user data and the map data in the memory device and quickly updating the map data in a map cache.
Type: Grant
Filed: March 13, 2020
Date of Patent: November 23, 2021
Assignee: SK hynix Inc.
Inventors: Young Guen Choi, Dong Ham Yim, Dae Hoon Jang, Young Hoon Cha
-
Patent number: 11175409
Abstract: In a method for accurately estimating gait characteristics of a user, first parameters indicative of user movement, including a GNSS-derived speed and step count, are monitored. Values of the first parameters are processed to determine values of second parameters indicative of movement of the user. The processing includes using values of at least one monitored parameter to generate one or more inputs to an estimator (e.g., Kalman filter) having the second parameters as estimator states. At least two of the second parameters are collectively indicative of a mapping between step frequency and step length of the user. A graphical user interface may display values of at least one of the second parameters, and/or at least one parameter derived from one or more of the second parameters.
Type: Grant
Filed: December 30, 2019
Date of Patent: November 16, 2021
Assignee: GOOGLE LLC
Inventors: Frank Van Diggelen, Ke Xiao, Gustavo Moura, Wyatt Riley
-
Patent number: 11175963
Abstract: A method and a device for distributing partitions of a sequence of partitions on the cores of a multicore processor are provided. The method makes it possible to identify parameters characterizing the hardware architecture of a multicore processor, and parameters characterizing an initial ordering of the partitions of a sequence; and then to profile and classify each partition of the sequence in order to assign the execution of each partition to a core of the multicore processor while maintaining the initial sequential ordering of the partitions.
Type: Grant
Filed: July 26, 2017
Date of Patent: November 16, 2021
Assignee: THALES
Inventors: Jimmy Le Rhun, Daniel Gracia Perez, Sylvain Girbal
-
Patent number: 11159450
Abstract: A method for nonintrusive network load generation may include determining available resources in a distributed computing system, where the distributed computing system includes a plurality of computing devices and a target deployment. Based on an amount of available resources between the target deployment and a plurality of source computing devices, the plurality of source computing devices may be selected to generate a network load directed from the plurality of source computing devices to the target deployment. The plurality of source computing devices may be a subset of the plurality of computing devices in the distributed computing system. A network-traffic generator service may be provided to the plurality of source computing devices in order to generate the network load directed from the plurality of source computing devices to the target deployment. The performance of the distributed computing system in response to the generated network load may be monitored.
Type: Grant
Filed: March 2, 2020
Date of Patent: October 26, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Marwan E. Jubran, Aleksandr Mikhailovich Gershaft, Weiping Hu
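A sketch of the source-selection step in this abstract: pick a subset of devices, based on available capacity toward the target, sufficient to generate the requested load. The greedy largest-capacity-first order and the capacity model are assumptions.

```python
def select_sources(devices, target, required_load):
    """Select a subset of `devices` (name -> available capacity toward
    the target) that can jointly generate `required_load`, excluding the
    target deployment itself. Returns the chosen names, or None if the
    system cannot cover the load nonintrusively."""
    chosen, total = [], 0
    for dev, capacity in sorted(devices.items(), key=lambda kv: -kv[1]):
        if dev == target:
            continue  # never generate load *from* the target
        chosen.append(dev)
        total += capacity
        if total >= required_load:
            return chosen
    return None
```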
-
Patent number: 11157311
Abstract: Methods, apparatus, systems, and machine-readable storage media of an edge computing device which is enabled to access and select the use of local or remote acceleration resources for edge computing processing are disclosed. In an example, an edge computing device obtains first telemetry information that indicates availability of local acceleration circuitry to execute a function, and obtains second telemetry information that indicates availability of a remote acceleration function to execute the function. An estimated time (and cost, or other identifiable or estimable considerations) to execute the function at the respective location is identified. The use of the local acceleration circuitry or the remote acceleration resource is selected based on the estimated time and other appropriate factors in relation to a service level agreement.
Type: Grant
Filed: September 27, 2019
Date of Patent: October 26, 2021
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Ned M. Smith, Thomas Willhalm, Timothy Verrall
-
Patent number: 11150951
Abstract: A computer-implemented method, a computer system and a computer program product for releasable resource-based preemptive scheduling. One or more currently running workloads are determined to be preempted by a pending workload. Releasable resources from the one or more currently running workloads meet the required resources of the pending workload. The pending workload is dispatched so that it uses at least part of the releasable resources from the one or more currently running workloads to run.
Type: Grant
Filed: November 20, 2018
Date of Patent: October 19, 2021
Assignee: International Business Machines Corporation
Inventors: Xiu Qiao Li, Zhaohui Ding, Xun Pan, Rong Song Shen, Michael Spriggs
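A minimal sketch of the selection step in this abstract: choose running workloads whose releasable resources together cover the pending workload's requirement. The greedy largest-first order is an assumption; the patent does not fix a selection policy.

```python
def find_preemption_set(running, pending_need):
    """Pick currently running workloads (name -> releasable resource
    units) to preempt so that the freed resources meet `pending_need`.
    Returns the chosen names, or None if even preempting everything
    would not suffice."""
    chosen, freed = [], 0
    for name, releasable in sorted(running.items(), key=lambda kv: -kv[1]):
        if freed >= pending_need:
            break
        chosen.append(name)
        freed += releasable
    return chosen if freed >= pending_need else None
```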
-
Patent number: 11138043
Abstract: Systems and methods for outlier mitigation in safety-critical systems are provided. In one embodiment, a computer system comprises: a processor comprising one or more processing cores; a scheduling function that schedules the execution of applications, the applications each comprising threads; and a contingency budgeting manager (CBM) that defines at least a first pre-determined set of threads from the threads of the applications and assigns a contingency budget pool to the first set of threads. The first set of threads are each scheduled by the scheduling function to execute on a first processing core. The CBM is further configured to monitor execution of each of the threads of the first set of threads to identify when a first thread is an execution time outlier. When the CBM determines that the first thread is an execution time outlier, it allocates additional thread execution time from the contingency budget pool to the first thread.
Type: Grant
Filed: May 23, 2019
Date of Patent: October 5, 2021
Assignee: Honeywell International s.r.o.
Inventors: Pavel Zaykov, Larry James Miller, Srivatsan Varadarajan, Chittaranjan Kashiwar
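A sketch of the contingency budgeting manager idea: a shared budget pool for a set of threads, drawn on when a thread's measured execution time is an outlier. The outlier test (fixed threshold) and grant size are illustrative assumptions; a real CBM would use the claimed detection logic.

```python
class ContingencyBudgetManager:
    """Shared contingency budget pool for one pre-determined thread set,
    all scheduled on the same core."""

    def __init__(self, pool_us, outlier_threshold_us, grant_us):
        self.pool = pool_us                  # remaining pool, microseconds
        self.threshold = outlier_threshold_us
        self.grant = grant_us                # extra time per outlier grant

    def on_measurement(self, thread_id, exec_time_us):
        """Monitor one execution measurement; return the extra execution
        time allocated (0 if not an outlier or the pool is exhausted)."""
        if exec_time_us <= self.threshold or self.pool < self.grant:
            return 0
        self.pool -= self.grant
        return self.grant
```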
-
Patent number: 11140005
Abstract: A communication system includes a master node and a slave node connected to the master node via a bus. The master node transmits a header including identification information assigned to a designated function in accordance with a schedule preset individually for the designated function. The slave node has a plurality of functions for each of which a response different from each other is transmitted. The slave node returns the response related to the designated function upon receipt of the header when the designated function assigned with the identification information included in the header is one of the plurality of functions of the slave node.
Type: Grant
Filed: November 12, 2019
Date of Patent: October 5, 2021
Assignee: DENSO CORPORATION
Inventor: Kenji Kato
-
Patent number: 11133991
Abstract: Consolidating events to execute objects to extract, transform, and load data from source systems to a structured data store. An event manager process executing on a server runtime utilizes one or more event properties to determine which events can be consolidated to reduce unnecessary processor utilization.
Type: Grant
Filed: August 6, 2019
Date of Patent: September 28, 2021
Assignee: AVEVA SOFTWARE, LLC
Inventors: Ravi Kumar Herunde Prakash, Sami Majed Abbushi
-
Patent number: 11106495
Abstract: Various embodiments are generally directed to techniques for partitioning parallelizable tasks into subtasks for processing. Some embodiments are particularly directed to dynamically determining chunk sizes to use in partitioning tasks, such as parallel loops or divide and conquer algorithm tasks, into subtasks based on the probability of a priority task source introducing a high-priority task. For example, a measurement signal received from a probe indicating an operational characteristic associated with a priority task source may be used to generate an estimate of the probability of a priority task source introducing a high-priority task. In such examples, the estimate may be used to determine a chunk size for a parallelizable task, and the parallelizable task may be partitioned into a plurality of subtasks based on the chunk size, and the subtasks may be assigned, for execution, to at least one task queue in a task pool.
Type: Grant
Filed: June 13, 2019
Date of Patent: August 31, 2021
Assignee: INTEL CORPORATION
Inventors: Michael Voss, Pablo Reble, Aleksei Fedotov
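A sketch of the core relation in this abstract: the more likely a high-priority task is to arrive, the smaller the chunks, so a worker finishes its current subtask and can yield sooner. The inverse model `size ≈ 1/p` and the bounds are illustrative assumptions, not the claimed formula.

```python
def chunk_size(total_iters, p_high_priority, min_chunk=1, max_chunk=1024):
    """Choose a chunk size for partitioning a parallel loop of
    `total_iters` iterations, given an estimated probability per chunk
    that a high-priority task arrives."""
    if p_high_priority <= 0:
        # No preemption pressure: use the largest chunks allowed.
        return min(max_chunk, total_iters)
    size = int(1 / p_high_priority)  # higher probability -> smaller chunks
    return max(min_chunk, min(size, max_chunk, total_iters))
```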
-
Patent number: 11102283
Abstract: A method comprising discovering workload attributes and identifying dependencies, receiving utilization performance measurements including memory utilization measurements of at least a subset of workloads, grouping workloads based on the workload attributes, the dependencies, and the utilization performance measurements into affinity groups, determining at least one representative synthetic workload for each affinity group, each representative synthetic workload including a time slice of a predetermined period of time when there are maximum performance values for any number of utilization performance measurements among virtual machines of that particular affinity group, determining at least one cloud service provider (CSP)'s cloud services based on performance of the representative synthetic workloads, and generating a report for at least one of the representative synthetic workloads, the report identifying the at least one of the representative synthetic workloads and the at least one CSP's cloud services inclu
Type: Grant
Filed: February 18, 2020
Date of Patent: August 24, 2021
Assignee: Virtual Instruments Worldwide, Inc.
Inventors: Rick Haggart, Rangaswamy Jagannathan, Michael Bello, Ricardo A. Negrete, Elizaveta Tavastcherna, Vitoo Suwannakinthorn
-
Patent number: 11095570
Abstract: The described technology is generally directed towards automatically scaling segments of a stream of data. According to an embodiment, a system can comprise a memory that can store computer executable components, and a processor that can execute the computer executable components stored in the memory. The computer executable components can comprise a predictor that can predict a future communication load of a stream of data provided by a stream provider device, the stream comprising segments of a size. The computer executable components can further comprise a size changer that can receive an indication that a present communication load of the stream of data has transitioned a threshold, and change the size of a segment of the segments based on the indication and the future communication load of the stream of data.
Type: Grant
Filed: February 21, 2019
Date of Patent: August 17, 2021
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Jeff Wu, Ben Wang
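A sketch of the predictor-plus-size-changer interaction described here: resizing is triggered by the present load crossing a threshold, but the new segment count is sized from the *predicted* future load. The ceiling-division sizing and per-segment capacity model are assumptions for illustration.

```python
def scale_segments(current_count, predicted_load, capacity_per_segment,
                   threshold_crossed):
    """Return the new segment count for the stream. Only act when the
    present load has transitioned the threshold; size the stream for the
    predicted future load rather than the present one."""
    if not threshold_crossed:
        return current_count
    needed = -(-predicted_load // capacity_per_segment)  # ceiling division
    return max(1, needed)  # a stream always keeps at least one segment
```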