Process Scheduling Patents (Class 718/102)
-
Timer task ownership determination in a cluster based on a common cluster member selection algorithm
Patent number: 11354154
Abstract: Distributed timer task execution management is disclosed. A cluster member generates a first timer task that can be executed on any cluster member of a plurality of cluster members including the first cluster member that composes a cluster. A first timer task schedule that identifies at least one future point in time at which the first timer task is to be executed is generated. A second cluster member of the plurality of cluster members is selected as a cluster member owner for the first timer task that is to schedule the first timer task and to execute the first timer task at the at least one future point in time. The first timer task and the first timer task schedule are transferred to the second cluster member.
Type: Grant
Filed: August 20, 2019
Date of Patent: June 7, 2022
Assignee: Red Hat, Inc.
Inventors: Paul M. Ferraro, Radoslav Husar
-
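The scheme above works because every member applies the same deterministic selection function, so all members independently agree on a task's owner without extra coordination. A minimal sketch of that idea, using rendezvous hashing as an assumed stand-in for the patent's unspecified common selection algorithm (the function name, node names, and task id are all hypothetical):

```python
import hashlib

def select_owner(task_id: str, members: list[str]) -> str:
    """Every cluster member runs this same deterministic function,
    so all members agree on the task's owner."""
    def weight(member: str) -> int:
        digest = hashlib.sha256(f"{task_id}:{member}".encode()).hexdigest()
        return int(digest, 16)
    # Rendezvous hashing: the member with the highest weight wins.
    return max(members, key=weight)

members = ["node-a", "node-b", "node-c"]
owner = select_owner("timer-task-42", members)
# Any member computing this independently gets the same answer.
assert all(select_owner("timer-task-42", members) == owner for _ in range(3))
```

If the chosen owner leaves the cluster, rerunning the same function over the surviving members yields a new owner that, again, every member agrees on.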
Patent number: 11356334
Abstract: A method is provided for sparse communication in a parallel machine learning environment. The method includes determining a fixed communication cost for a sparse graph to be computed. The sparse graph is (i) determined from a communication graph that includes all the machines in a target cluster of the environment, and (ii) represents a communication network for the target cluster having (a) an overall spectral gap greater than or equal to a minimum threshold, and (b) certain information dispersal properties such that an intermediate output from a given node disperses to all other nodes of the sparse graph in lowest number of time steps given other possible node connections. The method further includes computing the sparse graph, based on the communication graph and the fixed communication cost. The method also includes initiating a propagation of the intermediate output in the parallel machine learning environment using a topology of the sparse graph.
Type: Grant
Filed: May 15, 2018
Date of Patent: June 7, 2022
Inventors: Asim Kadav, Erik Kruus
-
Patent number: 11347566
Abstract: Methods and systems are provided for supporting operation of a plurality of software plugins of an IHS (Information Handling System). Incoming plugin commands are received and stored to a queue of a plurality of progressively weighted queues. The weighted queue is selected for storing the incoming plugin command based on a time constraint associated with the command. A proximate command is selected for processing from a queue of the plurality of weighted queues based on a weighted time for processing the proximate command. A recipient plugin of the proximate command is determined. Any plugin groups that the recipient is a member of are identified. The plugins of the first plugin group, including the recipient plugin, are activated to allocate use of IHS resources to the activated plugin.
Type: Grant
Filed: February 6, 2020
Date of Patent: May 31, 2022
Assignee: Dell Products, L.P.
Inventors: Vivek Viswanathan Iyer, Srikanth Kondapi, Abhinav Gupta
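One plausible reading of the weighted-queue mechanism is: route each command to a queue by its time constraint, then dequeue whichever command has the smallest deadline scaled by its queue's weight. The sketch below is an illustrative guess at such a scheme, not Dell's implementation; the weights and thresholds are invented:

```python
import heapq
import itertools

# Three progressively weighted queues; the weights and the mapping
# from time constraint to queue are assumptions for illustration.
QUEUE_WEIGHTS = [1, 2, 4]  # lower weight => more urgent queue

def queue_for(deadline_s: float) -> int:
    """Pick a queue index from the command's time constraint."""
    if deadline_s < 1.0:
        return 0
    if deadline_s < 10.0:
        return 1
    return 2

class WeightedQueues:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # FIFO tie-breaker for equal keys

    def put(self, command: str, deadline_s: float):
        q = queue_for(deadline_s)
        # Weighted time: the deadline scaled by the queue's weight.
        key = deadline_s * QUEUE_WEIGHTS[q]
        heapq.heappush(self._heap, (key, next(self._seq), command))

    def next_command(self) -> str:
        """Return the proximate command: smallest weighted time."""
        return heapq.heappop(self._heap)[2]

qs = WeightedQueues()
qs.put("flush-cache", deadline_s=30.0)   # weighted time 120
qs.put("hotkey-event", deadline_s=0.5)   # weighted time 0.5
assert qs.next_command() == "hotkey-event"
```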
-
Patent number: 11340952
Abstract: A function performance trigger for a cloud computing system is disclosed. A function is to be run in response to the trigger. A template for a function in the cloud computing system is generated. The trigger is defined for the function based upon a performance parameter of the cloud computing system.
Type: Grant
Filed: November 11, 2019
Date of Patent: May 24, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventor: Hesham Yassin
-
Patent number: 11340940
Abstract: An application may be migrated from a first to a second computing system. Configuration parameter values associated with executing the migrated application on the second computing system may be determined by computational optimization based on configuration parameter values and/or monitored performance metrics associated with the application on the first computing system. Configuration parameter values associated with executing the migrated application on the second computing system may be determined by performing simulations of the migrated application configured for execution on the second computing system based on multiple sets of configuration parameter values, monitoring performance metrics associated with the simulations, and performing computational optimization based on the multiple sets of configuration parameter values and monitored performance metrics associated with the simulations.
Type: Grant
Filed: July 2, 2020
Date of Patent: May 24, 2022
Assignee: Bank of America Corporation
Inventors: Anuja Savant, Pramodh Siril Rao Chennamaneni, Sasidhar Purushothaman, Alla Piltser, Zaheeruddin Mohammed
-
Patent number: 11334391
Abstract: A method of adjusting a set of resources allocated for a job includes analyzing, by a job tuning module, an intermediate result of a job. Processing the job includes processing a first iteration of a task and a second iteration of the same task. Additionally, the intermediate result is a result of the first iteration of the task, and the job is allocated a first set of resources during processing of the first iteration of the task. The method also includes sending a notification to a scheduler that causes the scheduler to adjust the first set of resources allocated to the job to a second set of resources for processing the second iteration of the task. The job may be allocated the second set of resources during processing of the second iteration of the task.
Type: Grant
Filed: April 17, 2017
Date of Patent: May 17, 2022
Assignee: RED HAT, INC.
Inventors: Huamin Chen, Jay Vyas
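The tuning loop described here, inspect the first iteration's intermediate result and resize the allocation before the second iteration, can be sketched with a simple rule. The `adjust_allocation` function and its thresholds are hypothetical, not Red Hat's actual policy:

```python
def adjust_allocation(current_cpus: int, intermediate_result: dict) -> int:
    """Assumed tuning rule: grow the allocation if the first iteration
    ran slower than its target, shrink it if it ran much faster."""
    ratio = intermediate_result["elapsed_s"] / intermediate_result["target_s"]
    if ratio > 1.2:
        return min(current_cpus * 2, 64)   # too slow: double, capped at 64
    if ratio < 0.5:
        return max(current_cpus // 2, 1)   # overprovisioned: halve
    return current_cpus                    # close enough: keep as-is

# First iteration took 30 s against a 10 s target, so the scheduler
# doubles the CPUs for the second iteration of the same task.
assert adjust_allocation(4, {"elapsed_s": 30.0, "target_s": 10.0}) == 8
```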
-
Patent number: 11334627
Abstract: A computer-processor-implemented data processing method comprises: a computer processor executing instances of one or more processing functions, each instance of a processing function having an associated function-call identifier; and in response to initiation of execution by the computer processor of a given processing function instance configured to modify one or more pointers of a partitioned acyclic data structure: the computer processor storing the function-call identifier for that processing function instance in a memory at a storage location associated with the partitioned acyclic data structure; for a memory location which stores data representing a given pointer of the partitioned acyclic data structure, the computer processor defining a period of exclusive access to at least that memory location by applying and subsequently releasing an exclusive tag for at least that memory location; and the computer processor selectively processing the given pointer during the period of exclusive access in dependence
Type: Grant
Filed: July 12, 2019
Date of Patent: May 17, 2022
Assignee: Arm Limited
Inventor: Brendan James Moran
-
Patent number: 11321265
Abstract: A method of transferring data from a first bus to a second bus across an asynchronous interface using an asynchronous bridge. The bridge comprises a bus slave module, connected to the first bus, comprising a forward-channel initiator in a first power and/or clock domain; and a bus master module, connected to the second bus, comprising a forward-channel terminator in a second power and/or clock domain. The forward-channel initiator and terminator are in communication to form a forward lockable mutex for arbitrating access to signals used to transfer data from the first domain to the second domain. If the mutex is locked, a forward data channel is used to transfer data between the domains. Otherwise if the mutex is unlocked, the forward channel initiator toggles a status request signal and the forward channel terminator toggles a status acknowledge signal in response, the mutex thereby becoming locked.
Type: Grant
Filed: June 26, 2019
Date of Patent: May 3, 2022
Assignee: Nordic Semiconductor ASA
Inventor: Berend Dekens
-
Patent number: 11321118
Abstract: In one embodiment, a method includes empirically analyzing, by a computer cluster comprising a plurality of computers, a set of active reservations and a current set of consumable resources belonging to a class of consumable resources. Each active reservation is of a managed task type and comprises a group of one or more tasks requiring access to a consumable resource of the class. The method further includes, based on the empirically analyzing, clocking the set of active reservations each clocking cycle. The method also includes, responsive to the clocking, sorting, by the computer cluster, a priority queue of the set of active reservations.
Type: Grant
Filed: November 30, 2012
Date of Patent: May 3, 2022
Assignee: MessageOne, Inc.
Inventor: Jon Franklin Matousek
-
Patent number: 11321125
Abstract: In a multitask computing system, there are multiple tasks, including a first task, a second task, and a third task, and the first task has a higher priority than the second task and the third task. A method includes raising the priority of the second task that shares a first critical section with the first task and is accessing the first critical section when the first task is blocked due to failure to access the first critical section; determining whether there is a third task that shares a second critical section with the second task and is accessing the second critical section; and raising, when the third task is present, the priority of the third task. The techniques of the present disclosure prevent a low-priority third task from delaying the execution of a second task, thus avoiding the priority inversion caused by the delayed execution of a high-priority first task.
Type: Grant
Filed: December 24, 2019
Date of Patent: May 3, 2022
Assignee: Alibaba Group Holding Limited
Inventors: Lingjun Chen, Bin Wang, Liangliang Zhu, Xu Zeng, Zilong Liu, Junjie Cai
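This is transitive priority inheritance: the boost follows the chain of critical-section holders. A minimal single-threaded model of that chain walk (the `Task` and `Lock` classes are illustrative stand-ins, not the patented system):

```python
class Task:
    def __init__(self, name, priority, blocked_on=None):
        self.name = name
        self.priority = priority
        self.blocked_on = blocked_on  # Lock this task is waiting for, if any

class Lock:
    """Models a critical section; `holder` is the task inside it."""
    def __init__(self, holder=None):
        self.holder = holder

def inherit_priority(blocked: Task):
    """Walk the chain of lock holders, boosting every holder whose
    priority is lower than the blocked high-priority task's."""
    lock = blocked.blocked_on
    while lock is not None and lock.holder is not None:
        holder = lock.holder
        if holder.priority < blocked.priority:
            holder.priority = blocked.priority  # priority inheritance
        lock = holder.blocked_on                # follow on to a third task, etc.

# t1 (prio 10) blocks on cs1 held by t2 (prio 3), which in turn blocks
# on cs2 held by t3 (prio 1): both t2 and t3 are boosted to 10.
cs2 = Lock()
t3 = Task("t3", 1)
cs2.holder = t3
cs1 = Lock()
t2 = Task("t2", 3, blocked_on=cs2)
cs1.holder = t2
t1 = Task("t1", 10, blocked_on=cs1)
inherit_priority(t1)
assert (t2.priority, t3.priority) == (10, 10)
```

Without the second hop, t3 would keep priority 1 and could delay t2, indirectly delaying t1, which is exactly the inversion the abstract describes avoiding.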
-
Patent number: 11321147
Abstract: A technique for determining when it is safe to use scheduler lock-acquiring wakeups to defer quiescent states in real-time preemptible read-copy update (RCU). A determination may be made whether a deferred quiescent-state reporting request that defers the reporting of an RCU quiescent state on behalf of a target computer task is warranted. If so, it may be determined whether a previous deferred quiescent-state reporting request on behalf of the target computer task remains pending. A request may be issued for deferred quiescent-state report processing that reports a deferred quiescent state. The request for deferred quiescent-state report processing may be issued in a manner selected according to a result of the determining whether a previous deferred quiescent-state reporting request remains pending.
Type: Grant
Filed: August 29, 2019
Date of Patent: May 3, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Paul E. McKenney
-
Patent number: 11314870
Abstract: There is provided a method and system for an advanced endpoint protection. With this methodology, when a file is requested to be executed on any endpoint, all intelligence sources would be checked to decide if that file has any known or potential vulnerability associated with it. If there is any information about any known or potential vulnerability, it would be launched inside the secure container to isolate all resource usage of that application from the rest of the known good and secure applications in order to achieve a secure computing environment on an endpoint.
Type: Grant
Filed: March 13, 2018
Date of Patent: April 26, 2022
Inventors: Melih Abdulhayoglu, Ilker Simsir
-
Patent number: 11315007
Abstract: An apparatus to facilitate workload scheduling is disclosed. The apparatus includes one or more clients, one or more processing units to process workloads received from the one or more clients, including hardware resources, and scheduling logic to schedule direct access of the hardware resources to the one or more clients to process the workloads.
Type: Grant
Filed: July 1, 2020
Date of Patent: April 26, 2022
Assignee: Intel Corporation
Inventors: Liwei Ma, Nadathur Rajagopalan Satish, Jeremy Bottleson, Farshad Akhbari, Eriko Nurvitadhi, Chandrasekaran Sakthivel, Barath Lakshmanan, Jingyi Jin, Justin E. Gottschlich, Michael Strikland
-
Patent number: 11307805
Abstract: A disk drive comprises non-volatile rotatable media and a controller operatively coupled to the non-volatile rotatable media. The controller is configured to receive a series of host commands to be executed by the controller and generate a command execution sequence comprising the series of host commands. A task manager, integral or coupled to the controller, is configured to receive a plurality of background tasks comprising at least two priority background tasks to be executed by the controller along with execution of the series of host commands, and insert one or more of the at least two priority background tasks into the command execution sequence while maintaining a specified ratio of priority background task execution and host command execution substantially constant. The controller is configured to execute the command execution sequence with the one or more inserted priority background tasks.
Type: Grant
Filed: May 29, 2020
Date of Patent: April 19, 2022
Assignee: Seagate Technology LLC
Inventors: Xiong Liu, Jin Quan Shen
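Keeping the ratio of background-task to host-command execution "substantially constant" can be modeled by inserting one background task after every N host commands. A toy sketch with an assumed ratio, not Seagate's actual sequencing logic:

```python
def interleave(host_cmds, background_tasks, ratio=4):
    """Build a command execution sequence that inserts one background
    task after every `ratio` host commands, keeping the background/host
    execution ratio roughly constant."""
    sequence, bg = [], iter(background_tasks)
    for i, cmd in enumerate(host_cmds, start=1):
        sequence.append(cmd)
        if i % ratio == 0:
            nxt = next(bg, None)        # no more background work: skip
            if nxt is not None:
                sequence.append(nxt)
    return sequence

seq = interleave([f"host{i}" for i in range(8)], ["refresh", "calibrate"],
                 ratio=4)
assert seq == ["host0", "host1", "host2", "host3", "refresh",
               "host4", "host5", "host6", "host7", "calibrate"]
```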
-
Patent number: 11310324
Abstract: A method, computer program product, and computer system for receiving, at a computing device, information associated with an entity from one or more social media sites. One or more attributes for the information associated with the entity is identified. A relevance profile associated with the one or more attributes is generated. A plurality of posts from the one or more social media sites is identified, wherein at least a portion of the plurality of posts includes at least a portion of the one or more attributes for the information associated with the entity. At least the portion of the plurality of posts is ordered on a display based upon, at least in part, the relevance profile associated with the one or more attributes.
Type: Grant
Filed: February 4, 2013
Date of Patent: April 19, 2022
Assignee: Twitter, Inc.
Inventors: Patrick A. Kinsel, Alexander P. Lambert, Simon S. Yun, Alexander James Jenkins, Jeffrey Lupien, Keh-Li Sheng
-
Patent number: 11307986
Abstract: Systems and methods for dynamically placing data in a hybrid memory structure are provided. A machine learning (ML)-based, adaptive tiered memory system can actively monitor application memory to dynamically place the right data in the right memory tier at the right time. The memory system can use reinforcement learning to perform dynamic tier placement of memory pages.
Type: Grant
Filed: June 10, 2021
Date of Patent: April 19, 2022
Assignee: THE FLORIDA INTERNATIONAL UNIVERSITY BOARD OF TRUSTEES
Inventors: Adnan Maruf, Janki Bhimani, Ashikee Ghosh, Raju Rangaswami
-
Patent number: 11308162
Abstract: Systems and methods are provided for assigning client requests to one or more computer-implemented knowledge/database servers. Each server stores data as a directed acyclic graph of datums connected with a single type of relationship. The system includes a plurality of clients coupled to at least one router, wherein each client includes a graphical user interface and a processor configured to analyze inputted data, a plurality of routers configured to assign requests input through the plurality of clients to a plurality of servers, at least one logger that includes a storage medium and is configured to store the requests, and a plurality of servers configured to perform tasks indicated by the requests.
Type: Grant
Filed: January 15, 2020
Date of Patent: April 19, 2022
Inventor: Ashraf Azmi
-
Patent number: 11307864
Abstract: The disclosure provides a data processing device and method. The data processing device may include: a task configuration information storage unit and a task queue configuration unit. The task configuration information storage unit is configured to store configuration information of tasks. The task queue configuration unit is configured to configure a task queue according to the configuration information stored in the task configuration information storage unit. According to the disclosure, a task queue may be configured according to the configuration information.
Type: Grant
Filed: November 28, 2019
Date of Patent: April 19, 2022
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
Inventors: Tianshi Chen, Lei Zhang, Shaoli Liu
-
Patent number: 11301297
Abstract: A processing system includes at least one core, at least one accelerator function unit (AFU), a microcontroller, and a memory access unit. The AFU and the core share a plurality of virtual addresses to access a memory. The microcontroller is coupled between the core and the AFU. The core develops and stores a task in one of the virtual addresses. The microcontroller analyzes the task and dispatches the task to the AFU. The AFU accesses the virtual address indicating where the task is stored through the memory access unit to execute the task.
Type: Grant
Filed: September 3, 2019
Date of Patent: April 12, 2022
Assignee: SHANGHAI ZHAOXIN SEMICONDUCTOR CO., LTD.
Inventors: Xiaoyang Li, Chen Chen, Zongpu Qi, Tao Li, Xuehua Han, Wei Zhao, Dongxue Gao
-
Patent number: 11301434
Abstract: A data management system (1) for managing a data store (6), the system comprising: a central control module (2) configured to receive a request and generate a task using the request; a state store module (26) coupled to the central control module (2) and configured to store the task generated by the central control module (2), wherein the state store module (26) is further configured to store state information indicative of a state of the data store (6) and configured to output the stored task in response to the state information; and an enactor module (31) which is configured to action the task output from the state store module (26) by generating an enactor output command that at least partly corresponds to the task which, when communicated to the data store (6), causes the data store (6) to perform an action related to data stored in the data store (6).
Type: Grant
Filed: March 23, 2018
Date of Patent: April 12, 2022
Assignee: PIXIT MEDIA LIMITED
Inventors: Jeremy Tucker, John Leedham, Christopher Oates
-
Patent number: 11301291
Abstract: A management client acquires, from a management server, information about a batch task corresponding to a batch task execution instruction received from the management server, divides a task of each stage defined in the batch task into subtasks for respective network devices as execution targets, on a basis of the information about the batch task, executes the subtasks in parallel in the network devices as execution targets, and notifies the management server of an execution result.
Type: Grant
Filed: December 10, 2019
Date of Patent: April 12, 2022
Assignee: Canon Kabushiki Kaisha
Inventor: Shohei Baba
-
Patent number: 11301180
Abstract: An information processing apparatus includes a process request history registration unit that registers at least one of information, which indicates that a current process request is a redo process request, or information, which indicates that a past process request pertaining to a target document is an erroneous process request, in process request history in a case where a process setting for the past process request pertaining to the target document, which is a past document identical or similar to a current document that is a target of the current process request, included in the process request history including the process setting for the past process request and information which indicates the past document that is a target of the past process request, is different from a process setting for the current process request.
Type: Grant
Filed: November 29, 2018
Date of Patent: April 12, 2022
Assignee: FUJIFILM Business Innovation Corp.
Inventor: Kazutaka Saitoh
-
Patent number: 11294934
Abstract: A command processing method to reduce a system delay and system complexity while ensuring system consistency. A server receives from a client a target request that carries a target command. The server uses a current time as a target timestamp of the target request, adds a local associated command corresponding to the target context number and a local conflicted command corresponding to the target command to a target dependency set, and forwards the target request to a replica server. The server updates the target dependency set according to a feedback from the replica server, and stores an updated target dependency set synchronously with the replica server. And the server determines a target execution sequence of the target command and each command in the updated target dependency set.
Type: Grant
Filed: January 11, 2018
Date of Patent: April 5, 2022
Assignee: Huawei Technologies Co., Ltd.
Inventors: Yili Gong, Wentao Ma, Huihua Shi
-
Patent number: 11295204
Abstract: Architectures for multicore neuromorphic systems are provided. In various embodiments, a neural network description is read. The neural network description describes a plurality of logical cores. A plurality of precedence relationships are determined among the plurality of logical cores. Based on the plurality of precedence relationships, a schedule is generated that assigns the plurality of logical cores to a plurality of physical cores at a plurality of time slices. Based on the schedule, the plurality of logical cores of the neural network description are executed on the plurality of physical cores.
Type: Grant
Filed: January 6, 2017
Date of Patent: April 5, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Dharmendra S. Modha
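Scheduling logical cores onto physical cores under precedence relationships is essentially a topological sort packed into fixed-width time slices. A sketch under that assumption (Kahn-style ordering; the input format is invented for illustration, not IBM's network description):

```python
def schedule(preds: dict[str, list[str]], num_physical: int) -> list[list[str]]:
    """Assign logical cores to time slices: a core may only run after
    all of its predecessors have run, and each slice holds at most
    `num_physical` cores (one per physical core)."""
    remaining = {n: set(p) for n, p in preds.items()}
    slices = []
    while remaining:
        # Cores whose predecessors have all been scheduled are ready.
        ready = sorted(n for n, p in remaining.items() if not p)
        if not ready:
            raise ValueError("cycle in precedence relationships")
        batch = ready[:num_physical]       # fill this time slice
        slices.append(batch)
        for n in batch:
            del remaining[n]
        for p in remaining.values():
            p.difference_update(batch)     # mark predecessors satisfied
    return slices

# c and d depend on a; e depends on c; two physical cores available.
plan = schedule({"a": [], "b": [], "c": ["a"], "d": ["a"], "e": ["c"]}, 2)
assert plan == [["a", "b"], ["c", "d"], ["e"]]
```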
-
Patent number: 11294890
Abstract: Systems, methods, and devices for batch ingestion of data into a table of a database. A method includes determining a notification indicating a presence of a user file received from a client account to be ingested into a database. The method includes identifying data in the user file and identifying a target table of the database to receive the data in the user file. The method includes generating an ingest task indicating the data and the target table. The method includes assigning the ingest task to an execution node of an execution platform, wherein the execution platform comprises a plurality of execution nodes operating independent of a plurality of shared storage devices collectively storing database data. The method includes registering metadata concerning the target table in a metadata store after the data has been fully committed to the target table by the execution node.
Type: Grant
Filed: March 26, 2019
Date of Patent: April 5, 2022
Assignee: Snowflake Inc.
Inventors: Jiansheng Huang, Jiaxing Liang, Scott Ziegler, Haowei Yu, Benoit Dageville, Varun Ganesh
-
Patent number: 11288221
Abstract: A graph processing optimization method that addresses problems such as the low computation-to-communication ratio in graph environments, and high communication overhead as well as load imbalance in heterogeneous environments for graph processing. The method reduces communication overhead between accelerators by optimizing graph partitioning so as to improve system scalability.
Type: Grant
Filed: June 9, 2020
Date of Patent: March 29, 2022
Assignee: HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
Inventors: Xiaofei Liao, Fan Zhang, Long Zheng, Hai Jin, Zhiyuan Shao
-
Patent number: 11288092
Abstract: Time delays used in a reconciliation process can be dynamically adjusted. For example, a system can receive a request from a client for a time delay value. The time delay value can be a timespan in which to wait between a first execution and a second execution of reconciliation software. The request can indicate a result of the first execution. In response to receiving the request, the system can select an algorithm from among a group of algorithms based on the result of the first execution. The system can then determine the time delay value by executing the algorithm. The system can transmit the time delay value to the client, which can wait for the timespan prior to initiating the second execution of the reconciliation software.
Type: Grant
Filed: July 20, 2020
Date of Patent: March 29, 2022
Assignee: Red Hat, Inc.
Inventor: Aiden Keating
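Selecting "an algorithm from among a group of algorithms based on the result of the first execution" might look like the following; the three algorithms, the result labels, and their constants are purely illustrative assumptions:

```python
def time_delay(result: str, attempt: int) -> float:
    """Pick a backoff algorithm from the first execution's result:
    exponential backoff after an error, a short fixed delay after a
    partial run, a long fixed delay after success (all assumed values)."""
    algorithms = {
        "error":   lambda n: min(2.0 ** n, 300.0),  # exponential, capped
        "partial": lambda n: 5.0,                   # retry soon
        "success": lambda n: 60.0,                  # routine re-check
    }
    return algorithms[result](attempt)

# Third consecutive error: wait 2^3 = 8 seconds before re-running.
assert time_delay("error", 3) == 8.0
assert time_delay("success", 3) == 60.0
```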
-
Patent number: 11281496
Abstract: Embodiments are generally directed to thread group scheduling for graphics processing. An embodiment of an apparatus includes a plurality of processors including a plurality of graphics processors to process data; a memory; and one or more caches for storage of data for the plurality of graphics processors, wherein the one or more processors are to schedule a plurality of groups of threads for processing by the plurality of graphics processors, the scheduling of the plurality of groups of threads including the plurality of processors to apply a bias for scheduling the plurality of groups of threads according to a cache locality for the one or more caches.
Type: Grant
Filed: March 15, 2019
Date of Patent: March 22, 2022
Assignee: INTEL CORPORATION
Inventors: Ben Ashbaugh, Jonathan Pearce, Murali Ramadoss, Vikranth Vemulapalli, William B. Sadler, Sungye Kim, Marian Alin Petre
-
Patent number: 11283868
Abstract: One method involves configuring one or more computing resources (selected according to a workflow that specifies an application to be executed) of a computing node and executing, using the one or more computing resources, at least a portion of an application at the computing node. At least one of the one or more computing resources is a reconfigurable logic device, and the configuring, at least in part, configures the reconfigurable logic device according to a configuration script of the workflow. The executing comprises performing one or more operations. The one or more operations are performed by the reconfigurable logic device. The reconfigurable logic device is configured to perform the one or more operations by virtue of having been configured according to the configuration script.
Type: Grant
Filed: February 26, 2019
Date of Patent: March 22, 2022
Assignee: AGARIK SAS
Inventors: Stephen M. Hebert, Robert L. Sherrard, Leonardo E. Reiter
-
Patent number: 11281281
Abstract: Circuitry is provided to control a performance level of a processing device depending on two or more operating points of the processing device. An operating point has a corresponding frequency and a corresponding voltage. The performance-level control circuitry is arranged to cross-multiply parameters corresponding to a first operating point and a second, different operating point of the processing device. A relative energy expenditure of the first operating point and the second operating point is determined based on the cross multiplication. An operating point of the processing device is selected depending on the determined relative energy expenditure. An apparatus having the performance level control circuitry, machine readable instructions for implementing the performance level control and a corresponding method are also provided.
Type: Grant
Filed: February 28, 2018
Date of Patent: March 22, 2022
Assignee: Intel Corporation
Inventors: Jayanth M. Devaraju, Vivek De, Sriram Vangal
-
Patent number: 11275623
Abstract: Systems, devices, media, and methods are presented for throttling (i.e., adjusting) the workload of an application (e.g., number of task requests) in order to improve processor core usage within a heterogeneous multiprocessor system. When high-performance processing is beneficial to the application, the number of task requests may be increased in order to have high-performance processor cores within the heterogeneous multiprocessor system core processor perform the tasks. On the other hand, when high-performance processing is not beneficial, the number of task requests may be decreased in order to have low-performance processor cores within the heterogeneous multiprocessor system perform the tasks. Processor core usage is monitored, and the number of tasks being performed are adjusted to match the processor core usage to a target processor core usage for functions the application is performing.
Type: Grant
Filed: May 30, 2019
Date of Patent: March 15, 2022
Assignee: Snap Inc.
Inventors: Michael Cieslak, Jiayao Yu, Kai Chen, Farnaz Azmoodeh, Michael David Marr, Jun Huang, Zahra Ferdowsi
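A proportional control rule is one simple way to adjust the number of outstanding task requests toward a target core usage, as this abstract describes. The rule and its target value below are assumptions for illustration, not Snap's actual throttle:

```python
def adjust_task_count(current_tasks: int, core_usage: float,
                      target_usage: float = 0.75) -> int:
    """Proportional throttle (assumed): scale the number of outstanding
    task requests so measured core usage moves toward the target."""
    if core_usage <= 0.0:
        return current_tasks + 1            # idle cores: probe upward
    scaled = round(current_tasks * target_usage / core_usage)
    return max(1, scaled)                   # always keep at least one task

# Cores saturated (100% vs 75% target): shed work.
assert adjust_task_count(8, core_usage=1.0) == 6
# Cores half idle: issue more task requests.
assert adjust_task_count(8, core_usage=0.5) == 12
```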
-
Patent number: 11275621
Abstract: A device and a method for operating a computer system, a job to be processed by the computer system being assignable to a task from a plurality of tasks for processing, the job to be processed being assigned as a function of a result of a comparison, a first value being compared to a second value in the comparison, the first value characterizing a first computing expenditure, which is to be expected in the computer system in the processing of the job to be processed in a first task of the plurality of tasks, the second value characterizing a second computing expenditure, which is to be expected in the computer system in the processing of the job to be processed in a second task of the plurality of tasks.
Type: Grant
Filed: November 8, 2017
Date of Patent: March 15, 2022
Assignee: Robert Bosch GmbH
Inventors: Bjoern Saballus, Elmar Ott, Jascha Friedrich, Juergen Bregenzer, Simon Kramer, Michael Pressler, Sebastian Stuermer
-
Patent number: 11269692
Abstract: Techniques are disclosed for efficiently sequencing operations performed in multiple threads of execution in a computer system. In one set of embodiments, sequencing is performed by receiving an instruction to advance a designated next ticket value, incrementing the designated next ticket value in response to receiving the instruction, searching a waiters list of tickets for an element having the designated next ticket value, wherein searching does not require searching the entire waiters list, and the waiters list is in a sorted order based on the values of the tickets, and removing the element having the designated next ticket value from the list using a single atomic operation. The element may be removed by setting a waiters list head element, in a single atomic operation, to refer to an element in the list having a value based upon the designated next ticket value.
Type: Grant
Filed: March 26, 2019
Date of Patent: March 8, 2022
Assignee: Oracle International Corporation
Inventor: Oleksandr Otenko
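The sorted waiters list means waking the designated next ticket is a binary search plus one removal rather than a scan of the whole list. A single-threaded model of that bookkeeping; the patent's single atomic removal is approximated here by a plain `pop`, and the class and method names are invented:

```python
import bisect

class TicketSequencer:
    """Waiters kept sorted by ticket value, so locating the designated
    next ticket is a binary search, not a full-list scan."""
    def __init__(self):
        self.next_ticket = 0
        self.waiters = []  # sorted ticket values of blocked threads

    def wait(self, ticket: int):
        bisect.insort(self.waiters, ticket)   # keep the list sorted

    def advance(self):
        """Advance the designated next ticket value and remove (wake)
        the matching waiter, if it is present."""
        self.next_ticket += 1
        i = bisect.bisect_left(self.waiters, self.next_ticket)
        if i < len(self.waiters) and self.waiters[i] == self.next_ticket:
            return self.waiters.pop(i)        # the woken waiter's ticket
        return None                           # no one is waiting on it yet

seq = TicketSequencer()
for t in (3, 1, 2):
    seq.wait(t)
assert seq.advance() == 1
assert seq.advance() == 2
assert seq.waiters == [3]
```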
-
Patent number: 11269972
Abstract: A computer-implemented method includes a webpage design server comparing a current date to a start date associated with a version of a webpage and when the current date is after the start date, the webpage design server automatically altering a webpage delivery system so that the version of the webpage is returned by the webpage delivery system when the webpage is requested.
Type: Grant
Filed: May 31, 2016
Date of Patent: March 8, 2022
Assignee: Target Brands, Inc.
Inventors: James Patrick Tully, Marcus Malcolm Rosenow, Jorge Alberto Trujillo, Dakota Reese Brown, Matthew Darren Dordal, Christopher Edward Johnson, Shannon Blandford
-
Patent number: 11269527
Abstract: Concepts for remote storage of data are presented. One such concept is a system comprising: a primary storage controller; and a secondary storage controller of a remote data storage system. The primary storage controller is configured to determine a service characteristic of data storage to or data retrieval from the remote data storage system and to communicate service performance signals to the secondary storage controller based on the determined service characteristic. The secondary storage controller is configured to receive service performance signals from the primary storage controller, to compare the received service performance signals with a service requirement so as to determine a service comparison result, and to control data storage to or data retrieval from the remote data storage system based on the service comparison result.
Type: Grant
Filed: August 8, 2019
Date of Patent: March 8, 2022
Assignee: International Business Machines Corporation
Inventors: Miles Mulholland, Alex Dicks, Dominic Tomkins, Eric John Bartlett
-
Patent number: 11263130
Abstract: A system and related method for managing memory in data processing comprises allocating each of a plurality of application containers a respective portion of a memory communicatively coupled to a plurality of processing units. The method further comprises allocating each of the plurality of application containers a respective group of the plurality of processing units and allocating, to each of the plurality of application containers, nursery and tenured heap spaces in the memory. The method then comprises performing, responsive to a request from an application container, garbage collection from the nursery and tenured heap spaces allocated to the application container.
Type: Grant. Filed: July 11, 2019. Date of Patent: March 1, 2022. Assignee: International Business Machines Corporation. Inventors: Howard Hellyer, Adam John Pilkington, Richard Chamberlain
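The nursery/tenured split can be illustrated with a toy per-container heap. This is a simplified model, not the patented collector: reachability is handed in as a root set, and all names are invented.

```python
class ContainerHeap:
    """Toy generational heap for one application container: new objects
    land in the nursery; survivors of a collection are promoted to the
    tenured space; unreachable objects in either space are reclaimed."""

    def __init__(self):
        self.nursery = {}   # object id -> value, newly allocated
        self.tenured = {}   # object id -> value, survived a collection

    def alloc(self, oid, value):
        self.nursery[oid] = value

    def collect(self, roots):
        # nursery survivors are promoted; the rest are reclaimed
        for oid in list(self.nursery):
            if oid in roots:
                self.tenured[oid] = self.nursery.pop(oid)
            else:
                del self.nursery[oid]
        # tenured objects that became unreachable are reclaimed too
        for oid in list(self.tenured):
            if oid not in roots:
                del self.tenured[oid]
```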
-
Patent number: 11263093
Abstract: Embodiments of the present disclosure relate to a method, device and computer program product for job management. The method comprises: obtaining an execution plan associated with a plurality of backup jobs including a target backup job, the execution plan at least indicating a size of backup data and start times of the plurality of backup jobs; determining, based on the execution plan, a first set of backup jobs to be executed in parallel at a start time of the target backup job; determining a predicted backup speed of executing the first set of backup jobs in parallel at the start time of the target backup job; and determining, at least based on the predicted backup speed and the size of the backup data of the target backup job, time required for executing the target backup job. Accordingly, the time required for executing the backup jobs can be more accurately predicted.
Type: Grant. Filed: February 24, 2020. Date of Patent: March 1, 2022. Assignee: EMC IP HOLDING COMPANY LLC. Inventors: Jun Tang, Yi Wang, Qingxiao Zheng
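The final step is essentially time = data size / predicted speed, where the predicted speed depends on how many jobs run in parallel at the target job's start time. A toy model under an invented even-sharing assumption (the patent does not specify this speed model):

```python
def predict_backup_time(size_gb, base_speed_gb_per_h, parallel_jobs):
    """Predicted time (hours) to execute a target backup job: the size of
    its backup data divided by the predicted per-job speed. Even sharing
    of throughput across parallel jobs is an illustrative simplification."""
    predicted_speed = base_speed_gb_per_h / max(1, parallel_jobs)
    return size_gb / predicted_speed
```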
-
Patent number: 11258669
Abstract: Certain embodiments described herein are generally directed to techniques for computing grouping object memberships in a network. Embodiments include receiving a plurality of network configuration updates. Embodiments include identifying delta updates to a plurality of grouping objects based on the plurality of configuration updates. Embodiments include determining a parallel processing arrangement for the delta updates based on dependencies in a directed graph comprising representations of the plurality of grouping objects. Embodiments include processing the delta updates according to the parallel processing arrangement in order to determine memberships of the plurality of grouping objects. Embodiments include distributing one or more updates to one or more endpoints based on the memberships of the plurality of grouping objects.
Type: Grant. Filed: July 14, 2020. Date of Patent: February 22, 2022. Assignee: VMWARE, INC. Inventors: Aayush Saxena, Aravinda Kidambi Srinivasan, Harold Vinson C. Lim, Shekhar Chandrashekhar
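One standard way to derive a parallel processing arrangement from a dependency graph is Kahn-style level scheduling: updates in the same batch have no dependencies between them, and batches run in order. A minimal sketch with invented names, not the patented arrangement itself:

```python
from collections import defaultdict, deque

def parallel_batches(deps):
    """Group nodes into batches that can be processed in parallel.
    `deps` maps node -> set of nodes it depends on; nodes within a
    batch are mutually independent (Kahn-style level order)."""
    indeg = {n: len(d) for n, d in deps.items()}
    dependents = defaultdict(list)
    for n, d in deps.items():
        for parent in d:
            dependents[parent].append(n)
    ready = deque(n for n, k in indeg.items() if k == 0)
    batches = []
    while ready:
        batch = sorted(ready)      # everything currently unblocked
        ready.clear()
        batches.append(batch)
        for n in batch:            # unblock nodes whose deps just finished
            for m in dependents[n]:
                indeg[m] -= 1
                if indeg[m] == 0:
                    ready.append(m)
    return batches
```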
-
Patent number: 11243751
Abstract: A proxy compiler may be used within a native execution environment to enable execution of non-native instructions from a non-native execution environment as if being performed within the native execution environment. In particular, the proxy compiler coordinates creation of a native executable that is uniquely tied to a particular non-native image at the creation time of the non-native image. This allows a trusted relationship between the native executable and the non-native image, while avoiding a requirement of compilation/translation of the non-native instructions for execution directly within the native execution environment.
Type: Grant. Filed: October 16, 2020. Date of Patent: February 8, 2022. Assignee: Unisys Corporation. Inventors: Andrew Ward Beale, Anthony P. Matyok, Clark C. Kogen, David Strong
-
Patent number: 11243777
Abstract: Described is a content management system (CMS) where a primary CMS is arranged to provide a command pipeline along with associated timing information while an alternative CMS is arranged to replay the commands from the command pipeline in an order based on the associated timing information to synchronize the alternative CMS to the primary CMS.
Type: Grant. Filed: May 18, 2018. Date of Patent: February 8, 2022. Assignee: NUXEO CORPORATION. Inventors: Thierry Delprat, Damien Metzler, Benoit Delbosc
-
Patent number: 11244056
Abstract: A trusted threat-aware microvisor may be deployed as a module of a trusted computing base (TCB). The microvisor is illustratively configured to enforce a security policy of the TCB, which may be implemented as a security property of the microvisor. The microvisor may manifest (i.e., demonstrate) the security property in a manner that enforces the security policy. Trustedness denotes a predetermined level of confidence that the security property is demonstrated by the microvisor. The predetermined level of confidence is based on an assurance (i.e., grounds) that the microvisor demonstrates the security property. Trustedness of the microvisor may be verified by subjecting the TCB to enhanced verification analysis configured to ensure that the TCB conforms to an operational model with an appropriate level of confidence over an appropriate range of activity. The operational model may then be configured to analyze conformance of the microvisor to the security property.
Type: Grant. Filed: June 18, 2018. Date of Patent: February 8, 2022. Assignee: FireEye Security Holdings US LLC. Inventors: Osman Abdoul Ismael, Hendrik Tews
-
Patent number: 11237867
Abstract: A data processing apparatus (10) includes a receiver (120), a specifier (142), and a task controller (141). The receiver (120) receives a setting of a process flow defining subprocesses that are sequentially executed with respect to data output from a device (21). The specifier (142) specifies, based on the setting received by the receiver (120), processing units (130) for execution of the subprocesses. The task controller (141) determines, based on the setting received by the receiver (120), an order for launching tasks for achievement of the processing units (130) specified by the specifier (142), and launches the tasks in accordance with that order.
Type: Grant. Filed: April 27, 2018. Date of Patent: February 1, 2022. Assignee: MITSUBISHI ELECTRIC CORPORATION. Inventors: Osamu Nasu, Jijun Jin, Ryo Kashiwagi
-
Patent number: 11228362
Abstract: A method includes receiving request information obtained from an external user. The request information is associated with a task to be completed by at least one satellite asset among a plurality of satellite assets, where the satellite assets are grouped into a plurality of constellations and each of the constellations is associated with a corresponding scheduler among a plurality of schedulers. The method also includes assigning the task to a queue. The method further includes determining at least one specified scheduler to schedule the task at the at least one satellite asset. In addition, the method includes sending instructions to the at least one specified scheduler for performing the task by the at least one satellite asset.
Type: Grant. Filed: April 17, 2020. Date of Patent: January 18, 2022. Assignee: Raytheon Company. Inventors: Jeffrey D. Schloemer, Thomas W. Thorpe
-
Patent number: 11226852
Abstract: Described is a novel method of inter-process communication used in one example in a surveillance system whereby multiple input processes communicate surveillance data to a reader process that consumes the data from the input processes. A locking mechanism is provided to reserve a reservable portion of queue metadata which comprises queue pointer(s) such that only one process may move the queue pointer(s) at a time. Reservation is provided with few or no kernel operations such that reservation costs are negligible. Arbitrary-size queue slots may be reserved by moving the pointers. Writing and reading into the queue is done outside of the locking mechanism, allowing multiple processes to access and work in the queue simultaneously, leading to a rapid queue synchronization mechanism that requires little or no resort to expensive kernel operations.
Type: Grant. Filed: November 16, 2017. Date of Patent: January 18, 2022. Inventor: Julien Vary
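The central point, holding the reservation mechanism only while moving the queue pointers and doing the actual reads and writes outside it, can be sketched as follows. A Python lock stands in for the lightweight reservation, and the ring-buffer layout and names are invented for the example:

```python
import threading

class ReservableQueue:
    """Sketch of pointer-move reservation: the lock guards only the queue
    metadata (the tail pointer), so the critical section is tiny. Writing
    into reserved slots happens outside the lock, letting multiple writers
    fill their slots concurrently."""

    def __init__(self, size):
        self.buf = [None] * size
        self.tail = 0                      # next free slot index
        self._meta_lock = threading.Lock() # guards the pointer only

    def reserve(self, slots=1):
        with self._meta_lock:              # brief: just move the pointer
            start = self.tail
            self.tail += slots
        return start                       # caller writes with no lock held

    def write(self, slot, item):
        self.buf[slot % len(self.buf)] = item
```

A writer calls `reserve()` once for an arbitrary number of slots, then fills them at leisure; contention exists only on the pointer move, mirroring the abstract's "writing and reading outside the locking mechanism".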
-
Patent number: 11221877
Abstract: The present disclosure provides a task parallel processing method, a device, a system, a storage medium and computer equipment, which are capable of distributing and regulating tasks to be executed according to a task directed acyclic graph, and may thereby realize task parallelism of a multi-core processor and improve the efficiency of data processing.
Type: Grant. Filed: September 18, 2019. Date of Patent: January 11, 2022. Assignee: Shanghai Cambricon Information Technology Co., Ltd. Inventors: Linyang Wu, Xiaofu Meng
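Executing a task directed acyclic graph in parallel can be sketched with a thread pool that submits each task as soon as the tasks it depends on have finished. This is a generic illustration of the technique, not the patented system; the function and argument names are invented:

```python
from concurrent.futures import ThreadPoolExecutor, wait

def run_dag(tasks, deps, workers=4):
    """Run the callables in `tasks` (name -> fn) respecting `deps`
    (name -> set of prerequisite names); independent tasks run in
    parallel on the pool. Assumes `deps` describes a valid DAG."""
    done, results = set(), {}
    pending = dict(tasks)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while pending:
            # everything whose prerequisites have all finished is ready
            ready = [n for n in pending if deps.get(n, set()) <= done]
            futs = {pool.submit(pending.pop(n)): n for n in ready}
            for f in wait(futs).done:
                name = futs[f]
                results[name] = f.result()
                done.add(name)
    return results
```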
-
Patent number: 11216304
Abstract: A processing system includes at least one core, several accelerator function units (AFUs) and a microcontroller. The core is utilized to operate several processes and develop at least one task queue corresponding to each of the processes. The core generates several command packets and pushes them into the corresponding task queue. The AFUs execute the command packets. The microcontroller is arranged between the AFUs and the core to dispatch each command packet to a corresponding AFU for execution. When the corresponding AFU executes the command packet of a specific process of the processes, the microcontroller assigns the corresponding AFU to execute other command packets in the task queue of the specific process at a higher priority.
Type: Grant. Filed: September 3, 2019. Date of Patent: January 4, 2022. Assignee: SHANGHAI ZHAOXIN SEMICONDUCTOR CO., LTD. Inventors: Wei Zhao, Xuehua Han, Fangfang Wu, Jin Yu
-
Patent number: 11216319
Abstract: An Intelligent Real-Time Robot Operating System (IRT-ROS) architecture and an operation method thereof are provided. The IRT-ROS architecture includes a General-Purpose OS kernel, a Real-Time OS kernel, and an Inter-processor Interrupt (IPI) interface. The General-Purpose OS kernel is configured to run a General-Purpose OS to execute a non-real-time process. The Real-Time OS kernel is configured to run a Real-Time OS to execute a real-time process. The IPI interface is connected between the General-Purpose OS kernel and the Real-Time OS kernel, and is configured to support communication between the non-real-time process and the real-time process. The IRT-ROS architecture allows Linux and RTERS to execute non-real-time processes and real-time processes respectively, and to respond to the IRQs of non-real-time devices and the IRQs of real-time devices respectively. Communications between non-real-time processes and real-time processes are supported.
Type: Grant. Filed: September 10, 2020. Date of Patent: January 4, 2022. Assignee: HRG INTERNATIONAL INSTITUTE FOR RESEARCH & INNOVATION. Inventors: Kerui Xia, Liang Ding, Pengfei Liu, Zhenzhong Yu, Yanan Zhang, Fei Wang, Qi Hou, Taogeng Zhang
-
Patent number: 11212338
Abstract: A system implements managed scaling of a processing service in response to message traffic. Producers produce messages or other data and the messages are stored in a queue or message system. On behalf of consumers of the messages, workers of a client of the queue poll the queue or message service to obtain the messages. For example, a primary worker of the client polls the queue for messages and, upon receiving a message, activates a secondary worker from a pool of secondary workers to start polling the queue for messages. Now both workers are obtaining messages from the queue, and both workers may activate other secondary workers, exponentially scaling the message processing service in embodiments. When a secondary worker receives an empty polling response, the secondary worker deactivates back to the pool. The primary worker does not deactivate, even when empty polling responses are received.
Type: Grant. Filed: January 23, 2018. Date of Patent: December 28, 2021. Assignee: Amazon Technologies, Inc. Inventor: Johan Daniel Hoyos Arciniegas
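The exponential scaling behaviour can be illustrated with a toy round-based simulation. This is an invented model, not Amazon's implementation: in each polling round, every worker that receives a message activates one secondary worker from the pool, secondaries that poll empty return to the pool, and the primary never deactivates.

```python
def polls_to_drain(messages, pool_size):
    """Return the number of polling rounds needed to drain `messages`
    when the active worker count roughly doubles each round (toy model)."""
    active, idle, rounds = 1, pool_size, 0
    while messages > 0:
        rounds += 1
        served = min(active, messages)               # workers that got a message
        messages -= served
        woken = min(served, idle)                    # each success wakes a secondary
        returned = min(active - served, active - 1)  # empty polls deactivate
        idle += returned - woken
        active += woken - returned
    return rounds
```

With one worker, draining N messages takes N rounds; with the doubling behaviour it takes roughly log2(N) rounds once the pool is large enough.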
-
Patent number: 11212344
Abstract: An illustrative workload management system obtains resource utilization data representing utilization of network equipment of a communication network, obtains utilities data representing information about utilities at network facilities at which the network equipment is deployed, and assigns, based on the resource utilization data and the utilities data, a workload among the network equipment deployed at the network facilities. Corresponding methods and systems are also described.
Type: Grant. Filed: August 31, 2020. Date of Patent: December 28, 2021. Assignee: Verizon Patent and Licensing Inc. Inventors: Donna L. Polehn, Patricia R. Chang, Jin Yang, Arda Aksu, Lalit R. Kotecha, Vishwanath Ramamurthi, David Chiang
-
Patent number: 11210816
Abstract: Systems and methods for transitional effects in real-time rendering applications are described. Some implementations may include rendering a computer-generated reality environment in a first state using an application that includes multiple processes associated with respective objects of the computer-generated reality environment; generating a message that indicates a change in the computer-generated reality environment; sending the message to two or more of the multiple processes associated with respective objects of the computer-generated reality environment; responsive to the message, updating configurations of objects of the computer-generated reality environment to change the computer-generated reality environment from the first state to a second state; and rendering the computer-generated reality environment in the second state using the application.
Type: Grant. Filed: August 23, 2019. Date of Patent: December 28, 2021. Assignee: Apple Inc. Inventors: Xiaobo An, Peter Dollar, Eric J. Mueller, Brendan K. Duncan