Patents Examined by Jorge A Chu Joy-Davila
-
Patent number: 11500686
Abstract: A solution is proposed for resource management of a software application including a plurality of software components interacting with each other. A corresponding method includes monitoring present conditions of the software components and estimating a future consumption of one or more computing resources by each software component from the present conditions of the software components; an allocation of the computing resources to the software components is then controlled accordingly. A computer program and a computer program product for performing the method are also proposed. Moreover, a system for implementing the method is proposed.
Type: Grant
Filed: July 31, 2020
Date of Patent: November 15, 2022
Assignee: International Business Machines Corporation
Inventors: Gabriele de Capoa, Massimo Villani
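The monitor-forecast-allocate loop described in this abstract could be sketched as follows. The moving-average forecast and the proportional split are illustrative assumptions; the patent does not disclose a specific estimator.

```python
def forecast_usage(samples, window=3):
    """Estimate a component's future resource consumption from its most
    recent usage samples (simple moving average -- an assumed estimator)."""
    recent = samples[-window:]
    return sum(recent) / len(recent)

def allocate(components, capacity):
    """Split a resource pool across components in proportion to each
    component's forecast consumption."""
    forecasts = {name: forecast_usage(s) for name, s in components.items()}
    total = sum(forecasts.values())
    return {name: capacity * f / total for name, f in forecasts.items()}

# 'web' is trending up while 'batch' is steady, so 'web' gets a larger share.
shares = allocate({"web": [10, 20, 40], "batch": [20, 20, 20]}, capacity=100)
```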
-
Patent number: 11494228
Abstract: A method for scheduling jobs for the calculator includes measuring core utilization of the second-type processor, when the measured core utilization is less than a reference value, transmitting, by the first-type processor, a job suspension instruction to suspend a first job, which is currently being executed, to the second-type processor, in response to the job suspension instruction, copying data of a region occupied by the first job in a memory of the second-type processor to a main memory, copying data of a second job stored in the main memory to the memory of the second-type processor, and transmitting, by the first-type processor, an instruction to execute the second job to the second-type processor.
Type: Grant
Filed: October 25, 2019
Date of Patent: November 8, 2022
Assignee: SAMSUNG SDS CO., LTD.
Inventors: Man Suk Suh, Hwan Kyun Roh, Gi Beom Pang
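The suspend-and-swap sequence in this abstract can be illustrated with a small simulation; the dictionaries standing in for the second-type processor's memory and the main memory, and the 0.5 utilization threshold, are assumptions for illustration.

```python
def maybe_swap(accel, host, utilization, threshold=0.5):
    """If the accelerator's measured core utilization falls below the
    reference value, suspend the running job, save its memory region to
    main memory, load the next pending job's data, and execute it."""
    if utilization >= threshold or not host["pending"]:
        return accel["running"]
    # Suspend the current job and copy its memory region out to main memory.
    suspended = accel["running"]
    host["saved"][suspended] = accel["memory"].pop(suspended)
    # Copy the next job's data into the accelerator's memory and run it.
    next_job, data = host["pending"].pop(0)
    accel["memory"][next_job] = data
    accel["running"] = next_job
    return next_job

accel = {"running": "jobA", "memory": {"jobA": b"state-a"}}
host = {"pending": [("jobB", b"state-b")], "saved": {}}
now_running = maybe_swap(accel, host, utilization=0.2)
```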
-
Patent number: 11481259
Abstract: Distributing computation workload among computing nodes of differing computing paradigms is provided. Compute gravity of each computing node in a cloud computing paradigm and each computing node in a client network computing paradigm within an Internet of Systems is calculated. Each component part of an algorithm is distributed to an appropriate computing node of the cloud computing paradigm and client network computing paradigm based on calculated compute gravity of each respective computing node within the Internet of Systems. Computation workload of each component part of the algorithm is distributed to a respective computing node of the cloud computing paradigm and the client network computing paradigm having a corresponding component part of the algorithm for processing.
Type: Grant
Filed: January 7, 2020
Date of Patent: October 25, 2022
Assignee: International Business Machines Corporation
Inventors: Aaron K. Baughman, Stephen C. Hammer, Gray Cannon, Shikhar Kwatra
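A gravity-based placement loop of the kind this abstract describes might look like the sketch below. The gravity formula (free capacity discounted by latency) is purely an assumption; the abstract does not define how compute gravity is calculated.

```python
def compute_gravity(node):
    """Illustrative gravity score: weigh a node's free capacity against
    its network distance to the workload's data."""
    return node["free_capacity"] / (1 + node["latency_ms"])

def place(components, nodes):
    """Assign each component part of the algorithm to the node with the
    highest current gravity, reducing that node's free capacity as work
    lands on it."""
    assignment = {}
    for comp, cost in components.items():
        best = max(nodes, key=lambda n: compute_gravity(nodes[n]))
        assignment[comp] = best
        nodes[best]["free_capacity"] -= cost
    return assignment

# One cloud node and one edge (client network) node; the edge node is
# close to the data but small, so it fills up after one component.
nodes = {"cloud": {"free_capacity": 100, "latency_ms": 40},
         "edge": {"free_capacity": 30, "latency_ms": 1}}
assignment = place({"filter": 28, "train": 28}, nodes)
```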
-
Patent number: 11461123
Abstract: This disclosure describes systems, devices, and techniques for live migrating virtualized resources between the main region and edge locations. Live migration enables virtualized resources to remain operational during migration. Edge locations are typically separated from secure data centers via the Internet, a direct connection, or some other intermediate network. Accordingly, to place virtualized resources within an edge location, the virtualized resources must be migrated over a secure communication tunnel that can protect virtualized resource data during transmission over the intermediate network. The secure communication tunnel may have limited data throughput. To efficiently utilize resources of the secure communication tunnel, virtualized resource data may be transferred over the tunnel in a two-stage process.
Type: Grant
Filed: November 21, 2019
Date of Patent: October 4, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Oleksii Tsai, Nikolay Krasilnikov, Anton Valter, Alexey Gadalin
-
Patent number: 11442781
Abstract: A method for deploying workloads in a heterogenous computing environment having multiple hosts of multiple different types and/or multiple monitors of multiple different types is disclosed. The method includes selecting a master image for deployment of a workload, wherein multiple subimages are associated with the master image, and the subimages correspond to at least some of the different types of hosts and/or the different types of monitors such that the master image is usable to deploy the workload on at least one of the hosts. The method also includes determining a host on which to deploy the workload using the master image; determining a monitor of the host to manage the workload; determining a monitor type of the monitor; determining, by an orchestration engine and based on the monitor type, a subimage that supports the first monitor; and cloning the associated resources to the host to initiate the workload thereon.
Type: Grant
Filed: September 18, 2019
Date of Patent: September 13, 2022
Assignee: International Business Machines Corporation
Inventor: Gerald Francis McBrearty
-
Patent number: 11442782
Abstract: An apparatus comprising means for performing: at a first time, controlling whether a user is granted access to a resource based on a response of the user to a first access task, and setting one or more restrictions on granted access to the resource based on the response of the user to the first access task; at a second time, controlling whether the user is granted access to the resource based on a response of the user to a second access task, different to the first access task, and setting one or more restrictions on granted access to the resource based on the response of the user to the second access task; and initiating a change from the first access task to the second access task, wherein the initiation of the change is causally independent of the response of the user to the first access task.
Type: Grant
Filed: May 29, 2020
Date of Patent: September 13, 2022
Assignee: NOKIA TECHNOLOGIES OY
Inventors: Timothy Giles Beard, Christopher John Wright
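The key point of this abstract is that the access task rotates on a schedule that is causally independent of how the user answered the previous task. A minimal sketch, with an assumed challenge/restriction structure:

```python
def grant(task, response):
    """Grant access and derive restrictions from the user's response to
    the current access task (the restriction mapping is illustrative)."""
    if response != task["expected"]:
        return None  # access denied
    return {"access": True, "restrictions": task["restrictions"]}

def rotate(tasks, index):
    """Advance to the next access task. The change is driven by this
    schedule alone, not by the user's previous response, matching the
    abstract's 'causally independent' requirement."""
    return (index + 1) % len(tasks)

tasks = [{"expected": "1234", "restrictions": ["read-only"]},
         {"expected": "blue", "restrictions": []}]
first = grant(tasks[0], "1234")       # first time: PIN task
i = rotate(tasks, 0)                  # rotation happens regardless
second = grant(tasks[i], "red")       # second time: different task, wrong answer
```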
-
Patent number: 11436046
Abstract: A memory processor-based multiprocessing architecture and an operation method thereof are provided. The memory processor-based multiprocessing architecture includes a main processor and a plurality of memory chips. The memory chips include a plurality of processing units and a plurality of data storage areas. The processing units and the data storage areas are respectively disposed one-to-one in the memory chips. The data storage areas are configured to share a plurality of sub-datasets of a large dataset. The main processor assigns a computing task to one of the processing units of the memory chips, so that the one of the processing units accesses the corresponding data storage area to perform the computing task according to a part of the sub-datasets.
Type: Grant
Filed: July 5, 2019
Date of Patent: September 6, 2022
Assignee: Powerchip Semiconductor Manufacturing Corporation
Inventor: Kuan-Chow Chen
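The dispatch rule in this abstract (route the task to the processing unit co-located with the needed sub-dataset, rather than moving the data) can be modeled in a few lines; the chip/data layout below is an assumed stand-in for the hardware.

```python
def dispatch(task_key, chips):
    """Route a computing task to the memory chip whose local data storage
    area holds the required sub-dataset, and run that chip's processing
    unit on the local data -- avoiding bulk data movement."""
    for chip_id, chip in chips.items():
        if task_key in chip["data"]:
            return chip_id, chip["unit"](chip["data"][task_key])
    raise KeyError(task_key)

# Two memory chips, each pairing a processing unit with its own data
# storage area holding one sub-dataset of the larger dataset.
chips = {0: {"data": {"part0": [1, 2, 3]}, "unit": sum},
         1: {"data": {"part1": [4, 5]}, "unit": sum}}
result = dispatch("part1", chips)
```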
-
Patent number: 11429436
Abstract: Embodiments of the present disclosure relate to a method, a device, and a computer program product for determining an execution progress of tasks. The method includes determining, according to a determination that a task is executed, whether historical execution information of the task is available. The method further includes determining the expected execution duration of the task based on the historical execution information of the task according to a determination that the historical execution information of the task is available. The method further includes determining the duration of completed execution for the task based on the time point at which execution of the task begins and the current time point. The method further includes determining the execution progress of the task based on the expected execution duration and the completed execution duration.
Type: Grant
Filed: May 31, 2020
Date of Patent: August 30, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Xiaoliang Zhu, Ming Zhang, Jing Yu, Yongsheng Guo, Min Liu
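The progress calculation this abstract walks through reduces to a ratio of elapsed time to expected duration. A minimal sketch, assuming the mean of past run times as the expected-duration estimator (the abstract does not fix one):

```python
def expected_duration(history):
    """Expected execution duration from historical execution information
    (the mean of past run times -- an assumed estimator)."""
    return sum(history) / len(history)

def execution_progress(history, started_at, now):
    """Progress = completed execution duration / expected duration,
    capped at 100% for tasks running longer than expected."""
    elapsed = now - started_at
    return min(elapsed / expected_duration(history), 1.0)

history = [100, 120, 80]            # three past runs, mean = 100 seconds
halfway = execution_progress(history, started_at=0, now=50)
overrun = execution_progress(history, started_at=0, now=200)
```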
-
Patent number: 11429455
Abstract: Disclosed are various embodiments for generating recommended replacement host machines for a datacenter. The recommendations can be generated based upon an analysis of historical workload usage across the datacenter. Clusters can be generated that cluster workloads together that are similar. Purchase plans can be generated based upon the identified clusters and benchmark data regarding servers.
Type: Grant
Filed: June 24, 2020
Date of Patent: August 30, 2022
Assignee: VMware, Inc.
Inventors: Yash Bhatnagar, Naina Verma, Mageshwaran Rajendran, Amit Kumar, Venkata Naga Manohar Kondamudi
-
Patent number: 11429445
Abstract: Enhancement or reduction of page migration can include operations that include scoring, in a computing device, each executable of at least a first group and a second group of executables in the computing device. The executables can be related to user interface elements of applications and associated with pages of memory in the computing device. For each executable, the scoring can be based at least partly on an amount of user interface elements using the executable. The first group can be located at first pages of the memory, and the second group can be located at second pages. When the scoring of the executables in the first group is higher than the scoring of the executables in the second group, the operations can include allocating or migrating the first pages to a first type of memory, and allocating or migrating the second pages to a second type of memory.
Type: Grant
Filed: November 25, 2019
Date of Patent: August 30, 2022
Assignee: Micron Technology, Inc.
Inventors: Dmitri Yudanov, Samuel E. Bradshaw
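The scoring-then-placement decision in this abstract can be sketched directly; the UI-usage counts and the two-tier fast/slow memory labels are illustrative assumptions.

```python
def group_score(group, ui_usage):
    """Score a group of executables by the total number of user
    interface elements currently using them, per the abstract."""
    return sum(ui_usage.get(exe, 0) for exe in group)

def assign_memory(first_group, second_group, ui_usage):
    """Place the higher-scoring group's pages in the first (faster)
    type of memory and the other group's pages in the second type."""
    if group_score(first_group, ui_usage) >= group_score(second_group, ui_usage):
        return {"fast": first_group, "slow": second_group}
    return {"fast": second_group, "slow": first_group}

# The renderer backs 12 visible UI elements; background tools back few.
ui_usage = {"renderer": 12, "settings": 1, "updater": 0}
plan = assign_memory(["renderer"], ["settings", "updater"], ui_usage)
```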
-
Patent number: 11422866
Abstract: The invention relates to a computer network comprising a group of multiple computing resource infrastructures (51 to 56) associated with multiple orchestrators (41 to 46), said orchestrators being responsible for allocating the resources of this infrastructure (51 to 56) to one or more additional components of an already-hosted client application (17) and being grouped together in a swarm in which they are interconnected by a cooperation interface (3). The allocation of resources is decided by a decision method based first on evaluations distributed to the orchestrators (41 to 46), then on a consensus protocol, between the orchestrators (41 to 46), which is based on the evaluations and which is implemented at the cooperation interface (3) in order to select one of the infrastructures (51 to 56) from the group to host the additional component(s) of the client application (17).
Type: Grant
Filed: December 21, 2018
Date of Patent: August 23, 2022
Assignee: BULL SAS
Inventors: Benoit Pelletier, Loïc Albertin
-
Patent number: 11422894
Abstract: Techniques are disclosed relating to automated operations management. In various embodiments, a computer system accesses operational information that defines commands for an operational scenario and accesses blueprints that describe operational entities in a target computer environment related to the operational scenario. The computer system implements the operational scenario for the target computer environment. The implementing may include executing a hierarchy of controller modules that include an orchestrator controller module at top level of the hierarchy that is executable to carry out the commands by issuing instructions to controller modules at a next level. The controller modules may be executable to manage the operational entities according to the blueprints to complete the operational scenario.
Type: Grant
Filed: December 3, 2019
Date of Patent: August 23, 2022
Assignee: salesforce.com, inc.
Inventor: Mark F. Wilding
-
Patent number: 11397620
Abstract: A method to deploy a plurality of event-driven application components of an event-driven application in a distributed computing environment is described. The method includes automatically analyzing application source code of the event-driven application, using one or more processors, to identify relationships between the plurality of event-driven application components. Thereafter, a set of rules are applied to, based on the automatic analysis, generate assignment data recording assignments of event-driven application components to a plurality of computational nodes in the distributed computing environment. The set of rules is also applied to determine component requirements for each of the plurality of event-driven application components required to support execution at an assigned computational node in the distributed computing environment.
Type: Grant
Filed: May 28, 2021
Date of Patent: July 26, 2022
Assignee: VANTIQ, INC.
Inventors: Paul Butterworth, Evan Zhang, Steve Langley
-
Patent number: 11397612
Abstract: Embodiments may relate to an electronic device that includes a processor communicatively coupled with a hardware accelerator. The processor may be configured to identify, based on an indication of a priority level in a task control block (TCB), a location at which the TCB should be inserted in a queue of TCBs. The hardware accelerator may perform jobs related to the queue of TCBs in an order related to the order of TCBs within the queue. Other embodiments may be described or claimed.
Type: Grant
Filed: October 23, 2019
Date of Patent: July 26, 2022
Assignee: ANALOG DEVICES INTERNATIONAL UNLIMITED COMPANY
Inventors: Abhijit Giri, Rajib Sarkar
-
Patent number: 11392418
Abstract: A computer system may initialize one or more workloads. The computer system may operate in a boost mode and a regular mode. The boost mode may include an adjustment of a pacing setting and an adjustment of group availability targets for executing the one or more workloads. The computer system may identify that the boost mode is enabled during a system start of the computer system. The computer system may identify that the pacing setting is operating in the regular mode. The computer system may dynamically increase the pacing setting. The increase of the pacing setting may enable an increased processor utilization of the computer system by the one or more workloads. The increased processor utilization may generate a concurrent processing of the one or more workloads. The computer system may determine an end of the boost mode and reset the pacing setting.
Type: Grant
Filed: February 21, 2020
Date of Patent: July 19, 2022
Assignee: International Business Machines Corporation
Inventors: Juergen Holtz, Qais Noorshams
-
Patent number: 11392424
Abstract: The invention relates to a method for aiding decision-making for the allocation of resources on an HPC-type infrastructure allowing identification of a set of instances that meet a resource requirement. The invention further relates to a computer device (1) comprising a memory module (11) configured to store a resource-class repository, a reservation repository and a keyword repository, a data-processing module (13), a reservation-management module (14), and a communication module (12), said memory module (11) comprising instructions for a program, the execution of which by said data-processing module (13) causes the implementation of the method (100).
Type: Grant
Filed: December 31, 2019
Date of Patent: July 19, 2022
Assignee: BULL SAS
Inventors: Patrice Calegari, Sébastien Lacour, Marc Levrier
-
Patent number: 11372649
Abstract: Described herein is a system and method of performing flow control for multi-threaded access to contentious resource(s) (e.g., shared memory). A request to enter a critical section of code by a particular thread of a plurality of concurrent threads is received. A determination is made as to whether or not to allow the particular thread to enter the critical section of code based, at least in part, upon a CPU core associated with the particular thread, a state associated with the particular thread, and/or a processing rate in the critical session of code associated with the particular thread. When it is determined to allow the particular thread to enter the critical section of code, the particular thread is allowed to enter the critical section of code.
Type: Grant
Filed: September 10, 2019
Date of Patent: June 28, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Stanislav A Oks, Wonseok Kim
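The admission decision this abstract describes weighs three inputs: the requesting thread's CPU core, its state, and its processing rate inside the critical section. A minimal gate combining all three (the concrete threshold and "one admitted thread per core" rule are assumptions):

```python
def may_enter(thread, busy_cores, min_rate):
    """Decide whether a thread may enter the critical section, based on
    its CPU core (not already occupied by an admitted thread), its state
    (must be runnable), and its historical processing rate within the
    critical section (must clear a minimum)."""
    return (thread["core"] not in busy_cores
            and thread["state"] == "runnable"
            and thread["rate"] >= min_rate)

busy_cores = {2}  # core 2 already has a thread inside the section
fast_thread = {"core": 0, "state": "runnable", "rate": 9.0}
contending_thread = {"core": 2, "state": "runnable", "rate": 9.0}
slow_thread = {"core": 1, "state": "runnable", "rate": 1.0}
```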
-
Patent number: 11366688
Abstract: A do-not-disturb processing method, an apparatus, and a storage medium are provided. The method includes: receiving a first request message for a sleeping request; generating a query instruction according to first time information carried in the first request message; determining a restricted time period based on the first time information, in response to the query instruction; acquiring at least one first task within the restricted time period, from multiple interaction tasks to be executed; and closing the first task. In embodiments of the present application, the efficiency of the interaction process is improved.
Type: Grant
Filed: November 12, 2019
Date of Patent: June 21, 2022
Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
Inventors: Chaoyang Chen, Mengmeng Zhang, Wenming Wang, Chen Chen, Guangyao Tang
-
Patent number: 11360820
Abstract: Systems and methods are disclosed for scheduling threads on an asymmetric multiprocessing system having multiple core types. Each core type can run at a plurality of selectable voltage and frequency scaling (DVFS) states. Threads from a plurality of processes can be grouped into thread groups. Execution metrics are accumulated for threads of a thread group and fed into a plurality of tunable controllers. A closed loop performance control (CLPC) system determines a control effort for the thread group and maps the control effort to a recommended core type and DVFS state. A closed loop thermal and power management system can limit the control effort determined by the CLPC for a thread group, and limit the power, core type, and DVFS states for the system. Metrics for workloads offloaded to co-processors can be tracked and integrated into metrics for the offloading thread group.
Type: Grant
Filed: June 2, 2018
Date of Patent: June 14, 2022
Assignee: Apple Inc.
Inventors: John G. Dorsey, Daniel A. Chimene, Andrei Dorofeev, Bryan R. Hinch, Evan M. Hoke, Aditya Venkataraman
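The mapping step in this abstract (a control effort, possibly clamped by thermal/power management, mapped to a core type and DVFS state) can be sketched as below. The 0.4 efficiency/performance split and the four-entry DVFS table are pure assumptions; the patent's actual controllers are tunable closed loops.

```python
def recommend(control_effort, thermal_limit):
    """Map a thread group's control effort (0.0-1.0) to a recommended
    core type and DVFS state index, after the thermal and power
    management clamp limits the effort."""
    effort = min(control_effort, thermal_limit)   # thermal/power clamp
    core = "efficiency" if effort < 0.4 else "performance"
    dvfs_state = min(int(effort * 4), 3)          # index into a 4-entry V/F table
    return core, dvfs_state

# A demanding group throttled by the thermal limit, and a light one.
throttled = recommend(0.9, thermal_limit=0.5)
light = recommend(0.2, thermal_limit=1.0)
```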
-
Patent number: 11354160
Abstract: Methods, non-transitory machine readable media, and computing devices that more efficiently and effectively manage storage quota enforcement are disclosed. With this technology, a quota ticket comprising a tally generation number (TGN) and a local allowed usage amount (AUA) are obtained. The local AUA comprises a portion of a global AUA associated with a quota rule. The local AUA is increased following receipt of another portion of the global AUA in a response from a cluster peer, when another TGN in the response matches the TGN and the local AUA is insufficient to execute a received storage operation associated with the quota rule. The local AUA is decreased by an amount corresponding to, and following execution of, the storage operation, when the increased local AUA is sufficient to execute the storage operation.
Type: Grant
Filed: September 20, 2019
Date of Patent: June 7, 2022
Assignee: NETAPP, INC.
Inventors: Xin Wang, Keith Allen Bare, II, Ying-Hao Wang, Jonathan Westley Moody, Bradley Raymond Lisson, Richard Wight, David Loren Rose, Richard P. Jernigan, IV, Daniel Tennant
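The ticket accounting this abstract describes (top up the local AUA from a peer's grant only when the tally generation numbers match, then charge the operation against it) can be traced through a small sketch; the dictionary shapes are assumed stand-ins for the on-wire messages.

```python
def try_charge(ticket, op_size, peer_response=None):
    """Attempt a storage operation against a node-local quota ticket.
    If the local allowed usage amount (AUA) is insufficient, add the
    portion of the global AUA granted in a cluster peer's response --
    but only when the response's tally generation number (TGN) matches
    the ticket's. Charge the local AUA after a successful execution."""
    if ticket["local_aua"] < op_size and peer_response:
        if peer_response["tgn"] == ticket["tgn"]:
            ticket["local_aua"] += peer_response["aua_grant"]
    if ticket["local_aua"] < op_size:
        return False                       # quota exceeded; operation rejected
    ticket["local_aua"] -= op_size         # decrease following execution
    return True

ticket = {"tgn": 7, "local_aua": 10}
ok = try_charge(ticket, op_size=25, peer_response={"tgn": 7, "aua_grant": 30})

stale = {"tgn": 7, "local_aua": 10}        # peer response from an old tally generation
rejected = try_charge(stale, op_size=25, peer_response={"tgn": 6, "aua_grant": 30})
```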