Patents Examined by Abu Zar Ghaffari
-
Patent number: 11693706
Abstract: A scheduling algorithm for scheduling training of deep neural network (DNN) weights on processing units identifies the next job to provisionally assign to a processing unit (PU) based on a doubling heuristic. The doubling heuristic makes use of an estimated number of training sets needed to complete training of weights for a given job and/or a training speed function which indicates how fast the weights are converging. The scheduling algorithm solves the problem of efficiently assigning PUs when multiple DNN weight data structures must be trained. In some embodiments, the training of the weights uses a ring-based message passing architecture. In some embodiments, processing proceeds in a nested-loop fashion: in inner iterations of the nested loop, PUs are scheduled and jobs are launched or re-started; in outer iterations, jobs are stopped, parameters are updated and the inner iteration is re-entered.
Type: Grant
Filed: November 21, 2019
Date of Patent: July 4, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Timothy Capes, Iqbal Mohomed, Vishal Raheja, Mete Kemertas
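The abstract does not spell out the heuristic itself; one toy interpretation is sketched below (job names, estimates, and the convergence callback are all invented for illustration). Each job tracks an estimated number of training sets still needed; the free PU is always offered to the job with the smallest estimate, and a job that fails to converge within its provisional run has its estimate doubled.

```python
# Toy sketch of a doubling-style scheduler. Not the patented algorithm;
# an illustration of the general idea only.

def schedule_next_job(jobs):
    """jobs: dict name -> estimated training sets remaining.
    Returns the job with the smallest remaining estimate."""
    return min(jobs, key=jobs.get)

def run_round(jobs, converged):
    """Assign a PU to one job; double its estimate if it did not converge."""
    job = schedule_next_job(jobs)
    if converged(job):
        del jobs[job]      # training finished, drop the job
    else:
        jobs[job] *= 2     # doubling heuristic: revise the estimate upward
    return job

jobs = {"A": 4, "B": 6, "C": 3}
picked = run_round(jobs, converged=lambda j: False)
print(picked, jobs)  # "C" is picked (smallest estimate) and its estimate doubles
```

The doubling plays the same role as in classic "guess-and-double" scheduling: a cheap, exponentially self-correcting estimate of unknown job length.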
-
Patent number: 11693693
Abstract: This application provides a method for managing a resource in a computer system and a terminal device. The method includes: obtaining data, where the data includes application sequence feature data related to a current foreground application, and the data further includes at least one of the following real-time data: a system time of the computer system, current status data of the computer system, and current location data of the computer system; selecting, from a plurality of machine learning models based on at least one of the real-time data, a target machine learning model that matches the real-time data; inputting the obtained data into the target machine learning model to rank importance of a plurality of applications installed in the computer system; and performing resource management based on a result of the importance ranking.
Type: Grant
Filed: April 10, 2020
Date of Patent: July 4, 2023
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Qiulin Chen, Hanbing Chen, Zhi Kang
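A minimal sketch of the select-then-rank flow described above (the model contents, context fields, and ranking functions are all made up; the abstract does not specify them). A target model is chosen by matching the real-time context, then used to rank applications so that resource management can act on the ranking.

```python
# Illustrative context-based model selection and app ranking; the real
# system uses trained machine learning models, not hand-written rules.

def select_model(models, context):
    """models: dict name -> (predicate over context, ranking function).
    Returns the first model whose predicate matches the context."""
    for name, (matches, rank) in models.items():
        if matches(context):
            return name, rank
    raise LookupError("no model matches context")

models = {
    "workday": (lambda c: 9 <= c["hour"] <= 17, lambda apps: sorted(apps)),
    "evening": (lambda c: c["hour"] > 17, lambda apps: sorted(apps, reverse=True)),
}
name, rank = select_model(models, {"hour": 20})
print(name, rank(["mail", "video", "game"]))
```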
-
Patent number: 11693680
Abstract: A computer system (10) for providing virtual computers includes a pool facility (38) for storing a pool (40) of suspended virtual computers (42) based on at least one virtual computer template (44). A provision manager (32) provides a series (46) of virtual computers (18) as a result of a series (50) of system logon requests by a user (54). The provision manager (32) includes an update facility (100), a resume facility (102) and a customization facility (104). The update facility (100) is provided for updating the at least one virtual computer template (44). The resume facility (102) is provided for resuming virtual computers from the pool (40) of suspended virtual computers (42) provided by the pool facility (38). The customization facility (104) is provided for customizing virtual computers after they are resumed from the pool (40) to provide active virtual computers.
Type: Grant
Filed: July 8, 2019
Date of Patent: July 4, 2023
Assignee: BANKVAULT PTY LTD
Inventors: Graeme Speak, Neil Richardson, Chris Hoy Poy
-
Patent number: 11693708
Abstract: In various embodiments, an isolation application determines processor assignment(s) based on a performance cost estimate. The performance cost estimate is associated with an estimated level of cache interference arising from executing a set of workloads on a set of processors. Subsequently, the isolation application configures at least one processor included in the set of processors to execute at least a portion of a first workload that is included in the set of workloads based on the processor assignment(s). Advantageously, because the isolation application generates the processor assignment(s) based on the performance cost estimate, the isolation application can reduce interference in a non-uniform memory access (NUMA) microprocessor instance.
Type: Grant
Filed: April 24, 2019
Date of Patent: July 4, 2023
Assignee: NETFLIX, INC.
Inventors: Benoit Rostykus, Gabriel Hartmann
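The abstract leaves the cost model unspecified; the sketch below uses an invented proxy (total cache footprint co-located on a socket beyond its last-level-cache capacity) and places workloads greedily on the socket where that overflow grows least. All names and numbers are hypothetical.

```python
# Hypothetical interference-aware placement sketch; the patented estimator
# is not described in the abstract, so a cache-overflow proxy stands in.

def place(workloads, sockets, capacity):
    """workloads: dict name -> cache footprint (MB).
    sockets: number of sockets; capacity: LLC size per socket (MB)."""
    load = [0] * sockets
    assignment = {}
    # place the largest footprints first, a common greedy bin-packing order
    for name, footprint in sorted(workloads.items(), key=lambda kv: -kv[1]):
        # choose the socket whose overflow (interference proxy) grows least
        best = min(range(sockets),
                   key=lambda s: max(0, load[s] + footprint - capacity))
        load[best] += footprint
        assignment[name] = best
    return assignment, load

assignment, load = place({"w1": 20, "w2": 18, "w3": 5}, sockets=2, capacity=24)
print(assignment, load)  # w2 and w3 share a socket; w1 gets the other
```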
-
Patent number: 11687363
Abstract: In one embodiment, a processing device is coupled to a plurality of memory components to monitor host read operations and host write operations from a host device coupled to the plurality of memory components. The processing device schedules, using a variable size internal command queue, a predetermined proportion of back-end processing device read and write operations as internal management traffic, proportional to the number of host read operations and the number of host write operations. The processing device then executes a subset of the host read operations and the host write operations. Following execution of that subset, the processing device executes an internal management traffic operation based on the predetermined proportion.
Type: Grant
Filed: April 22, 2020
Date of Patent: June 27, 2023
Assignee: Micron Technology, Inc.
Inventors: Fangfang Zhu, Ying Yu Tai, Ning Chen, Jiangli Zhu, Wei Wang
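A minimal sketch of proportional admission of internal (background) traffic, assuming a simple batch model that the abstract does not actually specify: after a batch of host operations executes, a fixed proportion of that batch size is released from an internal command queue.

```python
# Toy proportional scheduler for SSD-style internal management traffic
# (garbage collection, wear leveling); ratio and queue model are assumptions.
from collections import deque

def run_batch(host_ops, internal_queue, proportion):
    """Execute host ops, then release proportion * len(host_ops) internal ops."""
    executed = list(host_ops)                 # host traffic goes first
    budget = int(len(executed) * proportion)  # the predetermined proportion
    internal = [internal_queue.popleft()
                for _ in range(min(budget, len(internal_queue)))]
    return executed, internal

q = deque(["gc1", "gc2", "gc3", "wear1"])
hosts, internal = run_batch(["r1", "w1", "r2", "r3"], q, proportion=0.5)
print(hosts, internal)  # 4 host ops admit 2 internal ops
```

Tying the internal budget to observed host traffic keeps background work from starving, while capping its impact on host latency.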
-
Patent number: 11681564
Abstract: A heterogeneous computing-based task processing method includes: breaking down an artificial intelligence analysis task into one stage or multiple stages of sub-tasks, and completing, by one or more analysis function unit services corresponding to the one stage or multiple stages of sub-tasks, the artificial intelligence analysis task by means of a hierarchical data flow, wherein different stages of sub-tasks have different types, one type of sub-task corresponds to one analysis function unit service, and each analysis function unit service uniformly schedules a plurality of heterogeneous units to execute a corresponding sub-task. The disclosure also provides a heterogeneous computing-based software and hardware framework system and a heterogeneous computing-based task processing device.
Type: Grant
Filed: November 13, 2019
Date of Patent: June 20, 2023
Assignee: ZTE CORPORATION
Inventors: Fang Zhu, Xiu Li
-
Patent number: 11681602
Abstract: A performance analysis system includes a picker module and a calculation circuit. The picker module is placed in the processing device to capture a plurality of pieces of time information of a unit circuit of each of a plurality of tasks in the processing device during total execution time of processing the plurality of tasks. The calculation circuit performs an interval analysis operation on the time information. The interval analysis operation includes: calculating an overlap period between a current task and a previous task; and counting time occupied by the unit circuit during the total execution time of processing the tasks by the processing device according to a relation between the current time interval of the current task corresponding to the unit circuit and the overlap period.
Type: Grant
Filed: June 9, 2020
Date of Patent: June 20, 2023
Assignee: Shanghai Zhaoxin Semiconductor Co., Ltd.
Inventors: Lin Li, Xiaoyang Li, Zhiqiang Hui, Zheng Wang, Zongpu Qi
-
Patent number: 11669358
Abstract: Virtual network functions (VNFs) apply automation and virtualization techniques to move network functions from dedicated hardware to general-purpose hardware of an Information Technology (IT) infrastructure. A VNF may include one or more virtual machines (VMs) and virtual networks which may implement the function of a network. Systems and methods provide a processing unit, a computation module and an allocation module for VNF allocation. The computation module is configured to determine an extinction factor corresponding to a datacenter unit based on a state of the datacenter and a VNF catalogue including a plurality of VNFs. The computation module is also configured to develop an allocation model based on the determined extinction factor. The allocation module is configured to allocate a first VNF from the plurality of VNFs in the datacenter based on the allocation model.
Type: Grant
Filed: August 5, 2020
Date of Patent: June 6, 2023
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Mario Garrido Galvez, Ignacio Aldama Perez, Jose Maria Alvarez Fernandez, David Severiano Herbada, Jorge Menendez Lopez, Javier Garcia Lopez, Ruben Sevilla Giron
-
Patent number: 11669368
Abstract: In an edge computing system deployment, a system includes memory and processing circuitry coupled to the memory. The processing circuitry is configured to obtain a workflow execution plan that includes workload metadata defining a plurality of workloads associated with a plurality of edge service instances executing respectively on one or more edge computing devices. The workload metadata is translated to obtain workload configuration information for the plurality of workloads. The workload configuration information identifies a plurality of memory access configurations and service authorizations identifying at least one edge service instance authorized to access one or more of the memory access configurations. The memory is partitioned into a plurality of shared memory regions using the memory access configurations. A memory access request for accessing one of the shared memory regions is processed based on the service authorizations.
Type: Grant
Filed: December 20, 2019
Date of Patent: June 6, 2023
Assignee: Intel Corporation
Inventors: Kshitij Arun Doshi, Ned M. Smith, Francesc Guim Bernat, Timothy Verrall
-
Patent number: 11663025
Abstract: Described is a computer system for providing virtual computers. The computer system includes a pool facility for storing a pool of suspended virtual computers based on at least one virtual computer template. The computer system includes a provision manager for ensuring that a series of system logon requests results in the user being provided with a series of virtual computers that reflect applied updates. The provision manager includes an update facility, a resume facility and a customization facility. The update facility regularly applies updates to the virtual computer template to ensure that each virtual computer reflects the updates. The resume facility resumes suspended virtual computers provided by the pool facility. The customization facility customizes each virtual computer for the particular user after the virtual computer is resumed from the pool of suspended virtual computers, the customization including providing the resumed virtual computer with a user data layer.
Type: Grant
Filed: May 23, 2014
Date of Patent: May 30, 2023
Assignee: BANKVAULT PTY LTD
Inventors: Graeme Speak, Neil Richardson, Chris Hoy Poy
-
Patent number: 11663034
Abstract: A data processing apparatus has processing circuitry with transactional memory support circuitry to support execution of a transaction using transactional memory. In response to an exception mask updating instruction which updates exception mask information to enable at least one subset of exceptions which was disabled at the start of processing of a transaction, the processing circuitry permits un-aborted processing of one or more subsequent instructions of the transaction that follow the exception mask updating instruction.
Type: Grant
Filed: August 21, 2018
Date of Patent: May 30, 2023
Assignee: Arm Limited
Inventors: Matthew James Horsnell, Grigorios Magklis, Richard Roy Grisenthwaite, Stephan Diestelhorst
-
Patent number: 11663042
Abstract: A system generates electronic alerts through predictive analysis of resource conversions. The system may continuously monitor executed resource transfers to generate historical resource transfer data. Based on the historical resource transfer data, the system may generate a predicted outcome of executing transfers of resources in a first format compared to transfers of resources in a second format. The predicted outcome may then be implemented by the system to select a resource format for transfers occurring in the future and/or at specified intervals.
Type: Grant
Filed: December 30, 2021
Date of Patent: May 30, 2023
Assignee: BANK OF AMERICA CORPORATION
Inventors: Lee Ann Proud, Martha Sain McClellan, Joseph Benjamin Castinado, Kathleen Hanko Trombley
-
Patent number: 11625252
Abstract: Described embodiments provide systems and methods for selecting one or more applications to launch based in part on features of a file. A device can receive a file from a user of a client device. The device can select, according to a file type of the file, an algorithm to identify one or more features of the file. The device can determine, according to the one or more features, one or more applications to execute the file on the client device. The device can provide, to the user through the client device, a listing of the one or more applications to execute the file.
Type: Grant
Filed: June 10, 2020
Date of Patent: April 11, 2023
Assignee: Citrix Systems, Inc.
Inventors: Zongpeng Qiao, Xiaolu Chu, Xiao Zhang
-
Patent number: 11620163
Abstract: A method controls resource allocation in a data center. The method comprises identifying a first computational task to be transferred from a first set of one or more servers. The method also comprises identifying a second set of one or more servers to which the first computational task may be transferred, by using the respective current computational load of the second set of servers to determine that the second set of servers has sufficient available computational resources to implement the computational task, and by identifying network links for transferring the computational task and using the respective transmission loads of the identified links to determine that there is sufficient network capacity to transfer the computational task from the first set of servers to the second set of servers. The method then comprises transferring the computational task from the first set of servers to the second set of servers.
Type: Grant
Filed: October 5, 2016
Date of Patent: April 4, 2023
Assignee: Telefonaktiebolaget LM Ericsson (publ)
Inventors: Paola Iovanna, Francesco Giurlanda, Teresa Pepe
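The two admission checks described above (spare compute on the target servers, spare capacity on the connecting links) can be sketched as a single feasibility predicate. The field names and thresholds below are invented for illustration; the patent does not define a data model.

```python
# Simplified migration feasibility check: compute headroom AND network headroom.

def can_transfer(task, targets, links):
    """task: dict with 'cpu' demand and 'data' volume.
    targets: list of dicts with 'capacity' and 'load'.
    links: list of dicts with 'bandwidth' and 'used'."""
    spare_compute = sum(t["capacity"] - t["load"] for t in targets)
    # the path is only as good as its most loaded link
    spare_network = min(l["bandwidth"] - l["used"] for l in links)
    return spare_compute >= task["cpu"] and spare_network >= task["data"]

ok = can_transfer(
    {"cpu": 8, "data": 10},
    targets=[{"capacity": 16, "load": 10}, {"capacity": 16, "load": 12}],
    links=[{"bandwidth": 40, "used": 25}],
)
print(ok)  # True: 10 spare cores and 15 units of spare bandwidth
```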
-
Patent number: 11599391
Abstract: A method of requesting data items from storage. The method comprises allocating each of a plurality of memory controllers a unique identifier and assigning memory transaction requests for accessing data items to a memory controller according to the unique identifiers. The data items are spatially local to one another in storage. The data items are requested from the storage via the memory controllers according to the memory transaction requests and then buffered if the data items are received out of order relative to the order in which the data items are requested.
Type: Grant
Filed: October 3, 2019
Date of Patent: March 7, 2023
Assignee: Arm Limited
Inventor: Graeme Leslie Ingram
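The buffering step can be illustrated with a classic reorder buffer (this is a generic technique, not the patent's specific mechanism; sequence numbers and data values are invented). Responses carry the sequence number of the request they answer, and each item is held until every earlier item has arrived.

```python
# Toy reorder buffer: out-of-order arrivals, in-order delivery.

def reorder(responses):
    """responses: iterable of (seq, data) in arrival order.
    Yields data strictly in seq order, buffering over gaps."""
    buffer, expected = {}, 0
    for seq, data in responses:
        buffer[seq] = data
        while expected in buffer:   # release every contiguous item
            yield buffer.pop(expected)
            expected += 1

out = list(reorder([(2, "c"), (0, "a"), (1, "b"), (3, "d")]))
print(out)  # ['a', 'b', 'c', 'd']
```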
-
Patent number: 11579927
Abstract: An electronic device includes an application processor and a communication processor, the communication processor including a resource memory. The communication processor is configured to monitor an occupancy rate of the resource memory, determine whether the electronic device is in an idle state, forcibly release a network connection, clear the resource memory, and reconnect the network connection.
Type: Grant
Filed: August 11, 2020
Date of Patent: February 14, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventor: Jaehong Park
-
Patent number: 11556395
Abstract: Data race detection in multi-threaded programs can be achieved by leveraging per-thread memory protection technology in conjunction with a custom dynamic memory allocator to protect shared memory objects with unique memory protection keys, allowing data races to be turned into inter-thread memory access violations. Threads may acquire or release the keys used for accessing protected memory objects at the entry and exit points of critical sections within the program. An attempt by a thread to access a protected memory object within a critical section without the associated key triggers a protection fault, which may be indicative of a data race.
Type: Grant
Filed: January 24, 2020
Date of Patent: January 17, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sangho Lee, Adil Ahmad
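The key-gated access model can be simulated in plain code (the real mechanism relies on hardware per-thread memory protection keys such as Intel MPK; the class, key names, and fault type below are stand-ins for illustration). A thread holds a key while inside a critical section; touching a protected object without it raises a fault, which plays the role of a flagged potential race.

```python
# Conceptual simulation of key-protected shared objects; not real MPK.

class ProtectionFault(Exception):
    """Stands in for the hardware protection fault."""

class ProtectedObject:
    def __init__(self, key, value):
        self.key, self.value = key, value

    def read(self, held_keys):
        if self.key not in held_keys:   # access without the section's key
            raise ProtectionFault("possible data race")
        return self.value

obj = ProtectedObject(key="lockA", value=42)
print(obj.read(held_keys={"lockA"}))    # inside the critical section: OK
try:
    obj.read(held_keys=set())           # outside: faults like a race would
except ProtectionFault as e:
    print("fault:", e)
```

The appeal of the hardware version is that the check costs nothing on the fast path: correctly synchronized accesses never fault.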
-
Patent number: 11550632
Abstract: A mechanism is described for facilitating efficient communication and data processing across clusters of computing machines in a heterogeneous computing environment. A method includes detecting a request for processing of data using a programming framework and a programming model; facilitating interfacing between the programming framework and the programming model, wherein interfacing includes merging the programming model into the programming framework, and wherein interfacing further includes integrating the programming framework with a distribution framework hosting the programming model; and calling on the distribution framework to schedule processing of a plurality of jobs based on the request.
Type: Grant
Filed: December 24, 2015
Date of Patent: January 10, 2023
Assignee: Intel Corporation
Inventors: Yuanyuan Li, Yong Jiang, Linghyi Kong
-
Patent number: 11550638
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reducing latency in presenting content. In one aspect, a system includes a native application that presents an interactive item and a latency reduction engine. The latency reduction engine detects interaction with the interactive item that links to a first electronic resource that is different from the native application and provided by a first network domain and, in response to the detecting, reduces latency in presenting the first electronic resource, including executing a first processing thread and a second processing thread in parallel. The first processing thread requests a second electronic resource from a second network domain and loads the second electronic resource and, in response to the loading, stores a browser cookie for the second network domain. The second processing thread requests the first electronic resource and presents the first electronic resource.
Type: Grant
Filed: March 31, 2020
Date of Patent: January 10, 2023
Assignee: Google LLC
Inventors: Tuna Toksoz, Thomas Graham Price
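The parallel-thread pattern can be sketched as two overlapping waits (the thread bodies below are stand-ins that sleep instead of issuing network requests, and the cookie/page values are invented). One thread warms up the second domain and stores its cookie while the other fetches and presents the first resource, so the total wait is roughly one fetch instead of two.

```python
# Minimal two-thread latency-overlap sketch; sleeps simulate network fetches.
import threading
import time

results = {}

def warm_second_domain():
    time.sleep(0.05)                       # simulate fetching resource two
    results["cookie"] = "session=abc"      # cookie stored for domain two

def load_first_resource():
    time.sleep(0.05)                       # simulate fetching resource one
    results["page"] = "<first resource>"

t1 = threading.Thread(target=warm_second_domain)
t2 = threading.Thread(target=load_first_resource)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # both results ready after ~one sleep, not two
```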
-
Patent number: 11531565
Abstract: Techniques are described for a compiler scheduling algorithm/routine that utilizes backtracking to generate an execution schedule for a neural network computation graph using a neural network compiler intermediate representation of hardware synchronization counters. The hardware synchronization counters may be referred to as physical barriers, hardware (HW) barriers, or barriers, and their intermediate representations may be referred to as barrier tasks or barriers. Backtracking is utilized to prevent an available number of hardware barriers from being exceeded during performance of an execution schedule. An execution schedule may be a computation workload schedule for neural network inference applications. An execution schedule may also be a first in first out (FIFO) schedule.
Type: Grant
Filed: May 8, 2020
Date of Patent: December 20, 2022
Assignee: Intel Corporation
Inventor: John Brady
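The backtracking idea can be illustrated with a generic search (this is an invented toy, far simpler than a real compiler pass): tasks are scheduled in dependency order, a task's barrier stays live until all of its dependents are scheduled, and the search backtracks whenever a prefix would need more barriers than the hardware provides.

```python
# Toy backtracking scheduler under a fixed hardware-barrier budget.

def dependents_of(tasks, t):
    """Tasks that list t among their prerequisites."""
    return [d for d in tasks if t in tasks[d]]

def live_barriers(tasks, placed):
    """A task's barrier stays live until every dependent is placed."""
    return sum(1 for t in placed
               if any(d not in placed for d in dependents_of(tasks, t)))

def find_schedule(tasks, max_barriers, order=()):
    """tasks: dict name -> set of prerequisite task names.
    Returns a full schedule, or None if the barrier pool is too small."""
    if len(order) == len(tasks):
        return list(order)
    for t in tasks:
        if t in order or not tasks[t] <= set(order):
            continue            # already placed, or prerequisites unmet
        trial = order + (t,)
        if live_barriers(tasks, set(trial)) > max_barriers:
            continue            # prefix exceeds the barrier pool: prune
        result = find_schedule(tasks, max_barriers, trial)
        if result:
            return result
    return None                 # dead end: the caller backtracks

tasks = {"a": set(), "b": set(), "c": {"a"}, "d": {"b"}, "e": {"c", "d"}}
print(find_schedule(tasks, max_barriers=2))  # a complete schedule exists
print(find_schedule(tasks, max_barriers=1))  # None: two barriers go live before e
```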