Patents Examined by Hsing Chun Lin
  • Patent number: 12217085
    Abstract: An electronic device includes: a host processor configured to control an operation of the electronic device; accelerators of heterogeneous hardware types configured to exchange data with each other through direct communication; and a control unit configured to convert a command received from the host processor based on the type of each accelerator, and to transfer the result of the conversion to the corresponding accelerator among the accelerators.
    Type: Grant
    Filed: August 16, 2021
    Date of Patent: February 4, 2025
    Assignees: Samsung Electronics Co., Ltd., Seoul National University R & DB Foundation
    Inventors: Seung Wook Lee, Jangwoo Kim, Pyeongsu Park
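    Illustrative sketch (not from the patent): a minimal Python sketch of the kind of control-unit behavior the abstract describes, converting one generic host command into a per-accelerator form before dispatching it. The command fields, accelerator types, and helper names are assumptions for illustration only.
```python
# Minimal sketch: a control unit that rewrites a host command per accelerator type
# before forwarding it. All names and formats here are illustrative assumptions.

GENERIC_CMD = {"op": "matmul", "src": 0x1000, "dst": 0x2000, "size": 4096}

def convert_for(accel_type, cmd):
    """Translate a generic host command into an accelerator-specific form."""
    if accel_type == "gpu":
        return {"kernel": cmd["op"], "args": (cmd["src"], cmd["dst"], cmd["size"])}
    if accel_type == "npu":
        return {"opcode": cmd["op"].upper(), "in": cmd["src"], "out": cmd["dst"]}
    if accel_type == "fpga":
        return {"bitstream_op": cmd["op"], "dma": (cmd["src"], cmd["dst"], cmd["size"])}
    raise ValueError(f"unknown accelerator type: {accel_type}")

def control_unit(cmd, accelerators):
    """Convert the host command per accelerator type and 'transfer' the result."""
    for name, accel_type in accelerators.items():
        converted = convert_for(accel_type, cmd)
        print(f"dispatch to {name} ({accel_type}): {converted}")

control_unit(GENERIC_CMD, {"accel0": "gpu", "accel1": "npu", "accel2": "fpga"})
```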
  • Patent number: 12218812
    Abstract: Techniques are described for verifying connectivity in a virtualized computing environment comprising networked computing devices having internal endpoints that are configured with operational connectivity to external endpoints. A connectivity test component is configured to execute as a virtual resource in the virtualized computing environment, execute protocol-aware connectivity tests that enable detection of connectivity errors between the internal endpoints and external endpoints, and instantiate or access network interfaces for establishing connectivity between the internal endpoints and external endpoints. A configuration file defines connectivity types between the internal endpoints and external endpoints. Based on the configuration file, the connectivity test component is executed in the virtualized computing environment. An output is generated by the connectivity test component that is indicative of results of connectivity attempts in accordance with the configuration file.
    Type: Grant
    Filed: April 13, 2021
    Date of Patent: February 4, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Philip Joel Davies, Jonathan Phillips, Stephen Christopher Madden, Andrew Chrissie Edmonds, Steven Edward Orbell, Andrew McCurdy, Catherine Gallagher, Jason Dackins
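    Illustrative sketch (not from the patent): a rough Python sketch of connectivity checks driven by a configuration file, with a generated result report. The config schema and endpoints are assumptions, and a plain TCP connect stands in for the protocol-aware tests described in the abstract.
```python
# Minimal sketch: run connectivity checks described by a configuration file and
# report results. The config schema and endpoints are illustrative assumptions;
# a plain TCP connect stands in for protocol-aware testing.
import json
import socket

CONFIG = json.loads("""
{
  "tests": [
    {"name": "internal-to-dns", "host": "10.0.0.53", "port": 53, "timeout": 2},
    {"name": "internal-to-web", "host": "example.com", "port": 443, "timeout": 2}
  ]
}
""")

def run_tests(config):
    results = []
    for test in config["tests"]:
        try:
            with socket.create_connection((test["host"], test["port"]),
                                          timeout=test["timeout"]):
                results.append({"name": test["name"], "ok": True})
        except OSError as exc:
            results.append({"name": test["name"], "ok": False, "error": str(exc)})
    return results

if __name__ == "__main__":
    print(json.dumps(run_tests(CONFIG), indent=2))
```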
  • Patent number: 12210434
    Abstract: An apparatus and method for closed loop dynamic resource allocation.
    Type: Grant
    Filed: June 27, 2020
    Date of Patent: January 28, 2025
    Assignee: Intel Corporation
    Inventors: Bin Li, Ren Wang, Kshitij Arun Doshi, Francesc Guim Bernat, Yipeng Wang, Ravishankar Iyer, Andrew Herdrich, Tsung-Yuan Tai, Zhu Zhou, Rasika Subramanian
  • Patent number: 12175300
    Abstract: Disclosed embodiments relate to software control of graphics hardware that supports logical slots. In some embodiments, a GPU includes circuitry that implements a plurality of logical slots and a set of graphics processor sub-units that each implement multiple distributed hardware slots. Control circuitry may determine mappings between logical slots and distributed hardware slots for different sets of graphics work. Various mapping aspects may be software-controlled. For example, software may specify one or more of the following: priority information for a set of graphics work, to retain the mapping after completion of the work, a distribution rule, a target group of sub-units, a sub-unit mask, a scheduling policy, to reclaim hardware slots from another logical slot, etc. Software may also query status of the work.
    Type: Grant
    Filed: August 11, 2021
    Date of Patent: December 24, 2024
    Assignee: Apple Inc.
    Inventors: Andrew M. Havlir, Steven Fishwick, Melissa L. Velez
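    Illustrative sketch (not from the patent): a minimal Python sketch of mapping one logical slot onto distributed hardware slots while honoring a software-supplied sub-unit mask and priority. The data layout and the simple first-free-slot policy are assumptions, not the patent's actual mapping scheme.
```python
# Minimal sketch: map a logical slot onto distributed hardware slots, honoring a
# software-supplied sub-unit mask and priority. The layout and the "first free
# slot" policy are assumptions.

class SubUnit:
    def __init__(self, name, hw_slots):
        self.name = name
        self.free = list(range(hw_slots))   # indices of free hardware slots

def map_logical_slot(logical_id, work, sub_units):
    """Pick hardware slots for one logical slot according to software hints."""
    mask = work.get("sub_unit_mask", (1 << len(sub_units)) - 1)  # default: all allowed
    chosen = []
    for idx, su in enumerate(sub_units):
        if not (mask >> idx) & 1 or not su.free:
            continue                         # masked off or no free hardware slot
        chosen.append((su.name, su.free.pop(0)))
    return {"logical": logical_id, "priority": work.get("priority", 0), "hw": chosen}

units = [SubUnit("su0", 2), SubUnit("su1", 2), SubUnit("su2", 2)]
print(map_logical_slot(0, {"priority": 3, "sub_unit_mask": 0b101}, units))
```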
  • Patent number: 12175282
    Abstract: The disclosed systems and methods for intelligent heterogeneous computation are directed to receiving monitoring data and a set of training data, wherein the monitoring data includes an occupancy rate of a preprocessed data queue and a utilization factor of accelerating devices; generating a resource computation job list in accordance with the monitoring data; forwarding jobs in the resource computation job list that are to be executed on a central processing unit (CPU) to a CPU worker queue; forwarding control messages to the CPU worker queue, wherein the control messages are associated with jobs in the resource computation job list to be executed on the accelerating devices; and executing, by the accelerating devices, the jobs in the resource computation job list to be executed on the accelerating devices.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: December 24, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Anthony Anthony, Junhan Hu, Xun Xue, Robin Dawn Grosman, Nattavut Sutyanyong
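    Illustrative sketch (not from the patent): a small Python sketch of routing jobs either to a CPU worker queue or to accelerating devices based on queue occupancy and device utilization, with control messages for accelerator jobs placed on the CPU queue. The thresholds and job fields are assumptions.
```python
# Minimal sketch: build a "resource computation job list" from monitoring data and
# route each job to the CPU worker queue or to an accelerator. Thresholds and job
# fields are illustrative assumptions.
from collections import deque

def route_jobs(jobs, queue_occupancy, accel_utilization,
               occupancy_limit=0.8, util_limit=0.9):
    cpu_queue, accel_jobs = deque(), []
    for job in jobs:
        # Prefer accelerators while they have headroom and the preprocessed-data
        # queue is keeping up; otherwise fall back to the CPU worker queue.
        if accel_utilization < util_limit and queue_occupancy < occupancy_limit:
            accel_jobs.append(job)
            cpu_queue.append({"control_for": job["id"]})   # control message for the job
        else:
            cpu_queue.append(job)
    return cpu_queue, accel_jobs

cpu_q, accel = route_jobs(
    [{"id": 1, "op": "decode"}, {"id": 2, "op": "train_step"}],
    queue_occupancy=0.4, accel_utilization=0.5)
print("CPU queue:", list(cpu_q))
print("Accelerator jobs:", accel)
```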
  • Patent number: 12175299
    Abstract: A computing device and method are disclosed. The computing device includes a plurality of processing cores and a tile scheduler configured to update a cost matrix of each of the plurality of processing cores based on meta information of each of first tiles previously allocated to the plurality of processing cores and meta information of each of second tiles, and to allocate the second tiles to the plurality of processing cores using the updated cost matrix of each of the plurality of processing cores.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: December 24, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jae-Eon Jo, Hyung-Dal Kwon, Hanmin Park, Jaehyeong Sim, Seung Wook Lee
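    Illustrative sketch (not from the patent): a minimal Python sketch of a tile scheduler that updates a per-core cost estimate from the meta information of previously allocated tiles and of incoming tiles, then places each new tile on the cheapest core. The cost model (reuse of resident data lowers cost) is an assumption.
```python
# Minimal sketch: keep a per-core cost estimate, update it from the meta information
# of tiles already on each core and of incoming tiles, and allocate each new tile to
# the cheapest core. The cost model is an illustrative assumption.

def tile_cost(core_meta, tile_meta):
    reuse = len(core_meta["resident_data"] & tile_meta["inputs"])
    return tile_meta["work"] - reuse          # more reuse -> lower cost

def allocate(cores, second_tiles):
    placement = {}
    for tile in second_tiles:
        costs = [tile_cost(core, tile) + core["load"] for core in cores]
        best = min(range(len(cores)), key=lambda i: costs[i])
        placement[tile["id"]] = best
        cores[best]["load"] += tile["work"]                 # update cost inputs
        cores[best]["resident_data"] |= tile["inputs"]      # tile meta now resident
    return placement

cores = [{"load": 0, "resident_data": {"A0"}}, {"load": 0, "resident_data": {"B0"}}]
tiles = [{"id": "t0", "work": 10, "inputs": {"A0", "A1"}},
         {"id": "t1", "work": 10, "inputs": {"B0", "B1"}}]
print(allocate(cores, tiles))
```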
  • Patent number: 12159163
    Abstract: Embodiments as disclosed herein provide computing systems and methods that effectively serve to isolate processes in a computing environment. The isolation of such processes may additionally serve to substantially increase the observability of those processes, allowing granular insight into the data associated with them and into the performance of individual tasks.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: December 3, 2024
    Assignee: Q2 Software, Inc.
    Inventors: Adam David Blue, Theodor Getu Berhane, Thomas Coyne
  • Patent number: 12153544
    Abstract: For a given file type, the optimal number of threads to use to copy files of each of a number of different discrete file sizes is determined using a temporal-difference reinforcement learning approach in which file copy time is used as the feedback reward. A continuous function, corresponding to the given file type and outputting the number of threads to use to copy files of that type at any input file size, is fitted onto the optimal numbers of threads determined for the discrete file sizes.
    Type: Grant
    Filed: April 1, 2020
    Date of Patent: November 26, 2024
    Assignee: Micro Focus Software Inc.
    Inventor: Arshad Javeed
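    Illustrative sketch (not from the patent): a compact Python sketch of learning a per-size thread count with a temporal-difference-style update that uses negative copy time as the reward, then fitting a continuous curve over the discrete optima. The simulated copy times, learning constants, and quadratic fit are assumptions.
```python
# Minimal sketch: learn, per discrete file size, a value for each candidate thread
# count using a TD-style update with negative copy time as the reward, then fit a
# continuous curve over the discrete optima. All constants are illustrative.
import random
import numpy as np

SIZES_MB = [1, 10, 100, 1000]
THREADS = [1, 2, 4, 8, 16]

def simulated_copy_time(size_mb, threads):            # stand-in for a real copy
    return size_mb / min(threads, 8) + 0.05 * threads + random.random() * 0.1

def learn_optima(episodes=300, alpha=0.3, eps=0.2):
    q = {(s, t): 0.0 for s in SIZES_MB for t in THREADS}
    for _ in range(episodes):
        for s in SIZES_MB:
            t = (random.choice(THREADS) if random.random() < eps
                 else max(THREADS, key=lambda t: q[(s, t)]))
            reward = -simulated_copy_time(s, t)        # faster copy = higher reward
            q[(s, t)] += alpha * (reward - q[(s, t)])  # TD(0)-style update
    return {s: max(THREADS, key=lambda t: q[(s, t)]) for s in SIZES_MB}

optima = learn_optima()
# Fit a continuous function (file size -> thread count) over the discrete optima.
coeffs = np.polyfit(np.log10(SIZES_MB), [optima[s] for s in SIZES_MB], deg=2)
threads_for = lambda size_mb: float(np.polyval(coeffs, np.log10(size_mb)))
print(optima, round(threads_for(50), 1))
```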
  • Patent number: 12147844
    Abstract: In some aspects, a non-transitory computer readable storage medium includes instructions stored thereon that, when executed by a processor, cause the processor to detect that system software is proceeding to swap memory content of a virtual machine (VM) from memory to storage, wherein the memory is allocated to the VM; buffer the memory content; and perform alternative memory reclamation of the memory.
    Type: Grant
    Filed: March 4, 2021
    Date of Patent: November 19, 2024
    Assignee: Nutanix, Inc.
    Inventors: Carl Alan Waldspurger, Florian Anselm Johannes Schmidt, Jonathan James Davies
  • Patent number: 12131200
    Abstract: Processing may be performed in accordance with a policy to assign the roles of winner and loser between two nodes. The roles may be used in connection with deadlock resolution processing. A deadlock or potential deadlock may be detected between the two nodes performing processing for two transactions. In response to detecting the deadlock or potential deadlock, a current state may be used to determine whether to perform the deadlock resolution processing to resolve the deadlock or potential deadlock. The current state may indicate whether assignment of the winner and loser roles between the two nodes is in progress. Responsive to the current state indicating that role assignment is not in progress, the current state may be used to perform deadlock resolution processing to resolve the deadlock or potential deadlock. The current state may denote which node is the current winner.
    Type: Grant
    Filed: July 1, 2021
    Date of Patent: October 29, 2024
    Assignee: EMC IP Holding Company LLC
    Inventors: Vladimir Shveidel, Amitai Alkalay, Bar David
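    Illustrative sketch (not from the patent): a small Python sketch of consulting a shared role-assignment state before resolving a detected deadlock, deferring resolution while winner/loser assignment is in progress and otherwise aborting the loser's transaction. The state layout is an assumption.
```python
# Minimal sketch: consult a shared "role assignment" state before resolving a
# detected deadlock between two nodes. State layout and the rule of aborting the
# current loser's transaction are illustrative assumptions.
import threading

state = {"assignment_in_progress": False, "winner": "node_a", "lock": threading.Lock()}

def on_deadlock_detected(node_1, node_2, abort_txn):
    with state["lock"]:
        if state["assignment_in_progress"]:
            # Roles are being reassigned; defer resolution until that completes.
            return "deferred"
        winner = state["winner"]
        loser = node_2 if winner == node_1 else node_1
    abort_txn(loser)                     # resolve by aborting the loser's transaction
    return f"resolved: winner={winner}, aborted={loser}"

print(on_deadlock_detected("node_a", "node_b", abort_txn=lambda n: None))
```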
  • Patent number: 12131161
    Abstract: The present disclosure relates to a method for multi-core communication, an electronic device and a storage medium. The method includes controlling a plurality of cores to run; establishing a communication connection between a publishing core and a receiving core in the plurality of cores based on a communication layer; performing, by the publishing core, an operation on a topic message through calling a preset interface of the communication layer via a publish-subscribe layer; and accessing the topic message in response to the receiving core calling a preset interface of the publish-subscribe layer.
    Type: Grant
    Filed: June 22, 2021
    Date of Patent: October 29, 2024
    Assignee: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
    Inventor: Xiang Xiao
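    Illustrative sketch (not from the patent): a minimal Python sketch of a publish-subscribe layer calling into a lower communication layer. Threads and in-process queues stand in for cores and inter-core channels; the interface names are assumptions.
```python
# Minimal sketch: a publish-subscribe layer built on a lower communication layer.
# Python queues between threads stand in for the inter-core communication layer;
# the interface names are illustrative assumptions.
import queue
import threading

class CommLayer:                                   # "communication layer"
    def __init__(self):
        self.channels = {}
    def channel(self, topic):
        return self.channels.setdefault(topic, queue.Queue())

class PubSubLayer:                                 # "publish-subscribe layer"
    def __init__(self, comm):
        self.comm = comm
    def publish(self, topic, msg):                 # called by the publishing core
        self.comm.channel(topic).put(msg)
    def access(self, topic, timeout=1.0):          # called by the receiving core
        return self.comm.channel(topic).get(timeout=timeout)

bus = PubSubLayer(CommLayer())
receiver = threading.Thread(target=lambda: print("received:", bus.access("sensor")))
receiver.start()
bus.publish("sensor", {"temp_c": 42})
receiver.join()
```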
  • Patent number: 12106142
    Abstract: Representative apparatus, method, and system embodiments are disclosed for a self-scheduling processor which also provides additional functionality. Representative embodiments include a self-scheduling processor, comprising: a processor core adapted to execute a received instruction; and a core control circuit adapted to automatically schedule an instruction for execution by the processor core in response to a received work descriptor data packet. In another embodiment, the core control circuit is also adapted to schedule a fiber create instruction for execution by the processor core, to reserve a predetermined amount of memory space in a thread control memory to store return arguments, and to generate one or more work descriptor data packets to another processor or hybrid threading fabric circuit for execution of a corresponding plurality of execution threads. Event processing, data path management, system calls, memory requests, and other new instructions are also disclosed.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: October 1, 2024
    Assignee: Micron Technology, Inc.
    Inventor: Tony M. Brewer
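    Illustrative sketch (not from the patent): a behavioral Python stand-in (software, not hardware) for a core control loop that schedules work in response to received work descriptor packets, plus a fiber-create step that reserves thread-control memory and emits further work descriptors. Packet fields and sizes are assumptions.
```python
# Behavioral sketch only: models the flow of work descriptor packets through a core
# control loop. Packet fields, sizes, and addresses are illustrative assumptions.
from collections import deque

THREAD_CONTROL_MEMORY = bytearray(1024)   # models the thread control memory pool
inbox = deque([{"kind": "run", "pc": 0x100},
               {"kind": "fiber_create", "threads": 2, "ret_bytes": 64}])
outbox = []                               # work descriptors sent to other processors

def core_control(inbox):
    reserved = 0
    while inbox:
        wd = inbox.popleft()
        if wd["kind"] == "run":
            print(f"schedule instruction stream at pc={wd['pc']:#x}")
        elif wd["kind"] == "fiber_create":
            reserved += wd["ret_bytes"]   # reserve space for return arguments
            for i in range(wd["threads"]):
                outbox.append({"kind": "run", "pc": 0x200 + i})  # fan out threads
    return reserved

print("reserved bytes:", core_control(inbox), "| emitted:", outbox)
```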
  • Patent number: 12073252
    Abstract: Allocation of computational resource to requested tasks is achieved by running a scheduling operation across a plurality of schedulers, each in communication with a subset of network entities, the schedulers establishing a virtual bus. In certain embodiments, the scheduling operation is able to run continuously, allocating newly arriving task requests as resources become available.
    Type: Grant
    Filed: April 23, 2021
    Date of Patent: August 27, 2024
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Ziming Zhu
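    Illustrative sketch (not from the patent): a small Python sketch of several schedulers, each owning a subset of resources, pulling task requests from a shared queue that stands in for the virtual bus. The bus-as-shared-queue model and the capacities are assumptions.
```python
# Minimal sketch: schedulers share a "virtual bus" of task requests and allocate
# tasks as their own resources allow. The shared-queue model is an assumption.
import queue
import threading

bus = queue.Queue()                       # the virtual bus of pending task requests

def scheduler(name, capacity):
    while True:
        try:
            task = bus.get(timeout=0.2)   # keep pulling tasks while resources remain
        except queue.Empty:
            return                        # no pending requests right now
        if capacity == 0:
            bus.put(task)                 # out of resources: leave it for a peer
            return
        capacity -= 1
        print(f"{name} runs {task}")

for t in ["t1", "t2", "t3", "t4"]:
    bus.put(t)
workers = [threading.Thread(target=scheduler, args=(f"sched{i}", 2)) for i in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```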
  • Patent number: 12061931
    Abstract: Performance of a computing device can be optimized in a mixed workload environment. A management service can be configured to capture telemetry data from web applications or containerized applications and use such telemetry data to detect a scenario. Based on the detected scenario, the management service can select optimized performance settings and cause the optimized performance settings to be applied within the browser or container in which the application is deployed. Machine learning techniques can be employed to detect and define optimized performance settings for a particular scenario.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: August 13, 2024
    Assignee: Dell Products L.P.
    Inventors: Michael S. Gatson, Vivek Viswanathan Iyer
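    Illustrative sketch (not from the patent): a minimal Python sketch of classifying a workload scenario from captured telemetry and applying a preselected settings profile to the container or browser running it. Telemetry fields, scenario names, and setting values are assumptions.
```python
# Minimal sketch: classify a scenario from telemetry and apply a settings profile.
# Fields, scenario names, and settings are illustrative assumptions.
TELEMETRY = {"cpu_pct": 85, "gpu_pct": 10, "fg_app": "video_call"}

PROFILES = {                                # settings a model might have learned
    "video_call": {"cpu_governor": "performance", "camera_priority": "high"},
    "batch_build": {"cpu_governor": "performance", "fan_curve": "aggressive"},
    "idle": {"cpu_governor": "powersave"},
}

def detect_scenario(t):
    if t["fg_app"] == "video_call" and t["cpu_pct"] > 50:
        return "video_call"
    return "batch_build" if t["cpu_pct"] > 70 else "idle"

def apply_settings(target, settings):
    for key, value in settings.items():
        print(f"apply {key}={value} inside {target}")

scenario = detect_scenario(TELEMETRY)
apply_settings("container:web-conference", PROFILES[scenario])
```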
  • Patent number: 12056539
    Abstract: A data processing system comprising a plurality of processing nodes that are arranged to update a model in a parallel manner. Each of the processing nodes starts with a different set of updates to the model parameters. Each of the processing nodes is configured to perform one or more reduce-scatter collectives so as to exchange and reduce the updates. Having done so, each processing node is configured to apply the reduced set of updates to obtain an updated set of model parameters. The processing nodes then exchange the updated model parameters using an all-gather so that each processing node ends up with the same model parameters at the end of the process.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: August 6, 2024
    Assignee: GRAPHCORE LIMITED
    Inventors: Ola Torudbakken, Lorenzo Cevolani
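    Illustrative sketch (not from the patent): a plain-Python sketch of the exchange pattern in the abstract: each node starts with its own update, a reduce-scatter leaves each node one reduced shard, the shard is applied locally, and an all-gather restores identical parameters on every node. The data and the in-process "collectives" are assumptions.
```python
# Minimal sketch of the exchange pattern: reduce-scatter, apply the reduced shard,
# then all-gather so every node ends with the same parameters. Data and the plain
# Python "collectives" are illustrative.
N = 4                                             # number of processing nodes
PARAMS = [1.0] * 8                                # same starting model on every node
grads = [[(n + 1) * 0.1] * 8 for n in range(N)]   # a different update per node

def reduce_scatter(per_node):
    shard = len(per_node[0]) // N
    return [[sum(g[i] for g in per_node) for i in range(n * shard, (n + 1) * shard)]
            for n in range(N)]                    # node n keeps reduced shard n

def all_gather(shards):
    full = [x for s in shards for x in s]
    return [list(full) for _ in range(N)]         # every node gets the full vector

reduced = reduce_scatter(grads)
updated_shards = [[PARAMS[n * 2 + i] - 0.01 * g for i, g in enumerate(shard)]
                  for n, shard in enumerate(reduced)]   # apply shard n on node n
models = all_gather(updated_shards)
print(models[0] == models[-1], models[0])
```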
  • Patent number: 12050938
    Abstract: Systems, methods, and machine-readable media for monitoring a storage system and correcting demand imbalances among nodes in a cluster are disclosed. A performance manager for the storage system may detect performance imbalances that occur over a period of time. When operating below an optimal performance capacity, the manager may cause a volume to be moved from a node with a high load to a node with a lower load to achieve a preventive result. When operating at or near optimal performance capacity, the manager may cause a QOS limit to be imposed to prevent the workload from exceeding the performance capacity, to achieve a proactive result. When operating abnormally, the manager may cause a QOS limit to be imposed to throttle the workload to bring the node back within the optimal performance capacity of the node, to achieve a reactive result. These actions may be performed independently, or in cooperation.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: July 30, 2024
    Assignee: NETAPP, INC.
    Inventors: Abhishek Hiregoudar, Siu Wu, Alma Dimnaku
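    Illustrative sketch (not from the patent): a short Python sketch of choosing a preventive, proactive, or reactive action depending on where a node's load sits relative to its optimal performance capacity. The thresholds and action wording are assumptions.
```python
# Minimal sketch: pick a corrective action from a node's load relative to its
# optimal performance capacity. Thresholds and action names are assumptions.
def choose_action(load, optimal_capacity, peer_load):
    if load < 0.8 * optimal_capacity:
        if load - peer_load > 0.3 * optimal_capacity:
            return "preventive: move a volume to the lower-loaded node"
        return "no action"
    if load <= optimal_capacity:
        return "proactive: impose a QoS limit so the workload stays within capacity"
    return "reactive: throttle the workload back inside the optimal capacity"

print(choose_action(load=0.6, optimal_capacity=1.0, peer_load=0.1))
print(choose_action(load=0.95, optimal_capacity=1.0, peer_load=0.5))
print(choose_action(load=1.2, optimal_capacity=1.0, peer_load=0.5))
```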
  • Patent number: 12050933
    Abstract: Configuration and dynamic profiling of storage class memory (SCM) devices is provided. Information is retrieved that includes historical SCM device configurations, historical SCM device utilization, functional and non-functional properties of a plurality of SCM devices on a host node, current real time utilization of the plurality of SCM devices by an application workload of a customer running on the host node, and relationships between the plurality of SCM devices, needs of the customer, and resource capabilities and real time resource utilization on the host node. A configuration of each respective SCM device is determined based on retrieved information and an artificial intelligence-predicted SCM device future utilization trajectory of the customer. Each respective SCM device is dynamically configured with a set of SCM device partitions according to a corresponding SCM device profile based on the determined configuration of each respective SCM device of the plurality of SCM devices.
    Type: Grant
    Filed: July 15, 2021
    Date of Patent: July 30, 2024
    Assignee: Kyndryl, Inc.
    Inventors: Seng Chai Gan, Shikhar Kwatra, Iranna Dharmaraya Ankad, Anil Bindu Lingambudi, Komminist Weldemariam
  • Patent number: 12045655
    Abstract: Consumer threads can assist in performing progressive chunking for a data queue. For example, a consumer thread can determine a current-chunk identifier indicating a current memory chunk of an unbounded queue, where the current memory chunk is associated with a producer thread that is different from the consumer thread. The consumer thread can determine a target-chunk identifier indicating a target memory chunk to which the producer thread is to write a data item. In response to determining that the target-chunk identifier is greater than the current-chunk identifier, the consumer thread can append a new memory chunk to the unbounded queue for use as the target memory chunk by the producer thread.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: July 23, 2024
    Assignee: RED HAT, INC.
    Inventors: Daniele Zonca, Francesco Nigro
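    Illustrative sketch (not from the patent): a minimal Python sketch of the consumer-assist idea: the consumer compares the producer's target-chunk identifier with the queue's current-chunk identifier and appends new memory chunks while the producer is ahead. The chunk size and list-of-lists representation are assumptions.
```python
# Minimal sketch: a consumer appends memory chunks to an unbounded queue when the
# producer's target chunk is ahead of the current chunk. Chunk capacity and the
# list-of-lists representation are illustrative assumptions.
CHUNK_CAPACITY = 4

class UnboundedQueue:
    def __init__(self):
        self.chunks = [[]]               # chunk 0 exists from the start
        self.current_chunk_id = 0

    def target_chunk_id(self, produced_count):
        return produced_count // CHUNK_CAPACITY     # where the next item must go

def consumer_assist(q, produced_count):
    target = q.target_chunk_id(produced_count)
    while target > q.current_chunk_id:              # producer is ahead of the queue
        q.chunks.append([])                         # consumer appends the chunk
        q.current_chunk_id += 1

q = UnboundedQueue()
consumer_assist(q, produced_count=9)                # item 9 belongs in chunk 2
print(q.current_chunk_id, len(q.chunks))            # -> 2 3
```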
  • Patent number: 11880762
    Abstract: A computer-implemented method, a computer program product, and a computer processing system are provided for selecting from among multiple Graphics Processing Unit (GPU) execution modes for a Neural Network (NN) having a size greater than a threshold size. The multiple GPU execution modes include a normal memory mode, an Out-of-Core (OoC) execution mode, and a Unified Memory (UM) mode. The method includes starting execution of the NN in the UM mode and measuring the memory usage of each of the layers of the NN. The method further includes selecting an execution mode based on the memory usage of all of the layers.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: January 23, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Yasushi Negishi, Haruki Imai, Taro Sekiyama, Tung D. Le, Kiyokuni Kawachiya
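    Illustrative sketch (not from the patent): a small Python sketch of the selection step: run first in unified-memory (UM) mode, record per-layer memory use, and choose an execution mode from the totals. The thresholds and the way memory is "measured" here are assumptions.
```python
# Minimal sketch of the selection step: given per-layer memory measured during an
# initial UM-mode run, choose an execution mode. Thresholds are assumptions.
GPU_MEMORY_BYTES = 16 * 2**30

def select_mode(per_layer_bytes, gpu_bytes=GPU_MEMORY_BYTES):
    total = sum(per_layer_bytes)
    peak_layer = max(per_layer_bytes)
    if total <= gpu_bytes:
        return "normal"            # everything fits in device memory
    if peak_layer <= gpu_bytes:
        return "out-of-core"       # swap layer working sets in and out explicitly
    return "unified-memory"        # fall back to UM paging for oversized layers

# Pretend these were measured during the initial UM-mode run.
measured = [2 * 2**30, 6 * 2**30, 5 * 2**30, 7 * 2**30]
print(select_mode(measured))       # -> "out-of-core" for this 20 GiB total
```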
  • Patent number: 11836534
    Abstract: One or more processors receive resource type and capability information and activity information of workloads of a domain. A first model is generated and trained to map the resource information to the activity information of the domain workloads. The activity information is decomposed into a first set of activity core elements (ACEs). The one or more processors generate a second model, wherein the second model is trained to predict a set of resource types and resource capabilities of the respective resource types based on an input of the first set of ACEs decomposed from the activity information of the workloads of the domain. The one or more processors receive a second set of ACEs that are decomposed from activities associated with an unprecedented workload, and the one or more processors generate a predicted set of resources to perform the second set of ACEs.
    Type: Grant
    Filed: January 26, 2021
    Date of Patent: December 5, 2023
    Assignee: International Business Machines Corporation
    Inventors: Michal Paluch, William Carbone, Erik Rueger, Nicolò Sgobba