Patents Examined by Abu Zar Ghaffari
-
Patent number: 11972299
Abstract: A sharable resource of a first user's environment is identified. The sharable resource is configured as sharable in a shared computer environment. A matching resource that is sufficiently similar to the sharable resource is located. The matching resource is used by pre-existing users of the shared computer environment. Agreement from the pre-existing users for the first user to access the matching resource is obtained. The first user is then provided access to the matching resource.
Type: Grant
Filed: June 22, 2021
Date of Patent: April 30, 2024
Assignee: International Business Machines Corporation
Inventors: Jingdong Sun, Roger Mittelstadt, Rafal Konik, Jessica R. Eidem
-
Patent number: 11934869
Abstract: This technology is directed to facilitating scalable and secure data collection. In particular, scalability of data collection is enabled in a secure manner by, among other things, abstracting a connector(s) to a pod(s) and/or container(s) that executes separate from other data-collecting functionality. For example, an execution manager can initiate deployment of a collect coordinator on a first pod associated with a first job and deployment of a first connector on a second pod associated with a second job separate from the first job of a container-managed platform. The collect coordinator can provide a data collection task to the first connector deployed on the second pod of the second job. The first connector can then obtain the set of data from the data source and provide the set of data to the collect coordinator for providing the set of data to a remote source.
Type: Grant
Filed: June 24, 2022
Date of Patent: March 19, 2024
Assignee: Splunk Inc.
Inventors: Denis Vergnes, Zhimin Liang
-
Patent number: 11934872
Abstract: A system is provided for monitoring and controlling program flow in an event-triggered system. A program (e.g., application, algorithm, routine, etc.) may be organized into operational units (e.g., nodes executed by one or more processors), each of which is tasked with executing one or more respective events (e.g., tasks) within the larger program. At least some of the events of the larger program may be successively executed in a flow, one after another, using triggers sent directly from one node to the next. In addition, the system of the present disclosure may include a manager that may exchange communications with the nodes to monitor or assess a status of the system (e.g., determine when a node has completed an event) or to control or trigger a node to initiate an event.
Type: Grant
Filed: March 5, 2020
Date of Patent: March 19, 2024
Assignee: NVIDIA Corporation
Inventors: Peter Alexander Boonstoppel, Michael Cox, Daniel Perrin
-
Patent number: 11934866
Abstract: A method includes obtaining an operator parameter and a processor parameter corresponding to an operator operation, creating N scheduling policies based on the operator parameter and the processor parameter, where the N scheduling policies are classified into M scheduling policy subsets, and each scheduling policy subset includes at least one scheduling policy, filtering the M scheduling policy subsets based on the operator parameter and the processor parameter, to obtain K feasible scheduling policies, where the K feasible scheduling policies are optimal scheduling policies of K feasible scheduling subsets in the M scheduling policy subsets, inputting the operator parameter and the K feasible scheduling policies into a cost model to obtain K operator operation costs, where N, M, and K are natural numbers, and determining, based on a target requirement and the K operator operation costs, an optimal scheduling policy used for the operator operation.
Type: Grant
Filed: January 8, 2021
Date of Patent: March 19, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Lin Li, Hao Ding, Kang Yang, Dengcheng Zhang
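The abstract above describes a generate-filter-cost-select pipeline for picking a scheduling policy. The sketch below is an illustrative model of that flow, not the patented method: candidate policies are generated, infeasible ones are filtered against the processor and operator parameters, a cost model scores the survivors, and the cheapest policy wins. All names (`Policy`, `cost_model`, the linear cost formula) are assumptions for illustration.

```python
# Hypothetical sketch: feasibility-filter candidate scheduling policies,
# score the survivors with a toy cost model, and pick the cheapest.
from dataclasses import dataclass

@dataclass
class Policy:
    tile_size: int      # how the operator's work is tiled (assumed parameter)
    parallelism: int    # number of cores the policy assumes

def is_feasible(policy, num_cores, workload):
    # A policy is feasible only if it fits the processor and the operator.
    return policy.parallelism <= num_cores and policy.tile_size <= workload

def cost_model(policy, workload):
    # Toy cost: per-core work plus a fixed overhead per extra core.
    return workload / policy.parallelism + 2.0 * (policy.parallelism - 1)

def select_policy(policies, num_cores, workload):
    feasible = [p for p in policies if is_feasible(p, num_cores, workload)]
    if not feasible:
        raise ValueError("no feasible scheduling policy")
    return min(feasible, key=lambda p: cost_model(p, workload))

candidates = [Policy(tile_size=32, parallelism=n) for n in (1, 2, 4, 8, 16)]
best = select_policy(candidates, num_cores=8, workload=64)
```

With these toy numbers the 16-way policy is filtered out as infeasible, and the cost model trades per-core work against parallelization overhead to pick a middle ground.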
-
Patent number: 11928516
Abstract: A method for managing client resources by receiving a desired load factor representing the number of instructions being executed per second (IOPS) to implement an application on a set of cores of a client device, based on the desired load factor and a latency factor, determining a maximum number of IOPS that can be executed by the cores of the client device before reaching system saturation, determining a pattern of the IOPS being executed on the set of cores based on historical IOPS information for the latency factor, and based on the historical IOPS information, determining to execute the IOPS on a subset of the set of cores.
Type: Grant
Filed: April 27, 2021
Date of Patent: March 12, 2024
Assignee: EMC IP Holding Company LLC
Inventors: Jean-Pierre Bono, Thomas Fridtjof Dahl
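The capacity check described above can be sketched as follows. This is an illustrative model only: the linear saturation formula and all names (`max_iops_before_saturation`, `cores_needed`) are assumptions, not taken from the patent, which relies on historical IOPS patterns rather than a closed-form model.

```python
# Hypothetical sketch: estimate the IOPS ceiling from a latency factor,
# then size the smallest core subset that covers the desired load.
def max_iops_before_saturation(per_core_iops, num_cores, latency_factor):
    # Toy model: each core delivers fewer IOPS as the latency factor grows.
    return int(num_cores * per_core_iops / latency_factor)

def cores_needed(desired_iops, per_core_iops, latency_factor, num_cores):
    ceiling = max_iops_before_saturation(per_core_iops, num_cores, latency_factor)
    if desired_iops > ceiling:
        raise ValueError("desired load exceeds saturation point")
    effective_per_core = int(per_core_iops / latency_factor)
    # Ceiling division: smallest subset of cores covering the desired load.
    return max(1, -(-desired_iops // effective_per_core))

# 10 cores at 50k IOPS each with latency factor 2 -> 250k IOPS ceiling.
subset = cores_needed(desired_iops=100_000, per_core_iops=50_000,
                      latency_factor=2, num_cores=10)
```

Running the load on the smallest sufficient subset, rather than all cores, is what leaves the remaining cores free for other work.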
-
Patent number: 11915041
Abstract: An artificial intelligence (AI) sequencer is provided. The AI sequencer includes a queue manager configured to manage a plurality of queues for maintaining data of AI jobs, wherein an AI job includes processing of one or more AI functions; a scheduler for scheduling execution of data maintained by the plurality of queues; a plurality of job processing units (JPUs), wherein each of the plurality of JPUs is configured to at least generate an execution sequence for an AI job; and a plurality of dispatchers connected to a plurality of AI accelerators, wherein each of the plurality of dispatchers is configured to dispatch at least a function of the AI job to an AI accelerator, wherein a function is dispatched to an AI accelerator at an order determined by an execution sequence created for a respective AI job.
Type: Grant
Filed: September 11, 2020
Date of Patent: February 27, 2024
Assignee: NEUREALITY LTD.
Inventors: Moshe Tanach, Yossi Kasus
-
Patent number: 11915061
Abstract: A datacenter includes a datacenter efficiency management system coupled to node devices. For each of the node devices and based on a power consumption associated with that node device and a performance associated with that node device, the datacenter efficiency management system generates a node group ranking that it uses to group subsets of the node devices into respective homogenous node groups, and then deploys a respective workload on at least one node device in each of the homogenous node groups. Based on at least one of a node workload bandwidth, a node power consumption, and a node health of each node device on which a workload was deployed, the datacenter efficiency management system then generates a workload performance efficiency ranking of the node devices that it then uses to migrate at least one workload between the node devices.
Type: Grant
Filed: October 26, 2021
Date of Patent: February 27, 2024
Assignee: Dell Products L.P.
Inventors: Rishi Mukherjee, Ravishankar Kanakapura Nanjundaswamy, Prasoon Sinha, Raveendra Babu Madala
-
Patent number: 11915037
Abstract: In some embodiments a distributed computing system is provided that includes a plurality of different feature modules and a matching engine. The different feature modules each provide different processing for handling parent requests and submitting, to the matching engine, commands for child data transaction requests that are associated with the parent request.
Type: Grant
Filed: July 30, 2021
Date of Patent: February 27, 2024
Assignee: NASDAQ, INC.
Inventors: Kyle Prem, John Vaccaro, Hemant Thombre
-
Patent number: 11907756
Abstract: A graphics processing apparatus that includes at least a memory device and an execution unit coupled to the memory. The memory device can store a command buffer with at least one command that is dependent on completion of at least one other command. The command buffer can include a jump command that causes a jump to a location in the command buffer to identify any unscheduled command. The execution unit is to jump to a location in the command buffer based on execution of the jump command. The execution unit is to perform one or more jumps to one or more locations in the command buffer to attempt to schedule a command with dependency on completion of at least one other command until the command with a dependency on completion of at least one other command is scheduled.
Type: Grant
Filed: February 20, 2020
Date of Patent: February 20, 2024
Assignee: Intel Corporation
Inventors: Bartosz Dunajski, Brandon Fliflet, Michal Mrozek
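The jump-based retry loop in this abstract can be modeled in software: commands whose dependencies are unmet are skipped, and a "jump" revisits the buffer until every command is scheduled. This is a minimal illustrative analogue of the described behavior, not the hardware mechanism; the command tuples and names are invented for the example.

```python
# Hypothetical sketch: repeatedly scan a command buffer, scheduling each
# command only once its dependencies have completed. Each full scan stands
# in for one "jump" back into the buffer.
def run_command_buffer(commands):
    # commands: list of (name, set_of_dependency_names)
    scheduled, order = set(), []
    while len(scheduled) < len(commands):
        progress = False
        for name, deps in commands:
            if name not in scheduled and deps <= scheduled:
                scheduled.add(name)
                order.append(name)
                progress = True
        if not progress:
            # No command became schedulable in a full pass.
            raise RuntimeError("cyclic dependency in command buffer")
    return order

buf = [("draw", {"upload"}), ("upload", set()), ("present", {"draw"})]
order = run_command_buffer(buf)
```

Note that "draw" appears before "upload" in the buffer but is scheduled after it, which is exactly the out-of-order case the jump command exists to handle.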
-
Patent number: 11907105
Abstract: A device having a Graphics Processing Unit (GPU) may be configured to selectively run in a normal mode or a timing testing mode. In the timing testing mode the device is configured to disrupt timing of processing that takes place on the GPU while running an application with the GPU and test the application for errors in device hardware component and/or software component synchronization while the device is running in the timing testing mode.
Type: Grant
Filed: June 21, 2021
Date of Patent: February 20, 2024
Assignee: SONY INTERACTIVE ENTERTAINMENT LLC
Inventors: Mark Evan Cerny, David Simpson
-
Patent number: 11907761
Abstract: An electronic apparatus including a storage; a processor configured to execute a program including an OS and application program stored in the storage; and a memory configured to load and store the program based on execution of the program, the processor being configured to control the OS to, based on an execution of a process of the application program, create a user stack corresponding to at least one task of the process and store data of the user stack in a predetermined area of the memory; based on a predetermined event being generated in a state in which the process is executed, stop change in the data of the predetermined area; discard an area in which data used in a procedure of initiating the process among the stored data is stored, in the predetermined area; and based on a work corresponding to the predetermined event being completed, continuously perform the operation.
Type: Grant
Filed: September 3, 2019
Date of Patent: February 20, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Jihun Jung, Jusun Song, Jaehoon Jeong
-
Patent number: 11900156
Abstract: A processor includes a compute fabric and a controller. The compute fabric includes an array of compute nodes and interconnects that configurably connect the compute nodes. The controller is configured to configure at least some of the compute nodes and interconnects in the compute fabric to execute specified code instructions, and to send to the compute fabric multiple threads that each executes the specified code instructions. A compute node among the compute nodes is configured to execute a code instruction for a first thread, and to transfer a result of the code instruction within the fabric, for use as an operand by a second thread, different from the first thread.
Type: Grant
Filed: September 9, 2020
Date of Patent: February 13, 2024
Assignee: SPEEDATA LTD.
Inventors: Yoav Etsion, Dani Voitsechov
-
Patent number: 11886934
Abstract: A data processing system comprising a plurality of processing nodes, each comprising at least one memory configured to store an array of data items, wherein each of the plurality of processing nodes is configured to execute compute instructions during a compute phase and following a precompiled synchronisation barrier, enter at least one exchange phase. During the at least one exchange phase, a series of collective operations are carried out. Each processing node is configured to perform a reduce-scatter collective in at least one first dimension. Using the results of the reduce-scatter collective, each processing node performs an allreduce in a second dimension. The processing nodes then perform an all-gather collective in the at least one first dimension using the results of the allreduce.
Type: Grant
Filed: July 14, 2020
Date of Patent: January 30, 2024
Assignee: GRAPHCORE LIMITED
Inventors: Lorenzo Cevolani, Fabian Tschopp, Ola Torudbakken
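The collective sequence described above (reduce-scatter in one dimension, allreduce in a second, all-gather back in the first) can be simulated on a small 2-D grid of nodes. The sketch below models the data flow only, with nodes as nested lists and no real communication; it is an illustrative walk-through, not the patented implementation.

```python
# Hypothetical simulation of reduce-scatter -> allreduce -> all-gather
# on a 2x2 grid of nodes, each holding a small array.
def reduce_scatter(arrays):
    # Each of n nodes ends up with one reduced chunk of the elementwise sum.
    n = len(arrays)
    chunk = len(arrays[0]) // n
    summed = [sum(vals) for vals in zip(*arrays)]
    return [summed[i * chunk:(i + 1) * chunk] for i in range(n)]

def allreduce(chunks):
    # Every node ends up with the elementwise sum of all nodes' chunks.
    summed = [sum(vals) for vals in zip(*chunks)]
    return [list(summed) for _ in chunks]

def all_gather(chunks):
    # Every node ends up with the concatenation of all chunks.
    full = [x for c in chunks for x in c]
    return [list(full) for _ in chunks]

# First dimension = rows, second dimension = columns.
grid = [[[1, 2], [3, 4]],   # row 0: two nodes, each with a 2-element array
        [[5, 6], [7, 8]]]   # row 1

scattered = [reduce_scatter(row) for row in grid]            # within rows
cols = [allreduce([scattered[r][c] for r in range(2)]) for c in range(2)]
reduced = [[cols[c][r] for c in range(2)] for r in range(2)]  # column-wise
result = [all_gather(row) for row in reduced]                # back within rows
```

After the final all-gather, every node holds the full globally reduced array, which is the same outcome as a single allreduce over all four nodes but with each dimension only ever exchanging chunk-sized messages.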
-
Patent number: 11875192
Abstract: A system and method capable of efficiently using and operating resources and allowing a cluster satisfying requirements of functions/services provided to terminals to be configured of multi-access edge computing (MEC) servers are provided.
Type: Grant
Filed: January 16, 2020
Date of Patent: January 16, 2024
Assignee: Nippon Telegraph and Telephone Corporation
Inventors: Kotaro Ono, Naoki Higo, Takuma Tsubaki, Yusuke Urata, Ryota Ishibashi, Kenta Kawakami, Takeshi Kuwahara
-
Patent number: 11875152
Abstract: A method for generating a thread queue, that includes obtaining, by a user space file system, central processing unit (CPU) socket data, and based on the CPU socket data, generating a plurality of thread handles for a plurality of cores, ordering the plurality of thread handles, in the thread queue, for a first core of the plurality of cores, and saving the thread queue to a region of shared memory.
Type: Grant
Filed: October 30, 2020
Date of Patent: January 16, 2024
Assignee: EMC IP HOLDING COMPANY LLC
Inventor: Adrian Michaud
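The queue construction described above can be sketched as follows: generate a handle per core, then order the handles for a given core using the socket data. The same-socket-first ordering heuristic and every name in this sketch (`build_thread_queue`, the `layout` dict) are assumptions for illustration, not details from the patent.

```python
# Hypothetical sketch: order per-core thread handles for a given core so
# that handles on the same CPU socket come first (cheaper cross-thread
# communication), then save/return the resulting queue.
def build_thread_queue(socket_of_core, first_core):
    # socket_of_core: dict core_id -> socket_id (the "CPU socket data")
    handles = {core: f"thread-{core}" for core in socket_of_core}
    home = socket_of_core[first_core]
    def key(core):
        # Same-socket cores sort first; core id breaks ties deterministically.
        return (socket_of_core[core] != home, core)
    return [handles[c] for c in sorted(socket_of_core, key=key)]

# Two sockets: cores 0-1 on socket 0, cores 2-3 on socket 1.
layout = {0: 0, 1: 0, 2: 1, 3: 1}
queue = build_thread_queue(layout, first_core=2)
```

For `first_core=2` the queue leads with the socket-1 handles, so work dequeued by that core lands on NUMA-local threads before spilling to the remote socket.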
-
Patent number: 11853781
Abstract: A system and method that provides inter-application relevance management for resources being brokered by an application virtualization platform. A described platform includes a memory configured to store a set of relevance rules for applications hosted by the application virtualization platform, wherein each relevance rule specifies a relevance setting between a first application and a second application. Also included is a processor coupled to the memory and configured to broker resources for the application virtualization platform to avoid conflict between the applications.
Type: Grant
Filed: April 29, 2022
Date of Patent: December 26, 2023
Assignee: Citrix Systems, Inc.
Inventors: Fuping Zhou, Nicky Shi
-
Patent number: 11847489
Abstract: Techniques are disclosed relating to a shared control bus for communicating between primary control circuitry and multiple distributed graphics processor units. In some embodiments, a set of multiple processor units includes first and second graphics processors, where the first and second graphics processors are coupled to access graphics data via respective memory interfaces. A shared workload distribution bus is used to transmit control data that specifies graphics work distribution to the multiple graphics processing units. The shared workload distribution bus may be arranged in a chain topology, e.g., to connect the workload distribution circuitry to the first graphics processor and connect the first graphics processor to the second graphics processor such that the workload distribution circuitry communicates with the second graphics processor via the shared workload distribution bus connection to the first graphics processor.
Type: Grant
Filed: January 26, 2021
Date of Patent: December 19, 2023
Assignee: Apple Inc.
Inventors: Max J. Batley, Jonathan M. Redshaw, Ji Rao, Ali Rabbani Rankouhi
-
Patent number: 11842211
Abstract: A user information collection system may include a service provisioning manager configured to manage provisioning of a VDI service provided from a VDI service provider; a charging manager configured to manage charging information according to a use of the VDI service by a user; a policy manager configured to manage a policy for the VDI service; a user manager configured to manage information of the user using the VDI service; a VDI service lifecycle manager configured to manage a lifecycle of the VDI service provided from the VDI service provider; and a multi-tenant connection manager configured to manage connection infrastructure information between at least one of a cloud environment for providing the VDI service and external software as a service (SaaS) and the VDI service provider.
Type: Grant
Filed: June 13, 2022
Date of Patent: December 12, 2023
Assignee: PIAMOND CORP.
Inventor: Doo Geon Hwang
-
Patent number: 11842217
Abstract: Mechanisms for resource isolation allow tenants executing in a multi-tenant software container to be isolated in order to prevent resource starvation by one or more of the tenants. Mechanisms for dependency isolation may be utilized to prevent one tenant executing in a multi-tenant software container from using another tenant in the same container in a manner that requires co-tenancy. Mechanisms for security isolation may be utilized to prevent one tenant in a multi-tenant software container from accessing protected data or functionality of another tenant. Mechanisms for fault isolation may be utilized to prevent tenants in a multi-tenant software container from causing faults or other types of errors that affect other tenants executing in the same software container.
Type: Grant
Filed: July 29, 2021
Date of Patent: December 12, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Keian Christopher, Kevin Michael Beranek, Christopher Keakini Kaulia, Vijay Ravindra Kulkarni, Samuel Leonard Moniz, Kyle Bradley Peterson, Ajit Ashok Varangaonkar, Jun Xu
-
Patent number: 11836507
Abstract: Systems and methods for pre-loading applications with a constrained memory budget and prioritizing the applications based on contextual information are described. An Information Handling System (IHS) may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution by the processor, cause the IHS to: collect user context information and system context information, detect a triggering event based upon the user context information and the system context information, identify a memory budget for pre-loading one or more applications, and select the one or more applications with one or more settings configured to maintain a memory usage for the pre-loading below the memory budget.
Type: Grant
Filed: June 18, 2020
Date of Patent: December 5, 2023
Assignee: Dell Products L.P.
Inventors: Vivek Viswanathan Iyer, Michael S. Gatson
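The selection step described in this last abstract (choosing which applications to pre-load while keeping memory usage under the budget) can be sketched with a simple greedy pass. The greedy-by-priority strategy and every name here are illustrative assumptions; the patent derives priority from user and system context rather than a fixed score.

```python
# Hypothetical sketch: greedily pre-load the highest-priority applications
# whose combined footprint stays within the memory budget.
def select_apps_to_preload(apps, budget_mb):
    # apps: list of (name, memory_mb, priority); higher priority is better.
    chosen, used = [], 0
    for name, mem, _prio in sorted(apps, key=lambda a: -a[2]):
        if used + mem <= budget_mb:
            chosen.append(name)
            used += mem
    return chosen, used

apps = [("browser", 800, 9), ("ide", 1200, 7),
        ("mail", 300, 8), ("game", 2000, 3)]
chosen, used = select_apps_to_preload(apps, budget_mb=2000)
```

With a 2000 MB budget, the browser and mail client fit but the IDE would overshoot, so it is skipped even though lower-priority candidates remain unconsidered; a knapsack-style solver could squeeze the budget tighter at the cost of more computation.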