Patents Examined by Abu Zar Ghaffari
-
Patent number: 11836507
Abstract: Systems and methods for pre-loading applications with a constrained memory budget and prioritizing the applications based on contextual information are described. An Information Handling System (IHS) may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution by the processor, cause the IHS to: collect user context information and system context information, detect a triggering event based upon the user context information and the system context information, identify a memory budget for pre-loading one or more applications, and select the one or more applications with one or more settings configured to maintain a memory usage for the pre-loading below the memory budget.
Type: Grant
Filed: June 18, 2020
Date of Patent: December 5, 2023
Assignee: Dell Products L.P.
Inventors: Vivek Viswanathan Iyer, Michael S. Gatson
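The selection step described above can be sketched as a greedy pick of the highest-priority applications that still fit under the budget. This is a minimal illustration, not the claimed method; the data model (`priority`, `memory_mb`) and the greedy strategy are assumptions for the example.

```python
def select_apps_for_preload(candidates, memory_budget_mb):
    """Pick apps to pre-load, highest contextual priority first,
    without exceeding the memory budget (hypothetical data model)."""
    selected = []
    used = 0
    for app in sorted(candidates, key=lambda a: a["priority"], reverse=True):
        if used + app["memory_mb"] <= memory_budget_mb:
            selected.append(app["name"])
            used += app["memory_mb"]
    return selected, used

apps = [
    {"name": "mail",    "memory_mb": 300, "priority": 0.9},
    {"name": "browser", "memory_mb": 900, "priority": 0.8},
    {"name": "ide",     "memory_mb": 700, "priority": 0.6},
]
# Browser (900 MB) would overflow the 1100 MB budget after mail is chosen,
# so the selector skips it and takes the IDE instead.
names, used = select_apps_for_preload(apps, memory_budget_mb=1100)
```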
-
Patent number: 11836526
Abstract: A system receives a time series of data values from instrumented software executing on an external system. Each data value corresponds to a metric of the external system. The system stores a level value representing a current estimate of the time series and a trend value representing a trend in the time series. The level and trend values are based on data in a window having a trailing value. In response to receiving a most recent value, the system updates the level value and the trend value to add an influence of the most recent value and remove an influence of the trailing value. The system forecasts based on the updated level and trend values, and in response to determining that the forecast indicates the potential resource shortage event, takes action.
Type: Grant
Filed: May 18, 2021
Date of Patent: December 5, 2023
Assignee: Splunk Inc.
Inventor: Joseph Ari Ross
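The level/trend mechanism above resembles Holt's double exponential smoothing. The sketch below shows plain Holt updates and a linear forecast; it omits the patent's windowed removal of the trailing value's influence, and the smoothing constants and capacity threshold are arbitrary choices for the example.

```python
def holt_update(level, trend, x, alpha=0.5, beta=0.3):
    """One Holt (level + trend) update for a new observation x."""
    new_level = alpha * x + (1 - alpha) * (level + trend)
    new_trend = beta * (new_level - level) + (1 - beta) * trend
    return new_level, new_trend

def forecast(level, trend, horizon):
    """Linear projection: current level plus horizon steps of trend."""
    return level + horizon * trend

level, trend = 100.0, 0.0
for x in [102, 104, 106, 108]:          # steadily rising metric
    level, trend = holt_update(level, trend, x)

projected = forecast(level, trend, horizon=10)
capacity = 120.0                        # hypothetical resource limit
shortage_predicted = projected > capacity
```

Because the series rises steadily, the trend term grows and the ten-step forecast crosses the capacity threshold, which is the trigger for taking action in the system described.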
-
Patent number: 11836525
Abstract: A system includes a memory, a processor in communication with the memory, and an operating system (“OS”) executing on the processor. The processor belongs to a processor socket. The OS is configured to pin a workload of a plurality of workloads to the processor belonging to the processor socket. Each respective processor belonging to the processor socket shares a common last-level cache (“LLC”). The OS is also configured to measure an LLC occupancy for the workload, reserve the LLC occupancy for the workload thereby isolating the workload from other respective workloads of the plurality of workloads sharing the processor socket, and maintain isolation by monitoring the LLC occupancy for the workload.
Type: Grant
Filed: December 17, 2020
Date of Patent: December 5, 2023
Assignee: Red Hat, Inc.
Inventors: Orit Wasserman, Marcel Apfelbaum
-
Patent number: 11824784
Abstract: Various approaches for implementing platform resource management are described. In an edge computing system deployment, an edge computing device includes processing circuitry coupled to a memory. The processing circuitry is configured to obtain, from an orchestration provider, a service-level objective (SLO) or service-level agreement (SLA) that defines usage of an accessible feature of the edge computing device by a container executing on a virtual machine within the edge computing system. A computation model is retrieved based on at least one key performance indicator (KPI) specified in the SLO. The defined usage of the accessible feature is mapped to a plurality of feature controls using the retrieved computation model. The plurality of feature controls is associated with platform resources of the edge computing device that are pre-allocated to the container. The usage of the platform resources allocated to the container is monitored using the plurality of feature controls.
Type: Grant
Filed: December 20, 2019
Date of Patent: November 21, 2023
Assignee: Intel Corporation
Inventors: Brian Andrew Keating, Marcin Spoczynski, Lokpraveen Mosur, Kshitij Arun Doshi, Francesc Guim Bernat
-
Patent number: 11816507
Abstract: Techniques for implementing an infrastructure orchestration service are described. A configuration file for a deployment to a first execution target and a second execution target can be received. A first safety plan can be generated for the first execution target that comprises a first list of resources and operations associated with deployment at the first execution target. Approval of the first safety plan can be received. A second safety plan can be generated for the second execution target that comprises a second list of resources and operations associated with deployment at the second execution target. A determination can be made whether the second safety plan is a subset of the first safety plan. If the determination is that the second safety plan is a subset of the first safety plan, the second safety plan can automatically be approved and transmitted to the second execution target for deployment.
Type: Grant
Filed: September 21, 2020
Date of Patent: November 14, 2023
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventors: Eric Tyler Barsalou, Nathaniel Martin Glass
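The subset check at the core of this approach can be illustrated with set containment over (resource, operation) pairs. A minimal sketch, assuming plans can be represented as sets of such pairs:

```python
def can_auto_approve(approved_plan, new_plan):
    """A later safety plan may be auto-approved when its
    (resource, operation) pairs are a subset of an already
    human-approved plan."""
    return set(new_plan) <= set(approved_plan)

# Hypothetical plans for two execution targets.
first_plan = {("vm", "create"), ("disk", "attach"), ("dns", "update")}
second_plan = {("vm", "create"), ("disk", "attach")}

auto_ok = can_auto_approve(first_plan, second_plan)  # second ⊆ first
```

The payoff is that only the first, broadest plan needs explicit human approval; later targets whose plans introduce nothing new can proceed automatically.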
-
Patent number: 11809908
Abstract: A data processing system comprises a pool of reconfigurable data flow resources and a runtime processor. The pool of reconfigurable data flow resources includes arrays of physical configurable units and memory. The runtime processor includes logic to receive a plurality of configuration files for user applications. The configuration files include configurations of virtual data flow resources required to execute the user applications. The runtime processor also includes logic to allocate physical configurable units and memory in the pool of reconfigurable data flow resources to the virtual data flow resources and load the configuration files to the allocated physical configurable units. The runtime processor further includes logic to execute the user applications using the allocated physical configurable units and memory.
Type: Grant
Filed: July 7, 2020
Date of Patent: November 7, 2023
Assignee: SambaNova Systems, Inc.
Inventors: Ravinder Kumar, Conrad Alexander Turlik, Arnav Goel, Qi Zheng, Raghunath Shenbagam, Anand Misra, Ananda Reddy Vayyala
-
Patent number: 11803417
Abstract: The system uses the non-repudiatory persistence of blockchain technology to store all task statuses and results across the distributed computer network in an immutable blockchain database. Coupled with the resiliency of the stored data, the system may determine a sequence of processing tasks for a given processing request and use the sequence to detect and/or predict failures. Accordingly, in the event of a detected system failure, the system may recover the results prior to the failure, minimizing disruptions to processing the request and improving hardware resiliency.
Type: Grant
Filed: January 7, 2022
Date of Patent: October 31, 2023
Assignee: THE BANK OF NEW YORK MELLON
Inventors: Sanjay Kumar Stribady, Saket Sharma, Gursel Taskale
-
Patent number: 11803391
Abstract: Devices and techniques for threads in a programmable atomic unit to self-schedule are described herein. When it is determined that an instruction will not complete within a threshold prior to insertion into a pipeline of the processor, a thread identifier (ID) can be passed with the instruction. Here, the thread ID corresponds to a thread of the instruction. When a response to completion of the instruction is received that includes the thread ID, the thread is rescheduled using the thread ID in the response.
Type: Grant
Filed: October 20, 2020
Date of Patent: October 31, 2023
Assignee: Micron Technology, Inc.
Inventor: Tony Brewer
-
Patent number: 11797280
Abstract: Techniques to partition a neural network model for serial execution on multiple processing integrated circuit devices are described. An initial partitioning of the model into multiple partitions, each corresponding to a processing integrated circuit device, is performed. For each partition, an execution latency is calculated by aggregating compute clock cycles to perform computations in the partition, and weight loading clock cycles determined based on a number of weights used in the partition. The amount of data being outputted from the partition is also determined. The partitions can be adjusted by moving computations from a source partition to a target partition to change execution latencies of the partitions and the amount of data being transferred between partitions.
Type: Grant
Filed: June 30, 2021
Date of Patent: October 24, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Parivallal Kannan, Fabio Nonato de Paula, Preston Pengra Briggs
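The latency aggregation step can be sketched as a sum of compute cycles plus a per-weight loading cost. This is an illustrative model only: the `cycles_per_weight_load` constant and the per-layer data layout are hypothetical, not values from the patent.

```python
def partition_latency(compute_cycles, num_weights, cycles_per_weight_load=2):
    """Estimate a partition's execution latency: aggregated compute
    clock cycles plus cycles to load the partition's weights."""
    return sum(compute_cycles) + num_weights * cycles_per_weight_load

# Hypothetical layers assigned to one partition.
layers = [
    {"compute_cycles": 1000, "weights": 200},
    {"compute_cycles": 3000, "weights": 500},
]
latency = partition_latency(
    [layer["compute_cycles"] for layer in layers],
    sum(layer["weights"] for layer in layers),
)
```

With per-partition latencies computed this way, a partitioner can move a layer from the slowest partition to a neighbor and recompute until the latencies are balanced.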
-
Patent number: 11797344
Abstract: A system includes a memory for storing a plurality of memory chunks and a processor for executing a plurality of producer threads. A producer thread increases a producer sequence and determines (i) a first chunk identifier associated with the producer sequence of an identified memory chunk and (ii) a position from the producer sequence to offer an item. The producer thread determines a second chunk identifier of a last created/appended memory chunk and determines whether the second chunk identifier is valid (e.g., matches the first chunk identifier). The producer thread reads a current memory chunk and determines whether a third chunk identifier associated with the current memory chunk is valid (e.g., matches the first chunk identifier). The producer thread writes the item into the identified memory chunk at the position.
Type: Grant
Filed: October 30, 2020
Date of Patent: October 24, 2023
Assignee: Red Hat, Inc.
Inventor: Francesco Nigro
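The mapping from a producer sequence to a chunk identifier and an in-chunk position can be illustrated with simple integer division. This sketch shows only that mapping, not the concurrent validation steps; the chunk size is an arbitrary choice for the example.

```python
CHUNK_SIZE = 8  # slots per memory chunk (illustrative)

def locate(producer_sequence):
    """Map a monotonically increasing producer sequence to the chunk
    identifier that should hold the item and the slot within it."""
    chunk_id, position = divmod(producer_sequence, CHUNK_SIZE)
    return chunk_id, position

# Sequence 17 lands in the third chunk (id 2), second slot (position 1).
chunk_id, position = locate(17)
```

A producer then checks that the chunk it is about to write (the last appended chunk, and the chunk it actually reads) carries this expected identifier before storing the item at the computed position.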
-
Patent number: 11782757
Abstract: A machine learning network is implemented by executing a computer program of instructions on a machine learning accelerator (MLA) comprising a plurality of interconnected storage elements (SEs) and processing elements (PEs). The instructions are partitioned into blocks, which are retrieved from off-chip memory. Each block includes a set of deterministic instructions (MLA instructions) to be executed by on-chip storage elements and/or processing elements according to a static schedule from a compiler. The MLA instructions may require data retrieved from off-chip memory by memory access instructions contained in prior blocks. The compiler also schedules the memory access instructions in a manner that avoids contention for access to the off-chip memory. By avoiding contention, the execution time of off-chip memory accesses becomes predictable enough and short enough that the memory access instructions may be scheduled so that they are known to complete before the retrieved data is required.
Type: Grant
Filed: May 7, 2021
Date of Patent: October 10, 2023
Assignee: SiMa Technologies, Inc.
Inventor: Reed Kotler
-
Patent number: 11775339
Abstract: Computerized robotic process automation (RPA) methods and systems that increase the flexibility and lower the cost with which RPA systems may be deployed are disclosed herein. In one embodiment, an RPA system and method avoids the need for preinstalled RPA software on a device employed by a user to create and/or execute software robots to perform RPA. In another embodiment, an RPA system and method provides a capability to execute software robots that may have been encoded in one or more programming languages to execute on an operating system different than that employed by a server of the RPA system.
Type: Grant
Filed: January 31, 2022
Date of Patent: October 3, 2023
Assignee: Automation Anywhere, Inc.
Inventors: Virinchipuram J. Anand, James Dennis, Abhijit Kakhandiki
-
Patent number: 11775327
Abstract: Apparatus and methods are described herein for multiple single level security (MSLS) domains including, but not limited to, a secure kernel hypervisor (SKH). The SKH configures a single multi-tenant cloud to host the MSLS domains. A cloud orchestration system (COS) configures the single multi-tenant cloud to set up a plurality of separate virtual work packages (VWPs) for the MSLS domains. A key management system (KMS) is configured to manage security objects associated with the MSLS domains.
Type: Grant
Filed: July 10, 2020
Date of Patent: October 3, 2023
Assignee: SEMPER FORTIS SOLUTIONS, LLC
Inventors: Gregory B. Pepus, Todd O'Connell
-
Patent number: 11762635
Abstract: An artificial intelligence (“AI”) engine is disclosed with AI-engine modules and a plurality of learning agents. The AI-engine modules include instructor, learner, and predictor modules. The learner module is configured to train a plurality of AI models in parallel, and the instructor module is configured to coordinate with a plurality of simulators for respectively training the AI models. The learning agents are configured to process training requests from the instructor on data from the simulators for training the AI models. The learner module is further configured to first train the AI models on a first batch of similar data synchronously pooled in a memory of the learner module with a first processor. The learner module is further configured to subsequently train the AI models on a second, different batch of similar data synchronously pooled in the memory of the learner module with the first processor.
Type: Grant
Filed: June 14, 2018
Date of Patent: September 19, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventor: Matthew Brown
-
Patent number: 11755926
Abstract: A method, computer system, and a computer program product for data pipeline prioritization is provided. Embodiments may include receiving, by a cognitive rules engine, one or more data pipelines. Embodiments may then include analyzing, using a computational method of the cognitive rules engine, the one or more data pipelines. Embodiments may lastly include prioritizing the one or more data pipelines based on a result of the computational method of the cognitive rules engine.
Type: Grant
Filed: February 28, 2019
Date of Patent: September 12, 2023
Assignee: International Business Machines Corporation
Inventors: Ritesh Kumar Gupta, Namit Kabra, Likhitha Maddirala, Eric Allen Jacobson, Scott Louis Brokaw, Jo Ramos
-
Patent number: 11748135
Abstract: Systems and methods for memory management for virtual machines. An example method may include creating, by a hypervisor running on a host computer system, a virtual device associated with a virtual machine managed by the hypervisor. The virtual device may include a virtual input/output memory management unit (IOMMU). The method may further include appending, by a driver of the virtual device, a plurality of page table entries to a page table of the virtual IOMMU, wherein each page table entry of the plurality of page table entries references unencrypted memory pages used by the virtual machine. Responsive to receiving a memory access request with respect to a memory page, the hypervisor may determine, using the page table of the virtual IOMMU, whether the memory page is encrypted.
Type: Grant
Filed: July 30, 2020
Date of Patent: September 5, 2023
Assignee: Red Hat, Inc.
Inventor: Michael Tsirkin
-
Patent number: 11734023
Abstract: A method includes, in a processor of a user device, deciding to preload a user application, which has one or more User Interface (UI) displays whose state is retained by the processor in a memory of the user device. At least part of the user application is preloaded, and a state of the preloaded user application is restored, in a background mode, to match the retained state of the one or more UI displays.
Type: Grant
Filed: May 8, 2022
Date of Patent: August 22, 2023
Assignee: TENSERA NETWORKS LTD.
Inventors: Roee Peled, Amit Wix
-
Patent number: 11734076
Abstract: A method includes obtaining a graph traversal statement; determining at least two operators contained in the graph traversal statement and an execution order of the at least two operators; allocating a respective thread to each operator; creating a buffer queue for each two adjacent operators; and, for each two adjacent operators, executing an operation of the former operator by a thread and writing an execution result of the former operator to the buffer queue, and executing an operation of the latter operator by a thread reading the execution result of the former operator from the buffer queue.
Type: Grant
Filed: October 21, 2020
Date of Patent: August 22, 2023
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Haiping Zhang, Yang Wang, Xi Chen, Yifei Wang
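The operator-per-thread arrangement with a buffer queue between each adjacent pair is a classic producer/consumer pipeline. A minimal sketch using the standard library, with a sentinel to shut each stage down cleanly; the operator functions and `run_pipeline` helper are hypothetical, not part of the patent.

```python
import queue
import threading

def run_pipeline(source, operators):
    """Chain operators with a buffer queue between each adjacent pair;
    each operator executes on its own thread."""
    SENTINEL = object()

    def stage(op, inbox, outbox):
        # Read from the former operator's queue, apply this operator,
        # and write the result to the next queue.
        while True:
            item = inbox.get()
            if item is SENTINEL:
                outbox.put(SENTINEL)
                return
            outbox.put(op(item))

    queues = [queue.Queue() for _ in range(len(operators) + 1)]
    threads = [
        threading.Thread(target=stage, args=(op, queues[i], queues[i + 1]))
        for i, op in enumerate(operators)
    ]
    for t in threads:
        t.start()
    for item in source:
        queues[0].put(item)
    queues[0].put(SENTINEL)

    results = []
    while True:
        item = queues[-1].get()
        if item is SENTINEL:
            break
        results.append(item)
    for t in threads:
        t.join()
    return results

# Two adjacent operators: add 1, then multiply by 10.
out = run_pipeline([1, 2, 3], [lambda v: v + 1, lambda v: v * 10])
```

Because each stage only touches its own queues, adjacent operators overlap in time: the latter operator can consume early results while the former is still producing.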
-
Patent number: 11709799
Abstract: Embodiments provide for application-specific provisioning of files or registry keys. As applications are installed or launched, data is recorded by an application virtualization engine, and an index is created linking the recorded data to both the application and the underlying files or registry keys. As applications are requested (e.g., launched, updated, or the like), the application virtualization engine reveals various copies of file or registry keys to the application on demand or in accordance with a policy.
Type: Grant
Filed: June 9, 2016
Date of Patent: July 25, 2023
Assignee: VMware, Inc.
Inventors: Fei Huang, Daniel James Beveridge
-
Patent number: 11693690
Abstract: Disclosed is an instruction for a programmable atomic transaction that is executed as the last instruction and that terminates the executing thread, waits for all outstanding store operations to finish, clears the programmable atomic lock, and sends a completion response back to the issuing process. This guarantees that the programmable atomic lock is cleared when the transaction completes. By coupling thread termination with clearing the lock bit, this guarantees that the thread cannot terminate without clearing the lock.
Type: Grant
Filed: October 20, 2020
Date of Patent: July 4, 2023
Assignee: Micron Technology, Inc.
Inventor: Tony Brewer