Patents Examined by Sisley N Kim
-
Patent number: 11875185
Abstract: A computer-implemented method according to one embodiment includes receiving data associated with a driver performing actions. At least some of the actions trigger events emitted by an event emitter. Information, from the received data, about the performed actions is logged in an action log. An event observer is instructed to log the events emitted by the event emitter that the event observer observes. The observed events are logged in an event log. The information of the action log and the information of the event log are compared based on a rule, and a validity of the event emitter is determined based on results of the comparing. A computer program product according to another embodiment includes a computer readable storage medium having program instructions embodied therewith. The program instructions are readable and/or executable by a processor to cause the processor to perform the foregoing method.
Type: Grant
Filed: May 19, 2021
Date of Patent: January 16, 2024
Assignee: International Business Machines Corporation
Inventors: James Collins Davis, Willard Adams Davis
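The comparison step above could look something like the following sketch. The record shapes (`id`, `action_id`) and the one-event-per-action rule are assumptions for illustration; the patent does not disclose this exact rule.

```python
from collections import Counter

def validate_emitter(action_log, event_log):
    """Return True if every logged action produced exactly one observed event."""
    expected = Counter(a["id"] for a in action_log)        # actions performed
    observed = Counter(e["action_id"] for e in event_log)  # events observed
    return expected == observed                            # rule: exact match

actions = [{"id": "a1"}, {"id": "a2"}]
events = [{"action_id": "a2"}, {"action_id": "a1"}]
print(validate_emitter(actions, events))  # → True
```

Using multisets rather than sets means both dropped and duplicated events invalidate the emitter.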
-
Patent number: 11868806
Abstract: A simulated annealing-based metaheuristic method for scheduling tasks in infrastructures that use cloud computing services with a multitasking, multi-node structure capable of performing big data analysis.
Type: Grant
Filed: June 18, 2020
Date of Patent: January 9, 2024
Inventors: Deniz Dal, Esra Çelik
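The abstract names the metaheuristic but not its move operator or cooling schedule; the following is a generic simulated-annealing task-to-node assignment under an assumed makespan objective, with illustrative constants.

```python
import math
import random

def anneal(task_costs, n_nodes, t0=10.0, cooling=0.95, steps=500, seed=0):
    """Assign tasks to nodes, minimizing makespan (peak node load)."""
    rng = random.Random(seed)
    assign = [rng.randrange(n_nodes) for _ in task_costs]

    def makespan(a):
        loads = [0.0] * n_nodes
        for cost, node in zip(task_costs, a):
            loads[node] += cost
        return max(loads)

    cur_cost = makespan(assign)
    best, best_cost, t = list(assign), cur_cost, t0
    for _ in range(steps):
        i = rng.randrange(len(assign))
        old = assign[i]
        assign[i] = rng.randrange(n_nodes)  # move: reassign one task
        new_cost = makespan(assign)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if new_cost <= cur_cost or rng.random() < math.exp((cur_cost - new_cost) / t):
            cur_cost = new_cost
            if new_cost < best_cost:
                best, best_cost = list(assign), new_cost
        else:
            assign[i] = old  # reject: undo the move
        t *= cooling  # cool the temperature each step
    return best, best_cost
```

As the temperature decays, the search shifts from near-random exploration to greedy descent, which is the usual escape-local-minima property annealing brings to scheduling.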
-
Patent number: 11861405
Abstract: Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: receiving, by a manager node, metrics data from a plurality of compute nodes, the manager node and the plurality of compute nodes defining a first local cluster of a first computing environment, wherein the compute nodes defining the first local cluster have running thereon container based applications, wherein a first container based application runs on a first compute node of the plurality of compute nodes defining the first local cluster, and wherein a second compute node of the plurality of compute nodes defining the first local cluster runs a second container based application; wherein the manager node has received from an orchestrator availability data specifying a set of compute nodes available for hosting the first application.
Type: Grant
Filed: April 29, 2020
Date of Patent: January 2, 2024
Assignee: Kyndryl, Inc.
Inventor: Vishal Anand
-
Patent number: 11861395
Abstract: A method for managing memory for applications in a computing system includes receiving a selection of a preferred application. During user-controlled operation of the application, the transitions of the selected application between foreground and background are monitored. Retention of the application in memory is triggered upon a transition of the application to the background during user operation. Retention of the application includes compressing memory portions of the application, so the application is retained within memory based on the compressed memory portions. A requirement to restore the retained application is sensed based on either a user selection or an automatically generated prediction, and the application is restored from the retained state back to the foreground.
Type: Grant
Filed: February 5, 2021
Date of Patent: January 2, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Ganji Manoj Kumar, Jaitirth Anthony Jacob, Rishabh Raj, Vaisakh Punnekkattu Chirayil Sudheesh Babu, Renju Chirakarotu Nair, Hakryoul Kim, Shweta Ratanpura, Tarun Gopalakrishnan, Sriram Shashank, Raju Suresh Dixit, Youngjoo Jung
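A toy model of the retain-by-compression idea: on a background transition the app's memory is compressed in place, and restoring decompresses it back into the foreground. `zlib` here stands in for whatever compression the actual device uses; the class shape is invented.

```python
import zlib

class RetainedApp:
    """Illustrative holder for an app's memory in either raw or retained form."""

    def __init__(self, name, memory: bytes):
        self.name = name
        self.memory = memory       # uncompressed working memory
        self.compressed = None     # retained (compressed) form

    def to_background(self):
        # Retention: compress memory portions and drop the uncompressed copy.
        self.compressed = zlib.compress(self.memory)
        self.memory = None

    def to_foreground(self):
        # Restore: decompress the retained state back into working memory.
        self.memory = zlib.decompress(self.compressed)
        self.compressed = None
```

The payoff is that a retained app occupies far less RAM than a live one yet restores faster than a cold relaunch.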
-
Patent number: 11853808
Abstract: A method includes receiving a request to set up a computing cluster comprising at least one node, the request comprising a selection of a node graphical user interface element that represents at least one virtual machine associated with at least one of at least one cloud service provider and at least one on-premise computing device; dynamically generating a configuration file comprising configuration language to set up the computing cluster comprising the at least one node; parsing the configuration file to convert it into at least one application programming interface (API) request and sending the at least one API request to the at least one of the at least one cloud service provider and the at least one on-premise computing device to set up the computing cluster; and receiving real-time deployment information.
Type: Grant
Filed: January 26, 2023
Date of Patent: December 26, 2023
Assignee: HARPOON CORP.
Inventors: Dominic Holt, Manuel Gauto, Mathew Jackson
-
Patent number: 11853803
Abstract: A workload compliance governor system includes a management system coupled to a computing system. A workload compliance governor subsystem in the computing system receives a workload performance request associated with a workload, exchanges hardware compose communications with the management system to compose hardware components for the workload, and receives back an identification of hardware components. The workload compliance governor subsystem then determines that the identified hardware components satisfy hardware compliance requirements for the workload, and configures the identified hardware components in the computing system based on the software compliance requirements for the workload in order to cause those identified hardware components to provide an operating system and at least one application that operate to perform the workload.
Type: Grant
Filed: October 28, 2022
Date of Patent: December 26, 2023
Assignee: Dell Products L.P.
Inventors: Mukund P. Khatri, Gaurav Chawla, William Price Dawkins, Elie Jreij, Mark Steven Sanders, Walter A. O'Brien, III, Robert W. Hormuth, Jimmy D. Pike
-
Patent number: 11853800
Abstract: Apparatuses and methods for providing resources are provided that include receiving power statuses of resources of a system capable of providing the resources; quantifying the power statuses of the resources; calculating an available soft capacity of the system based on the quantified power statuses and a total capacity of the system; and assigning an amount of the resources beyond the calculated available soft capacity to one or more users.
Type: Grant
Filed: November 19, 2018
Date of Patent: December 26, 2023
Assignee: Alibaba Group Holding Limited
Inventors: Youquan Feng, Yijun Lu, Jun Song
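One plausible reading of the soft-capacity calculation, with invented quantification weights: the quantified power statuses are subtracted from total capacity, clamped at zero. The status names and weight values are assumptions, not from the patent.

```python
def soft_capacity(total_capacity, power_statuses, weights):
    """Available soft capacity = total capacity minus quantified power statuses.

    power_statuses: e.g. {"active": 3, "idle": 2} (resource counts per status)
    weights: maps each status to an assumed capacity cost per resource
    """
    used = sum(count * weights[status] for status, count in power_statuses.items())
    return max(total_capacity - used, 0.0)  # never report negative capacity

cap = soft_capacity(10.0, {"active": 3, "idle": 2}, {"active": 2.0, "idle": 0.5})
print(cap)  # → 3.0  (10 − 3·2.0 − 2·0.5)
```

The claim's novelty is that assignment may go *beyond* this soft capacity, so the value acts as a guideline rather than a hard ceiling.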
-
Patent number: 11847505
Abstract: Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for managing a storage system. The method includes: determining, at a first device of the storage system, whether a load of a first accelerator resource of the first device exceeds a load threshold; sending, if it is determined that the load exceeds the load threshold, a job processing request to a second device in a candidate device list to cause the second device to process a target job of the first device using a second accelerator resource of the second device, the candidate device list indicating devices in the storage system that can be used to assist the first device in job processing; receiving, from the second device, latency information related to remote processing latency of processing the target job using the second accelerator resource; and updating the candidate device list based on the latency information. The embodiments of the present disclosure can optimize the system performance.
Type: Grant
Filed: May 21, 2021
Date of Patent: December 19, 2023
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Tao Chen, Bing Liu, Geng Han, Jian Gao
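The offload decision and latency-driven list update might be sketched as below. The candidate record shape and the sort-by-observed-latency policy are assumptions; the patent only says the list is "updated based on the latency information".

```python
def pick_assistant(local_load, load_threshold, candidates):
    """Return the lowest-latency candidate device when the local accelerator is overloaded."""
    if local_load <= load_threshold or not candidates:
        return None  # under threshold (or no helpers): process the job locally
    return min(candidates, key=lambda c: c["latency_ms"])

def update_candidates(candidates, device_id, observed_latency_ms):
    """Fold the latency reported by a remote device back into the candidate list."""
    for c in candidates:
        if c["id"] == device_id:
            c["latency_ms"] = observed_latency_ms
    candidates.sort(key=lambda c: c["latency_ms"])  # keep best assistant first
    return candidates
```

Feeding observed latency back in lets the first device route future jobs away from assistants that turned out to be slow.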
-
Patent number: 11842209
Abstract: Exemplary methods, apparatuses, and systems include a client virtual machine processing a system call for a device driver to instruct a physical device to perform a function and transmitting the system call to an appliance virtual machine to execute the system call. The client virtual machine determines, in response to the system call, that an established connection with the appliance virtual machine has switched from a first protocol to a second protocol, the first and second protocols including a high-performance transmission protocol and Transmission Control Protocol and Internet Protocol (TCP/IP). The client virtual machine transmits the system call to the appliance virtual machine according to the second protocol. For example, the established connection may switch to the second protocol in response to the client virtual machine migrating to the first host device from a second host device.
Type: Grant
Filed: January 8, 2019
Date of Patent: December 12, 2023
Assignee: VMware, Inc.
Inventors: Lawrence Spracklen, Hari Sivaraman, Vikram Makhija, Rishi Bidarkar
-
Patent number: 11842215
Abstract: Techniques described herein can optimize usage of computing resources in a data system. Dynamic throttling can be performed locally on a computing resource in the foreground and autoscaling can be performed in a centralized fashion in the background. Dynamic throttling can lower the load without overshooting while minimizing oscillation and reducing the throttle quickly. Autoscaling may involve scaling in or out the number of computing resources in a cluster as well as scaling up or down the type of computing resources to handle different types of situations.
Type: Grant
Filed: January 28, 2023
Date of Patent: December 12, 2023
Assignee: Snowflake Inc.
Inventors: Johan Harjono, Daniel Geoffrey Karp, Kunal Prafulla Nabar, Rares Radut, Arthur Kelvin Shi
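An AIMD-style rule (multiplicative decrease, additive increase) is one classic way to "lower the load without overshooting while minimizing oscillation"; the patent does not disclose its exact rule, and the constants below are illustrative.

```python
def next_rate(current_rate, load, target_load,
              decrease_factor=0.7, increase_step=1.0, floor=1.0):
    """One step of a local dynamic throttle on admitted request rate.

    Multiplicative decrease backs off quickly when overloaded; additive
    increase recovers gradually, which damps oscillation around the target.
    """
    if load > target_load:
        return max(current_rate * decrease_factor, floor)  # cut hard, but keep a floor
    return current_rate + increase_step                    # probe capacity slowly

print(next_rate(100.0, load=0.9, target_load=0.8))  # → 70.0 (overloaded: back off)
print(next_rate(70.0, load=0.5, target_load=0.8))   # → 71.0 (healthy: creep up)
```

In the described split, a rule like this runs locally in the foreground, while the centralized autoscaler changes the number or type of resources in the background.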
-
Patent number: 11834071
Abstract: Methods, systems, non-transitory media, and devices for supporting safety compliant computing in a heterogeneous computing system, such as a vehicle heterogeneous computing system, are disclosed. Various aspects include methods enabling a vehicle, such as an autonomous vehicle or a semi-autonomous vehicle, to achieve algorithm safety for various algorithms on a heterogeneous compute platform with various safety levels.
Type: Grant
Filed: August 4, 2020
Date of Patent: December 5, 2023
Assignee: QUALCOMM Incorporated
Inventors: Ahmed Kamel Sadek, Avdhut Joshi, Gautam Sachdeva, Yoga Y Nadaraajan
-
Patent number: 11816495
Abstract: Embodiments of the present invention include a method for running a virtual manager scheduler for scheduling activities for virtual machines. The method may include: defining a schedule for one or more activities to be executed for a virtual machine; applying an adjustment to the schedule in accordance with feedback information received via a virtual machine client; aggregating the feedback information from a plurality of virtual machine clients, each being related to a virtual machine, per scheduled activity type; and determining a group adjustment for a determined group of the virtual machine clients based on a function of the feedback information of the plurality of virtual machine clients.
Type: Grant
Filed: February 13, 2018
Date of Patent: November 14, 2023
Assignee: International Business Machines Corporation
Inventors: Piotr Kania, Wlodzimierz Martowicz, Piotr Padkowski, Marek Peszt
-
Patent number: 11815870
Abstract: A method for performing computing procedures with a control unit of a transportation vehicle, wherein the control unit is not installed in a fixed position in the transportation vehicle but is instead of a removable design. The control unit performs control tasks for transportation vehicle functions in the transportation vehicle and is used outside the transportation vehicle for vehicle-independent calculations. In the transportation vehicle, the control unit uses computing power and/or memory capacity not required for the control tasks to perform vehicle-independent calculations, and these vehicle-independent calculations are continued outside the transportation vehicle when the control unit is removed from it.
Type: Grant
Filed: October 1, 2019
Date of Patent: November 14, 2023
Inventors: Mukayil Kilic, Thomas Christian Lesinski
-
Patent number: 11809912
Abstract: A system control processor manager for servicing workloads using composed information handling systems instantiated using information handling systems includes persistent storage and a workload manager. The workload manager obtains a workload request for a workload of the workloads; predicts future resource needs for the workload during a future time period; makes a determination that a portion of free resources of the information handling systems are available to meet the future resource needs; reserves the portion of the free resources based on the determination to obtain reserved resources during the future time period; and composes a composed information handling system of the composed information handling systems using the reserved resources during the future time period to service the workload request.
Type: Grant
Filed: December 9, 2020
Date of Patent: November 7, 2023
Assignee: Dell Products L.P.
Inventors: Elie Antoun Jreij, William Price Dawkins, Gaurav Chawla, Mark Steven Sanders, Walter A. O'Brien, III, Mukund P. Khatri, Robert Wayne Hormuth, Yossef Saad, Jimmy Doyle Pike
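The reserve-then-compose sequence can be sketched as follows, with flat per-type resource counts as a deliberate simplification of "information handling system" hardware and no modeling of the time window itself.

```python
def compose_for_workload(predicted_needs, free_resources):
    """Reserve predicted needs out of the free pool and return the composed system.

    predicted_needs / free_resources: {resource_type: count}.
    Returns the reserved resource dict, or None if free resources cannot
    meet the predicted future needs.
    """
    if any(free_resources.get(r, 0) < n for r, n in predicted_needs.items()):
        return None  # determination failed: needs exceed free resources
    for r, n in predicted_needs.items():
        free_resources[r] -= n  # reserve the portion of free resources
    return dict(predicted_needs)  # the composed system's resource set
```

Reserving ahead of the predicted window is what prevents two workloads from being composed onto the same free hardware.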
-
Patent number: 11803424
Abstract: Systems described herein may allow for the intelligent configuration of containers onto virtualized resources. Different configurations may be generated based on the simulation of alternate placements of containers onto nodes, where the placement of a particular container onto a particular node may serve as a root for several branches which may themselves simulate the placement of additional containers on the node (in addition to the container(s) indicated in the root). Once a set of configurations are generated, a particular configuration may be selected according to determined selection parameters and/or intelligent selection techniques.
Type: Grant
Filed: January 3, 2023
Date of Patent: October 31, 2023
Assignee: Verizon Patent and Licensing Inc.
Inventors: Matthew Kapala, Hector A. Garcia Crespo, Sudha Subramaniam, Brian A. Ward, Brent D. Segner
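A brute-force version of the root-and-branch simulation: each full placement is one branch of the tree rooted at the first container's node choice, and scoring by peak node load stands in for the unspecified "selection parameters".

```python
import itertools

def best_configuration(containers, nodes):
    """Simulate every container-to-node placement and pick the best-scoring one.

    containers: [{"name": str, "cpu": int}]; nodes: [node_id].
    Returns (placement dict, peak node load) for the selected configuration.
    """
    configs = []
    # Each tuple from product() is one simulated configuration (one tree branch).
    for choice in itertools.product(nodes, repeat=len(containers)):
        loads = {n: 0 for n in nodes}
        for c, n in zip(containers, choice):
            loads[n] += c["cpu"]
        placement = dict(zip((c["name"] for c in containers), choice))
        configs.append((placement, max(loads.values())))
    # Selection: lowest peak load, an assumed stand-in for the patent's parameters.
    return min(configs, key=lambda cfg: cfg[1])
```

Real systems would prune this tree rather than enumerate it, since the branch count grows as nodes^containers.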
-
Patent number: 11789775
Abstract: The visualization of progress of a distributed computational job at multiple points of execution. After a computational job is compiled into multiple vertices, and then those multiple vertices are scheduled on multiple processing nodes in a distributed environment, a processing gathering module gathers processing information regarding processing of multiple vertices of a computational job, and at multiple instances in time in the execution of the computational job. A user interface module graphically presents a representation of an execution structure representing multiple nodes of the computational job, and dependencies between the multiple nodes, where the nodes may be a single vertex or a group of vertices (such as a stage).
Type: Grant
Filed: May 3, 2021
Date of Patent: October 17, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Pu Li, Omid Afnan, Dian Zhang
-
Patent number: 11789778
Abstract: An FPGA cloud platform allocates and coordinates accelerator card resources according to the delays between a user's host and FPGA accelerator cards deployed at various network segments. Upon an FPGA usage request from the user, the FPGA accelerator card in the FPGA resource pool that has the minimum delay to the host is allocated. A cloud monitoring management platform obtains transmission delays to the virtual machine network according to the different geographic locations of the various FPGA cards in the FPGA resource pool, and allocates the card having the minimum delay to each user. The cloud monitoring management platform also prevents unauthorized users from accessing acceleration resources in the resource pool. The invention protects FPGA accelerator cards from unauthorized users and ensures that the card allocated to a user has the minimum network delay, thereby optimizing acceleration performance and improving user experience.
Type: Grant
Filed: December 30, 2019
Date of Patent: October 17, 2023
Assignee: Inspur Suzhou Intelligent Technology Co., Ltd.
Inventors: Zhixin Ren, Jiaheng Fan
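The two policies described, authorization gating and minimum-delay selection, reduce to a few lines. The pool representation (a card-to-delay map) and the authorization set are assumed shapes, not details from the patent.

```python
def allocate_card(user, authorized_users, pool_delays):
    """Allocate the FPGA card with the minimum delay to this user's host.

    pool_delays: {card_id: measured delay in ms to the user's VM network}.
    Raises PermissionError for users not authorized for acceleration resources.
    """
    if user not in authorized_users:
        raise PermissionError("user not authorized for acceleration resources")
    if not pool_delays:
        return None  # pool exhausted
    return min(pool_delays, key=pool_delays.get)  # card with minimum delay

print(allocate_card("u1", {"u1"}, {"c1": 5.0, "c2": 2.0}))  # → c2
```

Because delays depend on each card's network segment relative to the requesting host, the same pool yields different allocations for different users.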
-
Patent number: 11775352
Abstract: Methods and apparatuses are described for automated prediction of computing resource performance scaling using reinforcement learning. A server executes performance tests against a production computing environment comprising a plurality of computing layers to capture performance data for computing resources in the production environment, where the performance tests are configured according to transactions-per-second (TPS) values. The server trains a classification model using the performance data, the trained model configured to predict computing power required by the plurality of computing layers. The server identifies a target TPS value and a target cost tolerance for the production environment and executes the trained classification model using the target TPS value and the target cost tolerance as input to generate a prediction of computing power required by the plurality of computing layers.
Type: Grant
Filed: November 28, 2022
Date of Patent: October 3, 2023
Assignee: FMR LLC
Inventors: Nikhil Krishnegowda, Saloni Priyani, Samir Kakkar
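To make the prediction step concrete, a table lookup over measured performance points can stand in for the trained classification model: given a target TPS and cost tolerance, pick the cheapest sizing that satisfies both. The `(tps, cost, tier)` tuples and tier names are invented for illustration.

```python
def predict_power(perf_points, target_tps, cost_tolerance):
    """Pick the cheapest compute tier meeting the target TPS within the cost tolerance.

    perf_points: [(measured_tps, cost, tier)] captured by performance tests.
    Returns the tier name, or None if no measured point satisfies both targets.
    """
    feasible = [(cost, tier) for tps, cost, tier in perf_points
                if tps >= target_tps and cost <= cost_tolerance]
    return min(feasible)[1] if feasible else None  # cheapest feasible tier

points = [(100, 5, "small"), (500, 20, "medium"), (2000, 90, "large")]
print(predict_power(points, target_tps=400, cost_tolerance=50))  # → medium
```

The actual invention learns this mapping from performance-test data rather than looking it up, so it can also interpolate to TPS values that were never tested.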
-
Patent number: 11768698
Abstract: A storage device is disclosed. The storage device may include storage for data and at least one Input/Output (I/O) queue for requests from at least one virtual machine (VM) on a host device. The storage device may support an I/O queue creation command to request the allocation of an I/O queue for a VM. The I/O queue creation command may include an LBA range attribute for a range of Logical Block Addresses (LBAs) to be associated with the I/O queue. The storage device may map the range of LBAs to a range of Physical Block Addresses (PBAs) in the storage.
Type: Grant
Filed: June 7, 2021
Date of Patent: September 26, 2023
Inventor: Oscar P. Pinto
-
Patent number: 11768708
Abstract: A media data processing system suppresses a decrease in the request processing rate while suppressing an increase in response time in media data recognition processing, where it is difficult to properly estimate the load. A first load estimation unit 5 estimates the range of the processing load of the media data recognition processing based on header information of the media data. A determination unit 31 determines whether to allow or disallow execution of the media data recognition processing, or to estimate the processing load, based on the range of the processing load. A second load estimation unit 6 estimates the processing load of the media data recognition processing based on the content of the media data when it is determined to estimate the processing load. The determination unit 31 then determines whether to allow or disallow the execution of the media data recognition processing based on the processing load.
Type: Grant
Filed: November 19, 2018
Date of Patent: September 26, 2023
Assignee: NEC CORPORATION
Inventor: Yosuke Iwamatsu
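The two-stage admission logic above can be sketched as follows: a cheap header-based range estimate decides immediately when the range is conclusive, and the expensive content-based estimate runs only when the range straddles the budget. The header fields and per-frame cost model are assumptions, not the patent's actual estimators.

```python
def admit(header, budget, content_cost=None):
    """Two-stage admission control for a media recognition request.

    Stage 1 (header-based, cheap): bound the load in [lo, hi].
    Stage 2 (content-based, expensive): only when stage 1 is inconclusive.
    """
    lo = header["frames"] * 1                # assumed best-case cost per frame
    hi = header["frames"] * header["scale"]  # assumed worst-case cost per frame
    if hi <= budget:
        return "allow"   # even the worst case fits the capacity budget
    if lo > budget:
        return "reject"  # even the best case cannot fit
    # Range is inconclusive: fall back to the content-based estimate.
    return "allow" if content_cost is not None and content_cost <= budget else "reject"
```

Skipping stage 2 for clear-cut requests is what keeps response time down without sacrificing the processing rate on borderline ones.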