Patents Examined by Camquy Truong
-
Patent number: 11748150
Abstract: A system and method for blocking path detection is provided. A job comprises tasks, with at least some of the tasks dependent on other tasks. Each task is assigned to an ownership team. At a particular point in execution of the job, the states of the job's tasks are identified. Each state is one of three mutually exclusive states: waiting for another task/finished, in progress, and blocked. When all the tasks whose identified state is in progress or blocked are assigned to a particular ownership team, that ownership team is identified as being on a blocking path. An action can then be performed regarding the blocking path, selected, for example, in accordance with policy-defined response actions such as generating an incident, escalating an existing incident, and/or sending a notification (e.g., an accumulated time on the blocking path can be calculated for each team, with teams notified when certain thresholds are exceeded).
Type: Grant
Filed: May 14, 2019
Date of Patent: September 5, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Vitalii Tsybulnyk, Arka Dasgupta, Marwan Elias Jubran, Clifford Thomas Dibble
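As an illustration of the state-partitioning idea, here is a minimal Python sketch of a blocking-path check; the task structure, the state names, and the `find_blocking_team` helper are assumptions made for illustration, not the patented implementation.

```python
from dataclasses import dataclass

# Hypothetical task record: each task has a mutually exclusive state and an owning team.
@dataclass
class Task:
    name: str
    state: str   # "waiting_or_finished", "in_progress", or "blocked"
    team: str

def find_blocking_team(tasks):
    """Return the team on the blocking path, or None.

    A team is on the blocking path when every task that is still
    in progress or blocked belongs to that single team.
    """
    active_teams = {t.team for t in tasks if t.state in ("in_progress", "blocked")}
    return active_teams.pop() if len(active_teams) == 1 else None

if __name__ == "__main__":
    job = [
        Task("build", "waiting_or_finished", "infra"),
        Task("deploy", "blocked", "networking"),
        Task("validate", "in_progress", "networking"),
    ]
    team = find_blocking_team(job)
    if team:
        print(f"{team} is on the blocking path")  # an incident or notification could be raised here
```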
-
Patent number: 11663037
Abstract: The application discloses a service information processing method, apparatus, device, and computer storage medium, relating to the technical field of cloud computing. The implementation scheme is: sending polling information to a target process of a service running in a container at a set time interval, wherein the target process is one of a plurality of processes running in the container; receiving reply information returned by the target process in response to the polling information; and obtaining the survival status of the target process according to the reply information.
Type: Grant
Filed: March 24, 2021
Date of Patent: May 30, 2023
Inventors: Jie Zhao, Jian Tian, Shuailong Li
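A minimal sketch of interval-based liveness polling of a single target process; the queue-based "ping/pong" exchange and all names below are assumptions for illustration only, not the disclosed protocol.

```python
import multiprocessing as mp
import time

def target_process(requests, replies):
    """Hypothetical target process: answers each poll with a reply."""
    while True:
        if requests.get() == "ping":
            replies.put("pong")

def poll_survival(requests, replies, interval=1.0, timeout=0.5):
    """Poll the target process at a set interval and report its survival status."""
    for _ in range(3):
        requests.put("ping")
        try:
            replies.get(timeout=timeout)
            print("target process alive")
        except Exception:
            print("target process not responding")
        time.sleep(interval)

if __name__ == "__main__":
    requests, replies = mp.Queue(), mp.Queue()
    worker = mp.Process(target=target_process, args=(requests, replies), daemon=True)
    worker.start()
    poll_survival(requests, replies)
```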
-
Patent number: 11656890
Abstract: A method includes provisioning a first Virtual Network Function (VNF) component on a first virtual machine, the first virtual machine being supported by a first physical computing system, provisioning a second VNF component directly on a second physical computing system, and using, within a telecommunications network, a VNF that includes both the first VNF component running on the first virtual machine and the second VNF component running directly on the second physical computing system. The method further includes, with a VNF manager, determining that a third VNF component should be provisioned, and in response to determining that the third VNF component is capable of utilizing a hardware accelerator associated with a third physical computing system, implementing the third VNF component on the third physical computing system.
Type: Grant
Filed: April 8, 2019
Date of Patent: May 23, 2023
Assignee: RIBBON COMMUNICATIONS OPERATING COMPANY, INC.
Inventor: Paul Miller
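The placement decision in the last sentence can be sketched as a simple rule; the host attributes and the `place_vnf_component` function below are hypothetical stand-ins, not the claimed VNF manager logic.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    has_hw_accelerator: bool
    virtualized: bool  # True: run the component in a VM; False: run directly on the host

def place_vnf_component(component, hosts, needs_accelerator):
    """Pick a host for a new VNF component, preferring hardware acceleration when usable."""
    if needs_accelerator:
        for host in hosts:
            if host.has_hw_accelerator:
                return f"{component} -> directly on {host.name} (hardware accelerator)"
    # Fall back to the first host that supports virtual machines.
    for host in hosts:
        if host.virtualized:
            return f"{component} -> VM on {host.name}"
    return f"{component} -> unplaced"

if __name__ == "__main__":
    hosts = [Host("host-1", False, True), Host("host-2", False, False), Host("host-3", True, False)]
    print(place_vnf_component("vnf-c3", hosts, needs_accelerator=True))
```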
-
Patent number: 11645111
Abstract: The present disclosure provides a computer-implemented method, computer system and computer program product for managing a task flow. According to the computer-implemented method, a definer module may receive a request for executing a task flow. The definer module may determine, from a set of edge devices, a cluster of edge devices to execute the task flow. The definer module may retrieve metadata information for the task flow and the edge devices in the cluster, wherein the metadata information is used to schedule the task flow in the cluster. Then the edge devices in the cluster may execute the task flow according to the metadata information.
Type: Grant
Filed: October 23, 2020
Date of Patent: May 9, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Yue Wang, Xin Peng Liu, Liang Wang, Zheng Li, Wei Wu
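A small sketch of a definer-style scheduling step: pick a cluster of matching edge devices and hand them the task flow together with scheduling metadata. The device fields and the `schedule_task_flow` helper are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class EdgeDevice:
    name: str
    free_cpu: float   # spare CPU cores
    labels: set

def schedule_task_flow(devices, required_label, cluster_size, metadata):
    """Choose a cluster of matching edge devices and attach the scheduling metadata."""
    candidates = [d for d in devices if required_label in d.labels]
    cluster = sorted(candidates, key=lambda d: d.free_cpu, reverse=True)[:cluster_size]
    return {"cluster": [d.name for d in cluster], "metadata": metadata}

if __name__ == "__main__":
    devices = [
        EdgeDevice("edge-a", 2.0, {"camera"}),
        EdgeDevice("edge-b", 4.0, {"camera", "gpu"}),
        EdgeDevice("edge-c", 1.0, {"sensor"}),
    ]
    plan = schedule_task_flow(devices, "camera", cluster_size=2,
                              metadata={"order": ["decode", "detect", "report"]})
    print(plan)
```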
-
Patent number: 11620158
Abstract: A master-slave scheduling system, comprising (a) a master DRL unit comprising: (i) a queue containing a plurality of item-representations; (ii) a master policy module configured to select a single item-representation from the queue and submit it to the slave unit; (iii) a master DRL agent configured to (a) train the master policy module, and (b) receive an updated item-representation from the slave unit and update the queue; and (b) the slave DRL unit comprising: (i) a slave policy module receiving a single item-representation, selecting a single task entry and submitting it to a slave environment for performance; (ii) a slave DRL agent configured to (a) train the slave policy module, (b) receive an item-representation from the master DRL unit and submit it to the slave policy module, and (c) receive an updated item-representation from the slave's environment and submit the same to the master DRL unit; and (iii) the slave DRL agent.
Type: Grant
Filed: January 14, 2021
Date of Patent: April 4, 2023
Assignee: B.G. NEGEV TECHNOLOGIES & APPLICATIONS LTD. AT BEN-GURION UNIVERSITY
Inventors: Gilad Katz, Asaf Shabtai, Yoni Birman, Ziv Ido
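A highly simplified, non-learning skeleton of the master/slave control loop described above; the greedy and random placeholder policies stand in for the trained DRL agents, and every class and field name here is a hypothetical assumption, not the patented system.

```python
import random

class SlaveUnit:
    """Placeholder slave: picks one task entry from an item and 'performs' it."""
    def handle(self, item):
        task = max(item["tasks"], key=lambda t: t["priority"])  # greedy stand-in for the slave policy
        item["tasks"].remove(task)
        print(f"slave executed {task['name']} from {item['id']}")
        return item  # the updated item-representation goes back to the master

class MasterUnit:
    """Placeholder master: keeps a queue of item-representations and feeds the slave."""
    def __init__(self, items, slave):
        self.queue, self.slave = list(items), slave

    def step(self):
        item = random.choice(self.queue)   # stand-in for the master policy
        updated = self.slave.handle(item)
        if not updated["tasks"]:
            self.queue.remove(updated)     # update the queue with the returned item

if __name__ == "__main__":
    items = [{"id": "job-1", "tasks": [{"name": "t1", "priority": 2}, {"name": "t2", "priority": 1}]},
             {"id": "job-2", "tasks": [{"name": "t3", "priority": 5}]}]
    master = MasterUnit(items, SlaveUnit())
    while master.queue:
        master.step()
```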
-
Patent number: 11609784
Abstract: A method for distributing at least one computational process amongst shared resources is proposed. At least two shared resources capable of performing the computational process are determined. According to the method, a workload characteristic for each of the shared resources is predicted. The workload characteristic accounts for at least two subsystems of each shared resource. One of the at least two shared resources is selected based on the predicted workload characteristics.
Type: Grant
Filed: April 18, 2018
Date of Patent: March 21, 2023
Assignee: Intel Corporation
Inventors: Thijs Metsch, Leonard Feehan, Annie Ibrahim Rana, Rahul Khanna, Sharon Ruane, Marcin Spoczynski
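A compact sketch of selecting a shared resource from predicted per-subsystem workloads; the choice of subsystems (CPU and memory), the mean-based predictor, the equal weighting, and the `select_resource` function are illustrative assumptions only.

```python
def predict_workload(history):
    """Toy predictor: use the mean of recent utilization samples per subsystem."""
    return {sub: sum(samples) / len(samples) for sub, samples in history.items()}

def select_resource(resources):
    """Pick the resource whose predicted workload across CPU and memory is lowest."""
    def score(res):
        pred = predict_workload(res["history"])
        return 0.5 * pred["cpu"] + 0.5 * pred["mem"]  # equal weighting of the two subsystems
    return min(resources, key=score)["name"]

if __name__ == "__main__":
    resources = [
        {"name": "node-a", "history": {"cpu": [0.7, 0.8, 0.9], "mem": [0.4, 0.5, 0.5]}},
        {"name": "node-b", "history": {"cpu": [0.3, 0.4, 0.2], "mem": [0.6, 0.6, 0.5]}},
    ]
    print("run the process on", select_resource(resources))
```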
-
Patent number: 11599384
Abstract: A computing device (e.g., a mobile device) can execute a root process of an application to an initial point according to patterns of prior executions of the application. The root process can be one of many respective customized root processes of individual applications on the computing device. The device can receive a request from a user to start the application and, upon receiving the request, start the application using the root process of the application. At least one of the executing, receiving, or starting can be performed by an operating system in the device. The device can also fork the root process of the application into multiple processes and, upon receiving the request to start the application, start the application using at least one of the multiple processes according to the request.
Type: Grant
Filed: October 3, 2019
Date of Patent: March 7, 2023
Assignee: Micron Technology, Inc.
Inventors: Dmitri Yudanov, Samuel E. Bradshaw
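The pre-warmed root process resembles a zygote-style prefork. The sketch below is a minimal POSIX illustration (it needs `os.fork`, so Linux/macOS), and the warm-up step is a hypothetical stand-in for executing the application "to an initial point"; none of this is the patented mechanism.

```python
import os
import time

def warm_root_process():
    """Hypothetical root-process setup: do the expensive initialization once, up front."""
    time.sleep(0.2)          # stands in for loading libraries, caches, configuration, etc.
    return {"assets": "loaded"}

def start_app_instance(state, request):
    """Fork the warmed root process and serve the start request in the child."""
    pid = os.fork()
    if pid == 0:             # child: inherits the pre-initialized state immediately
        print(f"instance {os.getpid()} started for request {request!r} with {state}")
        os._exit(0)
    os.waitpid(pid, 0)       # parent: the root process keeps running for future requests

if __name__ == "__main__":
    state = warm_root_process()          # done before any user request arrives
    start_app_instance(state, "open-inbox")
    start_app_instance(state, "compose")
```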
-
Patent number: 11593162
Abstract: A method of managing operation of a computing device is provided. The method includes (a) running a system scheduler that schedules execution of a first application and a second application on a central processing unit (CPU) core of the computing device; (b) while the first application is executing on the core, detecting, by the first application, a context-switch opportunity; and (c) issuing, by the first application in response to detecting the context-switch opportunity, a blocking operation that triggers the system scheduler to perform a rescheduling operation between the first and second applications on the CPU core. An apparatus, system, and computer program product for performing a similar method are also provided.
Type: Grant
Filed: October 20, 2020
Date of Patent: February 28, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Lior Kamran, Amitai Alkalay, Liran Loya
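A rough analogue of an application detecting a context-switch opportunity and deliberately issuing a blocking operation so a peer can run; this uses cooperatively scheduled asyncio tasks as a stand-in for the patented CPU-core scheduling, and all names are illustrative.

```python
import asyncio

async def first_application():
    for i in range(3):
        # ... a chunk of CPU-bound work would happen here ...
        print(f"first app: finished chunk {i}, good point to yield")
        await asyncio.sleep(0)   # 'blocking operation' that lets the scheduler reschedule

async def second_application():
    for i in range(3):
        print(f"second app: running chunk {i}")
        await asyncio.sleep(0)

if __name__ == "__main__":
    async def main():
        await asyncio.gather(first_application(), second_application())
    asyncio.run(main())
```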
-
Patent number: 11593161
Abstract: Embodiments described herein relate to a method for new endpoint addition. The method may include receiving, during execution of a migration workflow, a request to add a new endpoint to the migration workflow. The execution of the migration workflow includes performing a first migration job associated with a first consistency group and assigned a first priority; making a first determination that the first priority is higher than a priority threshold; based on the first determination, completing the first migration job; and performing, after completing the first migration job, a new endpoint addition action set. The method may also include adding, based on the new endpoint migration job priority, the new endpoint migration job to a queue of remaining migration jobs of the migration workflow.
Type: Grant
Filed: October 16, 2020
Date of Patent: February 28, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Suren Kumar, Senthil Kumar Urkavalan, Vinod Durairaj
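A simple sketch of the priority-gated behavior: finish an in-flight high-priority job first, run the endpoint-addition actions, then place the new endpoint's migration job into the queue by priority. The job structure, the threshold, and the function name are assumptions for illustration, not the claimed method.

```python
from collections import deque

PRIORITY_THRESHOLD = 5

def handle_new_endpoint_request(running_job, queue, new_endpoint, new_job_priority):
    """Add a new endpoint to an executing migration workflow."""
    if running_job and running_job["priority"] > PRIORITY_THRESHOLD:
        print(f"completing high-priority job {running_job['name']} first")
    print(f"running endpoint addition action set for {new_endpoint}")
    new_job = {"name": f"migrate-to-{new_endpoint}", "priority": new_job_priority}
    # Insert by priority among the remaining migration jobs (higher priority first).
    jobs = sorted(list(queue) + [new_job], key=lambda j: j["priority"], reverse=True)
    return deque(jobs)

if __name__ == "__main__":
    queue = deque([{"name": "cg2-migration", "priority": 3}, {"name": "cg3-migration", "priority": 1}])
    running = {"name": "cg1-migration", "priority": 8}
    queue = handle_new_endpoint_request(running, queue, "array-b", new_job_priority=4)
    print([j["name"] for j in queue])
```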
-
Patent number: 11567790
Abstract: Systems, devices, and methods are disclosed herein for containerized scalable storage applications. Methods may include instantiating an application instance based on a plurality of application instance parameters, the application instance being configured to utilize a plurality of storage volumes implemented in a storage cluster. Methods may also include enumerating a plurality of unattached storage volumes included in the cluster associated with the application instance, the plurality of unattached storage volumes having a plurality of underlying physical storage devices, and the plurality of unattached storage volumes being identified based on a plurality of application instance parameters. The methods may further include attaching at least some of the plurality of unattached storage volumes to the application instance, wherein the attaching enables the application instance to access data stored in the attached storage volumes.
Type: Grant
Filed: January 29, 2020
Date of Patent: January 31, 2023
Assignee: Pure Storage, Inc.
Inventors: Goutham Rao, Vinod Jayaraman, Ganesh Sangle
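A sketch of enumerating unattached volumes that match an application instance's parameters and attaching some of them; the volume fields and the `attach_matching_volumes` function are hypothetical and not the product's API.

```python
def attach_matching_volumes(cluster_volumes, instance_params, count):
    """Find unattached volumes matching the instance parameters and attach up to `count` of them."""
    unattached = [
        v for v in cluster_volumes
        if not v["attached_to"]
        and v["size_gb"] >= instance_params["min_size_gb"]
        and v["tier"] == instance_params["tier"]
    ]
    attached = []
    for volume in unattached[:count]:
        volume["attached_to"] = instance_params["instance_id"]   # the instance can now access the volume's data
        attached.append(volume["name"])
    return attached

if __name__ == "__main__":
    cluster = [
        {"name": "vol-1", "size_gb": 100, "tier": "ssd", "attached_to": None},
        {"name": "vol-2", "size_gb": 50, "tier": "hdd", "attached_to": None},
        {"name": "vol-3", "size_gb": 200, "tier": "ssd", "attached_to": "db-0"},
    ]
    params = {"instance_id": "db-1", "min_size_gb": 80, "tier": "ssd"}
    print("attached:", attach_matching_volumes(cluster, params, count=2))
```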
-
Patent number: 11561824
Abstract: Various aspects are disclosed for distributed application management using an embedded persistent queue framework. In some aspects, task execution data is monitored from a plurality of task execution engines. A task request is identified. The task request can include a task and a Boolean predicate for task assignment. The task is assigned to a task execution engine embedded in a distributed application process if the Boolean predicate is true and the capacity of the task execution engine is sufficient to execute the task. The task is enqueued in a persistent queue, then retrieved from the persistent queue and executed.
Type: Grant
Filed: March 15, 2020
Date of Patent: January 24, 2023
Assignee: VMWARE, INC.
Inventors: Srinivas Neginhal, Medhavi Dhawan, Gaurav Sharma, Rajneesh Bajpai
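A small sketch of predicate-and-capacity-based task assignment backed by a persistent queue; `sqlite3` stands in for an embedded persistent queue here, and the engine and task shapes are illustrative assumptions, not the framework's API.

```python
import sqlite3

def make_queue(path=":memory:"):
    db = sqlite3.connect(path)   # persistent when given a real file path
    db.execute("CREATE TABLE IF NOT EXISTS tasks (id INTEGER PRIMARY KEY, engine TEXT, payload TEXT)")
    return db

def assign_task(db, engines, task):
    """Assign a task to an engine if its Boolean predicate holds and the engine has capacity."""
    for engine in engines:
        if task["predicate"](engine) and engine["in_flight"] < engine["capacity"]:
            db.execute("INSERT INTO tasks (engine, payload) VALUES (?, ?)",
                       (engine["name"], task["payload"]))
            db.commit()
            engine["in_flight"] += 1
            return engine["name"]
    return None

if __name__ == "__main__":
    db = make_queue()
    engines = [{"name": "engine-a", "zone": "us", "capacity": 1, "in_flight": 1},
               {"name": "engine-b", "zone": "eu", "capacity": 2, "in_flight": 0}]
    task = {"payload": "rebuild-index", "predicate": lambda e: e["zone"] == "eu"}
    print("assigned to", assign_task(db, engines, task))
    print("queued:", db.execute("SELECT engine, payload FROM tasks").fetchall())
```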
-
Patent number: 11561839
Abstract: A method for enabling allocation of resources for a plurality of hosts is presented. The method is performed by a server (1) and comprises identifying (S100) a service running on one or more of the plurality of hosts, determining (S140) a stretch factor for a recurring load pattern of the service running on the one or more of the plurality of hosts, and storing (S150) the identified service together with the determined stretch factor. A server, a computer program, and a computer program product are also presented.
Type: Grant
Filed: December 21, 2016
Date of Patent: January 24, 2023
Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
Inventors: Tony Larsson, Ignacio Manuel Mulas Viela, Nicolas Seyvet
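One plausible reading of a "stretch factor" is how much a recurring load pattern is dilated in time relative to a stored reference pattern. The sketch below estimates it by resampling the reference at candidate factors and keeping the best match; the abstract does not specify this computation, so everything here is an assumption for illustration.

```python
def resample(pattern, factor):
    """Stretch a load pattern in time by `factor` using nearest-neighbour sampling."""
    length = max(1, round(len(pattern) * factor))
    return [pattern[min(int(i / factor), len(pattern) - 1)] for i in range(length)]

def estimate_stretch_factor(reference, observed, candidates=(0.5, 1.0, 1.5, 2.0)):
    """Pick the candidate stretch factor whose stretched reference best matches the observed load."""
    def error(factor):
        stretched = resample(reference, factor)
        n = min(len(stretched), len(observed))
        return sum((stretched[i] - observed[i]) ** 2 for i in range(n)) / n
    return min(candidates, key=error)

if __name__ == "__main__":
    reference = [10, 20, 40, 20, 10]                      # stored recurring load pattern of a service
    observed = [10, 10, 20, 20, 40, 40, 20, 20, 10, 10]   # same shape, unfolding twice as slowly
    print("stretch factor:", estimate_stretch_factor(reference, observed))
```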
-
Patent number: 11556458
Abstract: Examples described herein include systems and methods for fuzz testing low-level virtual devices and virtual devices with DMA write functionality. A fuzz tester includes components distributed across a virtual machine and its host system. The fuzz testing components in the virtual machine are implemented as firmware installed in the virtual machine's ROM. These components operate independently of data stored in the virtual machine's RAM and do not require an operating system to be installed on the virtual machine. As a result, any changes made to the virtual machine's RAM during the fuzzing process by low-level virtual devices or virtual devices with DMA write functionality cannot interrupt the fuzz testing or otherwise negatively impact the fuzz tester itself.
Type: Grant
Filed: July 26, 2019
Date of Patent: January 17, 2023
Assignee: VMware, Inc.
Inventor: Darius Davis
-
Patent number: 11544098
Abstract: Methods and systems for diagnosis of live virtual server performance data are disclosed. In one embodiment, an exemplary method comprises receiving a request to assign a first role to at least one virtual server; configuring the virtual server to associate the first role with a first resource of the virtual server; modifying a database to include an identifier associated with the virtual server and an identifier of the first role assigned to the virtual server; receiving indications of first resource usage; mapping the first resource usage to the first role; storing the indications of first resource usage; associating a change in first resource usage with a corresponding first resource operation; modifying a user interface element for presentation on a web page to include the first resource usage; receiving a request for the web page from a user; and delivering the web page to a user interface.
Type: Grant
Filed: September 23, 2020
Date of Patent: January 3, 2023
Assignee: Coupang Corp.
Inventor: Tae Kyung Kim
-
Patent number: 11537445
Abstract: A computer-implemented method for deploying an application between an on-premise server and an off-premise server includes identifying a plurality of nodes in a flow of an application deployed on the on-premise server. The computer-implemented method further includes splitting the flow at the plurality of nodes to form a plurality of sub-flows of the application. The computer-implemented method further includes routing a flow execution workload of the application to the plurality of sub-flows of the application.
Type: Grant
Filed: September 17, 2019
Date of Patent: December 27, 2022
Assignee: International Business Machines Corporation
Inventors: John Anthony Reeve, Trevor Clifford Dolby, Andrew John Coleman, Matthew E. Golby-Kirk
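A toy illustration of splitting a flow at its nodes into sub-flows and routing execution across them; the routing rule (run "secure" nodes on-premise, everything else off-premise) and all names are invented for this example, not the claimed method.

```python
def split_flow(flow_nodes):
    """Split a flow at its nodes: each node becomes its own sub-flow."""
    return [{"name": node["name"], "secure": node["secure"]} for node in flow_nodes]

def route_workload(sub_flows, message):
    """Route a flow-execution workload through the sub-flows, choosing a location per sub-flow."""
    for sub in sub_flows:
        location = "on-premise" if sub["secure"] else "off-premise"
        message = f"{message} -> {sub['name']}@{location}"
    return message

if __name__ == "__main__":
    flow = [{"name": "http-input", "secure": False},
            {"name": "enrich-with-customer-db", "secure": True},
            {"name": "publish", "secure": False}]
    print(route_workload(split_flow(flow), "order-123"))
```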
-
Patent number: 11526496
Abstract: A distributed database architecture based on shared memory and multiple processes includes a distributed database node. A system shared memory unit and a system process unit are built into the distributed database. The system shared memory unit includes a task stack information module and a shared cache module. A plurality of process tasks are built into the task stack information module. The process tasks carry system information for various purposes within the system process task information, with each piece of system information corresponding to one process task. By using a system shared memory unit at a distributed database node, the number of user connections in the distributed database architecture has no fixed correspondence with the number of processes or threads, and the number of processes or threads of the entire node does not increase as the number of user connections increases.
Type: Grant
Filed: October 19, 2020
Date of Patent: December 13, 2022
Assignee: GUIZHOU ESGYN INFORMATION TECHNOLOGY CO., LTD.
Inventors: Xiaozhong Wang, Xianliang Ji, Zhenxing He, Yingshuai Li
-
Patent number: 11520613
Abstract: A method for allocating a plurality of virtual machines (51-55) provided on at least one host (11-15) to a virtualized network function is provided. The virtualized network function provides a defined functional behavior in a network and requires a total application capacity for that behavior, the functional behavior being provided by needed virtual machines from the plurality of virtual machines. Each of the at least one host has an available processing capacity which can be assigned to the virtual machines provided on the corresponding host, and each virtual machine has at least one flavor which indicates a used processing capacity of the available processing capacity of the corresponding host and which corresponds to a partial application capacity of the total application capacity provided by the corresponding virtual machine. The method comprises: determining the total application capacity of the virtualized network function; determining, for each of the virtual machines, the at least one flavor, taking into account…
Type: Grant
Filed: June 2, 2017
Date of Patent: December 6, 2022
Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
Inventors: Giuseppe Celozzi, Luca Baldini, Daniele Gaito, Gaetano Patria
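A minimal greedy sketch of choosing flavors until the VNF's total application capacity is covered, subject to each host's remaining processing capacity; the flavor table and the `allocate_vms` function are invented for illustration and are not the claimed allocation method (whose abstract is truncated above).

```python
def allocate_vms(total_app_capacity, hosts, flavors):
    """Greedily pick (host, flavor) pairs until the total application capacity is reached.

    Each flavor records the processing capacity it uses on its host and the
    partial application capacity it contributes.
    """
    allocation, provided = [], 0
    # Prefer flavors that deliver the most application capacity per unit of processing capacity.
    for flavor in sorted(flavors, key=lambda f: f["app_capacity"] / f["cpu"], reverse=True):
        for host in hosts:
            while provided < total_app_capacity and host["free_cpu"] >= flavor["cpu"]:
                host["free_cpu"] -= flavor["cpu"]
                provided += flavor["app_capacity"]
                allocation.append((host["name"], flavor["name"]))
    return allocation if provided >= total_app_capacity else None

if __name__ == "__main__":
    hosts = [{"name": "host-11", "free_cpu": 8}, {"name": "host-12", "free_cpu": 4}]
    flavors = [{"name": "small", "cpu": 2, "app_capacity": 100},
               {"name": "large", "cpu": 4, "app_capacity": 250}]
    print(allocate_vms(total_app_capacity=500, hosts=hosts, flavors=flavors))
```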
-
Patent number: 11520672
Abstract: An anomaly detection device according to the embodiment includes a prediction unit and an anomaly score calculation unit. The prediction unit performs a process to obtain, at each time step of the time series data of m dimensions, distribution parameters required to express a continuous probability distribution representing a distribution state of predicted values that can be obtained at a time step t of the time series data of m dimensions. The anomaly score calculation unit performs a process to calculate, using distribution parameters obtained by the prediction unit, an anomaly score corresponding to an evaluation value representing evaluation of a magnitude of anomaly in an actual measurement value at the time step t of time series data of m dimensions.
Type: Grant
Filed: September 25, 2019
Date of Patent: December 6, 2022
Assignees: Kabushiki Kaisha Toshiba, Toshiba Digital Solutions Corporation
Inventor: Toshiyuki Katou
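As an illustration of scoring a measurement against predicted distribution parameters, the sketch below assumes the continuous distribution at each time step is an independent Gaussian per dimension and uses the negative log-likelihood of the actual measurement as the anomaly score; this particular choice of distribution and score is an assumption, not the patent's definition.

```python
import math

def anomaly_score(actual, means, stds):
    """Negative log-likelihood of an m-dimensional measurement under per-dimension Gaussians."""
    score = 0.0
    for x, mu, sigma in zip(actual, means, stds):
        score += 0.5 * math.log(2 * math.pi * sigma ** 2) + ((x - mu) ** 2) / (2 * sigma ** 2)
    return score

if __name__ == "__main__":
    # Distribution parameters (mean, std per dimension) as a predictor might output at time step t.
    means, stds = [10.0, 0.5, 100.0], [1.0, 0.1, 5.0]
    print("normal reading   :", round(anomaly_score([10.2, 0.48, 101.0], means, stds), 2))
    print("anomalous reading:", round(anomaly_score([14.0, 0.90, 130.0], means, stds), 2))
```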
-
Patent number: 11513579
Abstract: Selection and serving of content items may include receiving data indicative of a status of an energy source of a device with a request for a content item. A first received content item may be associated with a first energy consumption level and a second received content item may be associated with a second energy consumption level. The accessed content items are responsive to the request for a content item. The first energy consumption level may be higher than the second energy consumption level. The first content item or the second content item may be selected based, at least in part, on the received data indicative of the status of the energy source of the device, and data to display the selected content item may be provided to the device.
Type: Grant
Filed: March 16, 2020
Date of Patent: November 29, 2022
Assignee: GOOGLE LLC
Inventors: Hareesh Nagarajan, Surojit Chatterjee
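A tiny sketch of energy-aware selection: when the requesting device reports a low battery, prefer the content item with the lower energy consumption level. The threshold, the charging check, and the item structure are illustrative assumptions only.

```python
def select_content_item(item_high_energy, item_low_energy, battery_percent, charging):
    """Pick between two responsive content items based on the device's energy status."""
    if charging or battery_percent > 30:
        return item_high_energy     # richer item (e.g., video) when energy is plentiful
    return item_low_energy          # lighter item (e.g., static image) on a low battery

if __name__ == "__main__":
    video = {"id": "ad-video", "energy_level": "high"}
    image = {"id": "ad-image", "energy_level": "low"}
    print(select_content_item(video, image, battery_percent=18, charging=False)["id"])
    print(select_content_item(video, image, battery_percent=80, charging=False)["id"])
```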
-
Patent number: 11513863
Abstract: The present disclosure provides a game server architecture. A gateway server can pull operation data from the game servers and write the operation data into a load balancing cluster to balance the load of the game servers, avoiding crashes caused by excessive load on any one game server. A database cluster uses data identification segments to balance the amount of data by weight. When the amount of data generated by the game servers or a logic server group grows too large, the database cluster can be dynamically expanded to meet the storage requirements of the larger amount of data. In addition, when data in the database cluster and a consistent-hash-based cache server is accessed, the data is preferentially read from the consistent-hash-based cache server, avoiding the situation where every data access goes through the database and puts heavy pressure on database IO.
Type: Grant
Filed: November 17, 2020
Date of Patent: November 29, 2022
Assignee: Shanghai Lilith Technology Corporation
Inventor: Xiaolin Guo
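A minimal sketch of the cache-first read path with a consistent-hash-based cache: hash the key onto a ring of cache nodes, try that node, and fall back to the database only on a miss. The ring construction, node names, and data here are illustrative assumptions, not the described architecture's implementation.

```python
import bisect
import hashlib

def _hash(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class ConsistentHashCache:
    """Very small consistent-hash ring mapping keys to cache nodes."""
    def __init__(self, nodes):
        self.ring = sorted((_hash(node), node) for node in nodes)
        self.stores = {node: {} for node in nodes}

    def node_for(self, key):
        idx = bisect.bisect(self.ring, (_hash(key), "")) % len(self.ring)
        return self.ring[idx][1]

    def get(self, key):
        return self.stores[self.node_for(key)].get(key)

    def put(self, key, value):
        self.stores[self.node_for(key)][key] = value

def read(key, cache, database):
    """Prefer the cache; hit the database (and refill the cache) only on a miss."""
    value = cache.get(key)
    if value is None:
        value = database[key]
        cache.put(key, value)
    return value

if __name__ == "__main__":
    cache = ConsistentHashCache(["cache-1", "cache-2", "cache-3"])
    database = {"player:42": {"level": 7}}
    print(read("player:42", cache, database))   # first read: database, then cached
    print(read("player:42", cache, database))   # second read: served from the cache
```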