Patents Examined by Emerson C Puente
  • Patent number: 11645124
Abstract: The object is to enable concurrent execution, by a plurality of cores, of a function group not in data conflict, and to execute a function pair in data conflict in a temporally separated manner. A process barrier 20 includes N−1 checker functions 22 and one limiter function 23, where N is the number of cores capable of concurrently executing the functions (N is an integer equal to or greater than 2). Each checker function 22 determines whether the head entry of a lock-free function queue LFQ1 is either a checker function 22 or the limiter function 23, repeats reading the head entry of the lock-free function queue LFQ1 if so, and ends processing if not. The limiter function 23 is an empty function that ends without performing any processing.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: May 9, 2023
    Assignee: Hitachi Astemo, Ltd.
    Inventors: Masataka Nishi, Tomohito Ebina, Kazuyoshi Serizawa
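The barrier mechanism in the abstract above can be sketched as follows — a minimal single-threaded simulation, not the patented multi-core implementation; the names `Checker`, `Limiter`, `make_process_barrier`, and `head_blocks` are illustrative assumptions.

```python
from collections import deque

class Limiter:
    """Empty function: ends without performing any processing."""
    def __call__(self):
        pass

class Checker:
    """Barrier entry; a core running one must wait while the queue head is
    still a barrier entry (a checker or the limiter)."""
    def __call__(self):
        pass

def make_process_barrier(n_cores):
    """A process barrier is N-1 checker functions plus one limiter function."""
    assert n_cores >= 2
    return [Checker() for _ in range(n_cores - 1)] + [Limiter()]

def head_blocks(queue):
    """The predicate each checker re-evaluates against the queue head."""
    return bool(queue) and isinstance(queue[0], (Checker, Limiter))

# A conflict-free function group, then a barrier for 3 cores, then a function
# that conflicts with the earlier group and so must run after the barrier.
lfq = deque([lambda: "f1", lambda: "f2"])
lfq.extend(make_process_barrier(3))
lfq.append(lambda: "f3")
```

The barrier works because no core can pass a checker until every entry ahead of it (the conflicting function group) has been dequeued.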
  • Patent number: 11645013
    Abstract: Systems and methods for managing conflicting background tasks in a dispersed storage network are provided. In embodiments, a method includes: gathering scheduled future task data for scheduled future tasks from a plurality of task scheduling modules within a dispersed storage network, wherein the scheduled future tasks are tasks associated with stored data objects; monitoring the scheduled future task data for scheduling conflicts based on stored rules; determining that a scheduling conflict exists between a first future task of the scheduled future tasks and a second future task of the scheduled future tasks; issuing instructions to at least one of the plurality of task scheduling modules to update the first future task or the second future task based on the scheduling conflict; and updating, by the at least one of the plurality of task scheduling modules, the first future task or the second future task based on the instructions.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: May 9, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Praveen Viraraghavan, Adam Gray, Tyler Kenneth Reid, Peter Kim, Fnu Manupriya, Anuraag Shah, Sridhar Gopalam, David Brittain Bolen, Bruno Cabral
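The conflict-monitoring flow above can be sketched with one illustrative stored rule — two future tasks conflict when they target the same stored data object in overlapping time windows, and the resolution pushes the later task past the earlier one. The rule, the data shapes, and all function names are assumptions, not the patent's actual rule set.

```python
def overlaps(a, b):
    """Half-open time windows (start, end) overlap when neither ends first."""
    return a[0] < b[1] and b[0] < a[1]

def find_conflicts(tasks):
    """Index pairs of scheduled future tasks that touch the same object at
    overlapping times."""
    conflicts = []
    for i in range(len(tasks)):
        for j in range(i + 1, len(tasks)):
            if (tasks[i]["object"] == tasks[j]["object"]
                    and overlaps(tasks[i]["window"], tasks[j]["window"])):
                conflicts.append((i, j))
    return conflicts

def resolve(tasks, i, j):
    """Update the later of the two tasks: shift its window past the end of
    the earlier task's window."""
    first, second = sorted((i, j), key=lambda k: tasks[k]["window"][0])
    start, end = tasks[second]["window"]
    shift = tasks[first]["window"][1] - start
    tasks[second]["window"] = (start + shift, end + shift)

tasks = [
    {"object": "obj-1", "window": (0, 10)},
    {"object": "obj-1", "window": (5, 15)},  # same object, overlapping: conflict
    {"object": "obj-2", "window": (0, 5)},   # different object: no conflict
]
```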
  • Patent number: 11630696
Abstract: The present disclosure relates to a messaging method for a hardware acceleration system. The method includes determining exchange message types to be exchanged with a hardware accelerator in accordance with an application performed by the hardware acceleration system. The exchange message types indicate a number of variables, and a type of the variables, of the messages. The method also includes selecting schemas from a schema database. Each schema indicates a precision representation of the variables of messages associated with that schema. The selected schemas correspond to the determined exchange message types. Further, the method includes configuring a serial interface of the hardware accelerator in accordance with the selected schemas, to enable a message exchange including the messages.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: April 18, 2023
    Assignee: International Business Machines Corporation
    Inventors: Dionysios Diamantopoulos, Mitra Purandare, Burkhard Ringlein, Christoph Hagleitner
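The schema-selection step can be sketched as a lookup keyed by the message type's variable count and variable type, whose result (a precision representation) becomes the serializer configuration. The database contents, key shape, and names here are all illustrative assumptions.

```python
# Hypothetical schema database: keyed by (number of variables, variable type),
# each schema names the precision representation used on the serial interface.
SCHEMA_DB = {
    (4, "float"):  {"precision": "fp16"},
    (4, "int"):    {"precision": "int8"},
    (16, "float"): {"precision": "fp32"},
}

def configure_serial_interface(exchange_message_types):
    """Select one schema per exchange message type and return the resulting
    per-message-type precision configuration."""
    config = {}
    for mtype in exchange_message_types:
        key = (mtype["n_vars"], mtype["var_type"])
        config[mtype["name"]] = SCHEMA_DB[key]["precision"]
    return config
```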
  • Patent number: 11625282
    Abstract: Systems and methods are provided for remote submission and execution of machine learning models. Embodiments in accordance with the present disclosure enable an instance of a notebook client running on a user terminal to remotely submit model code entered into the cells of the instance to a selected training cluster. The instance is instantiated without configuring the instance with a specific compute engine. A management system communicatively coupled to the user terminal and the training clusters maintains a data structure including configuration parameters for the training clusters. The instance receives a selection of a training cluster and is provided with the configuration parameters from the management system for the selected training cluster for attaching the training cluster to the instance.
    Type: Grant
    Filed: March 11, 2020
    Date of Patent: April 11, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Kartik Mathur, Ritesh Jaltare, Saurabh Jogalekar
  • Patent number: 11625268
Abstract: This invention relates to computer engineering and operating system architecture; in particular, it discloses a new method of interaction among operating system components and tasks by means of an interface bus. It introduces an OS interface bus element that is part of the kernel and acts similarly to a known standard device interface bus, but for all OS components and tasks. Besides, the invention further expands the bus functions with the possibility of simultaneous execution of components created for different generations of an OS and its microkernels, providing application compatibility with any OS and microkernel version without recompilation, saving user investments, reducing application developers' software maintenance costs, and providing for OS component reuse. This result is achieved by the use of unique component identifiers that take their generations into account, and by the creation of interface bus access interfaces corresponding to OS component generations.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: April 11, 2023
    Assignee: LIMITED LIABILITY COMPANY “PEERF”
    Inventors: Vladimir Nikolaevich Bashev, Nikolay Olegovich Ilyin
  • Patent number: 11625285
    Abstract: Techniques are provided for assigning workloads in a multi-node processing environment using resource allocation feedback from each node. One method comprises obtaining feedback from distributed nodes that process workloads, wherein the feedback for a given node indicates (i) an allocation of resources, and (ii) a number of executing workloads. In response to receiving a given workload to be processed, candidate nodes are identified to execute the given workload; and the given workload is assigned to a given candidate node based on an amount of available resources on each candidate node and/or a stability of resource adjustments made for each candidate node. The stability of the resource adjustments made for each candidate node can be evaluated based on a maximum resource adjustment made for a given candidate node relative to a maximum resource adjustment made for each of the candidate nodes.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: April 11, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Eduardo Vera Sousa, Edward José Pacheco Condori, Tiago Salviano Calmon, Vinícius Michel Gottin
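The assignment step above can be sketched as a scoring function over candidate nodes. Per the abstract, stability is judged from a node's maximum resource adjustment relative to the maximum adjustment across all candidates; combining that with free resources into a single score, and the particular weighting, are assumptions of this sketch.

```python
def pick_node(candidates):
    """Assign a workload to the candidate node with the best mix of available
    resources and stability of recent resource adjustments.

    candidates: dicts with 'name', 'free' (available resources), and
    'max_adjustment' (largest recent resource adjustment on that node).
    """
    peak = max(c["max_adjustment"] for c in candidates) or 1  # avoid /0

    def score(c):
        # A node whose adjustments are small relative to the worst swing
        # among candidates is considered stable.
        stability = 1.0 - c["max_adjustment"] / peak
        return c["free"] * (1.0 + stability)  # weighting is an assumption

    return max(candidates, key=score)["name"]
```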
  • Patent number: 11620155
    Abstract: A device may receive a job request associated with a data processing job, including job timing data specifying a time at which the data processing job is to be executed by a virtual computing environment. The device may receive user data associated with the job request and validate the data processing job based on the user data. In addition, the device may identify a priority associated with the data processing job, based on the user data and the job timing data. The device may provide, to a job queue, job data that corresponds to the data processing job, and monitor the virtual computing environment to determine when virtual resources are available. The device may also determine, based on the monitoring, that a virtual resource is available and, based on the determination and the priority, provide the virtual resource with data that causes execution of the data processing job.
    Type: Grant
    Filed: April 7, 2020
    Date of Patent: April 4, 2023
    Assignee: Capital One Services, LLC
    Inventors: Ming Yuan, Vijayalakshmi Veeraraghavan, Preet Kamal Bawa, Lance Creath, Alec Fekete
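The queue-and-dispatch loop above can be sketched with a priority heap: jobs carry a priority and a requested execution time, and a job is handed out only when monitoring reports an available virtual resource. The ordering rule (priority first, then requested time) and all names are illustrative assumptions.

```python
import heapq
import itertools

_counter = itertools.count()  # tie-breaker so equal-priority jobs stay FIFO

def enqueue(job_queue, job, priority, run_at):
    """Provide job data to the job queue; lower priority numbers run first,
    then earlier requested execution times."""
    heapq.heappush(job_queue, (priority, run_at, next(_counter), job))

def dispatch(job_queue, virtual_resource_available):
    """When monitoring determines a virtual resource is available, provide it
    with the highest-priority job; otherwise keep the queue untouched."""
    if virtual_resource_available and job_queue:
        _, _, _, job = heapq.heappop(job_queue)
        return job
    return None
```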
  • Patent number: 11609787
    Abstract: The present disclosure relates to an FPGA-based dynamic graph processing method, comprising: where graph mirrors of a dynamic graph that have successive timestamps define an increment therebetween, a pre-processing module dividing the graph mirror having the latter timestamp into at least one path unit in a manner that incremental computing for any vertex only depends on a preorder vertex of that vertex; an FPGA processing module storing at least two said path units into an on-chip memory directly linked to threads in a manner that every thread unit is able to process the path unit independently; the thread unit determining an increment value between the successive timestamps of the preorder vertex while updating a state value of the preorder vertex, and transferring the increment value to a succeeding vertex adjacent to the preorder vertex in a transfer direction determined by the path unit, so as to update the state value of the succeeding vertex.
    Type: Grant
    Filed: July 16, 2020
    Date of Patent: March 21, 2023
    Assignee: HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Xiaofei Liao, Yicheng Chen, Yu Zhang, Hai Jin, Jin Zhao, Xiang Zhao, Beibei Si
  • Patent number: 11586454
Abstract: A guest operating system (OS) of a virtual machine (VM) receives a first request from an application to enable memory deduplication for a memory page associated with the application, identifies a mergeable memory range for memory space of the guest OS, where the mergeable memory range is associated with guest OS memory pages to be deduplicated, and maps, in a page table of the guest OS, a page table entry for the memory page to a memory address within the mergeable memory range. The guest OS causes a hypervisor to enable deduplication for the memory page responsive to detecting an access of the memory page by the application.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: February 21, 2023
    Assignee: Red Hat, Inc.
    Inventors: Michael Tsirkin, Andrea Arcangeli
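The hypervisor-side effect of deduplication — identical pages within a mergeable range collapsing to one canonical copy, as done by content-based mergers such as Linux KSM — can be simulated as below. This models only the merge outcome, not the patented guest page-table mapping; the names are illustrative.

```python
def merge_mergeable_range(pages):
    """Map each page in the mergeable range to one canonical copy per distinct
    content.

    pages: dict of page_id -> page contents (bytes).
    Returns page_id -> page_id of the canonical copy identical pages share.
    """
    canonical = {}  # contents -> first page_id seen with those contents
    mapping = {}
    for pid, contents in pages.items():
        mapping[pid] = canonical.setdefault(bytes(contents), pid)
    return mapping
```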
  • Patent number: 11573815
    Abstract: Systems and methods for supporting dynamic power management states for virtual machine (VM) migration are disclosed. In one implementation, a processing device may generate, by a host computer system, a host power management data structure specifying a plurality of power management states of the host computer system. The processing device may also detect that a VM has been migrated to the host computer system. The processing device may then prevent the VM from performing power management operations and may cause the virtual machine to read the host power management data structure. Responsive to receiving a notification that the VM has read the host power management data structure, the processing device may enable the VM to enter a first power management state of the plurality of power management states.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: February 7, 2023
    Assignee: Red Hat, Inc.
    Inventor: Michael Tsirkin
  • Patent number: 11573823
Abstract: In an approach, a processor, responsive to a request to perform a plurality of applications including a first application and a second application, determines that the first application and the second application have been performed sequentially during a previous time period. A processor, responsive to determining that the first and second applications have been performed in sequence during the previous time period, obtains a first set of database operations associated with the first application and a second set of database operations associated with the second application. A processor, responsive to determining that the first set of database operations and the second set of database operations are free of conflict, generates an execution schedule indicating that the first application and the second application are to be performed in parallel. A processor performs the plurality of applications based on the execution schedule.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: February 7, 2023
    Assignee: International Business Machines Corporation
    Inventors: Shuo Li, ShengYan Sun, Xiaobo Wang, Hong Mei Zhang
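The conflict test above can be sketched with the classic read/write rule: two sets of database operations are free of conflict when neither writes a table the other reads or writes (read/read is safe). The operation encoding and function names are assumptions of this sketch.

```python
def conflict_free(ops_a, ops_b):
    """True when neither application writes a table the other touches.

    ops: iterable of (verb, table) pairs, verb in {"read", "write"}.
    """
    touched = lambda ops: {table for _, table in ops}
    written = lambda ops: {table for verb, table in ops if verb == "write"}
    return (written(ops_a).isdisjoint(touched(ops_b))
            and written(ops_b).isdisjoint(touched(ops_a)))

def schedule(app_a, app_b, ops_a, ops_b):
    """Generate an execution schedule: parallel when the two applications'
    database operations are free of conflict, otherwise sequential."""
    mode = "parallel" if conflict_free(ops_a, ops_b) else "sequential"
    return {"apps": (app_a, app_b), "mode": mode}
```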
  • Patent number: 11567792
    Abstract: An instruction to generate a cloud instantiation of a secondary storage system is provided. One or more secondary storage clusters are virtually rebuilt in the cloud instantiation of the secondary storage system. A new cloud instance of a user virtual machine is deployed based on at least a portion of data stored in the one or more rebuilt secondary storage clusters of the cloud instantiation of the secondary storage system. A version of at least the portion of the data of the one or more rebuilt secondary storage clusters is provided to a cloud deployment server.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: January 31, 2023
    Assignee: Cohesity, Inc.
    Inventors: Venkata Ranga Radhanikanth Guturi, Tushar Mahata, Praveen Kumar Yarlagadda, Vipin Gupta
  • Patent number: 11567799
Abstract: Described herein is a system and method for determining the status of instances and updating applications to reflect the updated statuses of instances in real time. In an embodiment, each instance may enable a service to determine the status of an instance. A core application server may load server pool configurations including a status of an instance. The status indicates the instance is live. The core application server may read a gate definition of the instance using the service enabled on the instance. The core application server may determine that a current status of the instance is virtual, based on the gate definition of the instance. The core application server may update a local cache of the core application server to reflect that the current status of the instance is virtual and propagate the update to applications executed on other instances and on the core application server.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: January 31, 2023
    Assignee: Salesforce, Inc.
    Inventors: Kranthi Baddepuri, Siang Hao Darren Poh
  • Patent number: 11561838
    Abstract: Systems and methods are provided for fail-safe loading of information on a user interface, comprising receiving, via a modular platform, requests for access to a mobile application platform from a plurality of mobile devices, opening and directing the requests for access to the mobile application platform to a sequential processor of an application programming interface (API) gateway when a parallel processor of the API gateway is unresponsive to requests for access to the mobile application platform for a predetermined period of time, periodically checking a status of the parallel processor, and redirecting the requests for access to the mobile application platform to the parallel processor when the parallel processor is capable of processing requests for access to the mobile application platform.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: January 24, 2023
    Assignee: Coupang Corp.
    Inventors: Yong Seok Jang, Hong Gwi Joo
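The fail-over logic above can be sketched as a small router: requests go to the parallel processor; after it has been unresponsive beyond a threshold, requests are redirected to the sequential processor; and a periodic status check (modeled here as `record_response`) fails back once the parallel processor answers again. The class and its interface are illustrative assumptions.

```python
class ApiGatewayRouter:
    """Route requests between the API gateway's parallel and sequential
    processors based on the parallel processor's responsiveness."""

    def __init__(self, timeout=30.0):
        self.timeout = timeout            # predetermined unresponsive period
        self.last_parallel_response = 0.0

    def record_response(self, now):
        """Called whenever the parallel processor responds (e.g. by the
        periodic status check)."""
        self.last_parallel_response = now

    def route(self, now):
        """Pick the processor for an incoming request at time `now`."""
        unresponsive_for = now - self.last_parallel_response
        return "sequential" if unresponsive_for >= self.timeout else "parallel"
```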
  • Patent number: 11561825
    Abstract: A cloud computer system is provided that includes a plurality of computer devices and a database. The plurality of computer devices execute a plurality of virtual machines, with one of the virtual machines serving as a controller node and the remainder serving as worker instances. The controller node is programmed to accept a request to initiate a distributed process that includes a plurality of data jobs, determine a number of worker instances to create across the plurality of computer devices, and cause the number of worker instances to be created on the plurality of computer devices. The worker instances are programmed to create a unique message queue for the corresponding worker instance, and store a reference for the unique message queue that was created for the corresponding worker to the database. The controller node retrieves the reference to the unique message queues and posts jobs to the message queues for execution by the worker instances.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: January 24, 2023
    Assignee: NASDAQ TECHNOLOGY AB
    Inventor: Jonas Nordin
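The controller/worker wiring above can be sketched as follows: each worker instance creates its own unique message queue and stores a reference to it in a shared database, and the controller retrieves those references to post jobs. The in-process dict standing in for the database, the round-robin posting, and all names are assumptions of this sketch.

```python
from queue import Queue

QUEUE_DB = {}  # stands in for the database of queue references

def create_worker(worker_id):
    """Each worker instance creates a unique message queue and stores a
    reference to it in the shared database."""
    q = Queue()
    QUEUE_DB[worker_id] = q
    return q

def post_jobs(jobs):
    """The controller node retrieves the queue references and posts the
    distributed process's data jobs to them (round-robin here)."""
    refs = list(QUEUE_DB.values())
    for i, job in enumerate(jobs):
        refs[i % len(refs)].put(job)
```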
  • Patent number: 11556363
    Abstract: Techniques for transferring virtual machines and resource management in a virtualized computing environment are described. In one embodiment, for example, an apparatus may include at least one memory, at least one processor, and logic for transferring a virtual machine (VM), at least a portion of the logic comprised in hardware coupled to the at least one memory and the at least one processor, the logic to generate a plurality of virtualized capability registers for a virtual device (VDEV) by virtualizing a plurality of device-specific capability registers of a physical device to be virtualized by the VM, the plurality of virtualized capability registers comprising a plurality of device-specific capabilities of the physical device, determine a version of the physical device to support via a virtual machine monitor (VMM), and expose a subset of the virtualized capability registers associated with the version to the VM. Other embodiments are described and claimed.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: January 17, 2023
    Assignee: INTEL CORPORATION
    Inventors: Sanjay Kumar, Philip R. Lantz, Kun Tian, Utkarsh Y. Kakaiya, Rajesh M. Sankaran
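The register-exposure step above can be sketched as filtering the virtualized capability registers by the device version the VMM chooses to support. The register names, the (value, minimum-version) encoding, and both functions are illustrative assumptions, not the actual register layout.

```python
def virtualize_registers(device_caps):
    """Mirror the physical device's device-specific capability registers into
    virtualized capability registers (modeled here as a plain copy)."""
    return dict(device_caps)

def expose_for_version(virt_regs, version):
    """Expose to the VM only the subset of virtualized capability registers
    associated with the supported device version."""
    return {name: val for name, (val, min_ver) in virt_regs.items()
            if min_ver <= version}
```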
  • Patent number: 11556390
    Abstract: The present disclosure relates to systems and methods to implement efficient high-bandwidth shared memory systems particularly suited for parallelizing and operating large scale machine learning and AI computing systems necessary to efficiently process high volume data sets and streams.
    Type: Grant
    Filed: October 2, 2019
    Date of Patent: January 17, 2023
    Assignee: Brainworks Foundry, Inc.
    Inventors: Phillip Alvelda, VII, Markus Krause, Todd Allen Stiers
  • Patent number: 11556391
    Abstract: One or more aspects of the present disclosure relate to service level input/output scheduling to control central processing unit (CPU) utilization. Input/output (I/O) operations are processed with one or more of a first CPU pool and a second CPU pool of two or more CPU pools. The second CPU pool processes I/O operations that are determined to stall any of the CPU cores.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: January 17, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: John Creed, Owen Martin, Andrew Chanler
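The two-pool routing above can be sketched as a dispatcher that sends I/O operations predicted to stall a CPU core to the second pool, keeping the first pool's cores fully utilized. The stall predictor and all names here are illustrative assumptions.

```python
def assign_cpu_pool(io_op, stall_predictor):
    """Route an I/O operation: ones determined to stall a CPU core go to the
    second pool; the rest stay on the first pool."""
    return "pool-2" if stall_predictor(io_op) else "pool-1"

def schedule_ios(io_ops, stall_predictor):
    """Partition a batch of I/O operations across the two CPU pools."""
    pools = {"pool-1": [], "pool-2": []}
    for op in io_ops:
        pools[assign_cpu_pool(op, stall_predictor)].append(op)
    return pools
```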
  • Patent number: 11550614
    Abstract: Techniques for packaging and deploying algorithms utilizing containers for flexible machine learning are described. In some embodiments, users can create or utilize simple containers adhering to a specification of a machine learning service in a provider network, where the containers include code for how a machine learning model is to be trained and/or executed. The machine learning service can automatically train a model and/or host a model using the containers. The containers can use a wide variety of algorithms and use a variety of types of languages, libraries, data types, etc. Users can thus implement machine learning training and/or hosting with extremely minimal knowledge of how the overall training and/or hosting is actually performed.
    Type: Grant
    Filed: October 9, 2020
    Date of Patent: January 10, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Thomas Albert Faulhaber, Jr., Gowda Dayananda Anjaneyapura Range, Jeffrey John Geevarghese, Taylor Goodhart, Charles Drummond Swan
  • Patent number: 11550513
    Abstract: Container images are managed in a clustered container host system with a shared storage device. Hosts of the system each include a virtualization software layer that supports execution of virtual machines (VMs), one or more of which are pod VMs that have implemented therein a container engine that supports execution of containers within the respective pod VM. A method of deploying containers includes determining, from pod objects published by a master device of the system and accessible by all hosts of the system, that a new pod VM is to be created, creating the new pod VM, and spinning up one or more containers in the new pod VM using images of containers previously spun up in another pod VM, wherein the images of the containers previously spun up in the other pod VM are stored in the storage device.
    Type: Grant
    Filed: January 24, 2020
    Date of Patent: January 10, 2023
    Assignee: VMware, Inc.
    Inventor: Benjamin J. Corrie