Patents Examined by Camquy Truong
  • Patent number: 11068305
    Abstract: Representative apparatus, method, and system embodiments are disclosed for a self-scheduling processor which also provides additional functionality. Representative embodiments include a self-scheduling processor, comprising: a processor core adapted to execute a received instruction; and a core control circuit adapted to automatically schedule an instruction for execution by the processor core in response to a received work descriptor data packet. In another embodiment, the core control circuit is also adapted to schedule a fiber create instruction for execution by the processor core, to reserve a predetermined amount of memory space in a thread control memory to store return arguments, and to generate one or more work descriptor data packets to another processor or hybrid threading fabric circuit for execution of a corresponding plurality of execution threads. Event processing, data path management, system calls, memory requests, and other new instructions are also disclosed.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: July 20, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Tony M. Brewer
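    A minimal Python sketch of the behavior the abstract describes for the core control circuit: a received work descriptor packet is automatically turned into schedulable work for the core. This is illustrative only; all class, field, and function names are hypothetical and the real invention is a hardware circuit, not software.

      from collections import deque
      from dataclasses import dataclass

      @dataclass
      class WorkDescriptor:
          # Hypothetical fields: which routine to run and its call arguments.
          entry_point: str
          args: tuple = ()

      class CoreControl:
          """Toy model: queues incoming work descriptor packets and dispatches them to a core."""
          def __init__(self, core):
              self.core = core
              self.pending = deque()

          def receive_packet(self, wd: WorkDescriptor):
              # A received work descriptor packet automatically becomes schedulable work.
              self.pending.append(wd)

          def run(self):
              while self.pending:
                  wd = self.pending.popleft()
                  self.core(wd.entry_point, *wd.args)

      if __name__ == "__main__":
          cc = CoreControl(core=lambda name, *a: print(f"executing {name}{a}"))
          cc.receive_packet(WorkDescriptor("vector_add", (1, 2)))
          cc.receive_packet(WorkDescriptor("fiber_create", (4,)))
          cc.run()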
  • Patent number: 11070628
    Abstract: Systems and methods for storage resource and computation resource expansion. A method embodiment includes migrating a computing task from a first, external computing environment to a second computing/storage environment. The method commences by identifying a storage system having virtualized controllers and by identifying a computing device that performs a workload that interfaces with the storage system. The virtualized controllers execute in the second computing environment to manage access to storage target devices by accessing a storage target device identified by an IP address. A particular virtualized controller that is connected to the storage target device is selected and configured to process storage I/O from the migrated workload. A user virtual machine or user executable container is configured to execute the workload on one of the nodes in the computing and storage system within the second computing environment.
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: July 20, 2021
    Assignee: Nutanix, Inc.
    Inventors: Tabrez Memon, Jaya Singhvi, Miao Cui, Binny Sher Gill
  • Patent number: 11061703
    Abstract: A portion of a native memory is configured as a buffer within a native execution environment. Execution of a managed runtime code is initiated by a virtual machine. Data from a managed runtime memory of the virtual machine is marshaled by the virtual machine into the buffer. Control of execution is transferred from the managed runtime code to the native code. The native code is executed. The native code operates directly upon the marshaled data in the buffer.
    Type: Grant
    Filed: January 10, 2019
    Date of Patent: July 13, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Andrew James Craik
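    A rough Python analogue of the pattern the abstract describes: managed data is marshaled into a pre-allocated raw buffer, and "native-style" code then reads that memory in place. Illustrative only; the patent concerns a managed runtime such as a JVM, and the buffer size and function names here are assumptions.

      import ctypes
      import struct

      # Reserve a raw, fixed-size buffer standing in for the native-memory region.
      BUF_SIZE = 64
      native_buf = ctypes.create_string_buffer(BUF_SIZE)

      def marshal_into_buffer(values):
          """Pack managed-side integers into the raw buffer (the 'marshaling' step)."""
          struct.pack_into(f"{len(values)}i", native_buf, 0, *values)
          return len(values)

      def native_sum(buf, count):
          """Stand-in for native code: reads the raw bytes in place, without extra copies."""
          return sum(struct.unpack_from(f"{count}i", buf, 0))

      if __name__ == "__main__":
          n = marshal_into_buffer([3, 5, 7])
          print(native_sum(native_buf, n))  # 15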
  • Patent number: 11061715
    Abstract: A technique for operating a computer system to support an application, a first application server environment, and a second application server environment includes intercepting a work request relating to the application issued to the first application server environment prior to execution of the work request. A thread adapted for execution in the first application server environment is created. A context is attached to the thread that non-disruptively modifies the thread into a hybrid thread that is additionally suitable for execution in the second application server environment. The hybrid thread is returned to the first application server environment.
    Type: Grant
    Filed: March 5, 2019
    Date of Patent: July 13, 2021
    Assignee: International Business Machines Corporation
    Inventors: Fraser Bohm, Ivan D. Hargreaves, Julian Horn, Ian J. Mitchell
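    A small Python sketch of the intercept-and-augment idea in the abstract: a work request is intercepted, a thread is created for the first environment, and extra context is attached so the same thread is also usable in a second environment. All names are hypothetical; the patent targets application-server environments rather than plain threads.

      import threading

      class HybridContext:
          """Hypothetical context that a second server environment would recognize."""
          def __init__(self, transaction_id):
              self.transaction_id = transaction_id

      def intercept(work_request, handler):
          # Create a thread suited to the first environment...
          t = threading.Thread(target=handler, args=(work_request,), name="env1-worker")
          # ...and non-disruptively attach second-environment context to it.
          t.hybrid_context = HybridContext(transaction_id=work_request["id"])
          return t  # returned to the first environment, which starts it as usual

      if __name__ == "__main__":
          req = {"id": 42, "op": "update-order"}
          worker = intercept(req, lambda r: print(
              "handling", r, "with ctx",
              threading.current_thread().hybrid_context.transaction_id))
          worker.start()
          worker.join()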
  • Patent number: 11061729
    Abstract: Systems and methods for throttling logging processes in presence of system resource contention. Logging processes that contend with non-logging processes for resources can sometimes be throttled to more equitably share system resources. A method embodiment commences by establishing a set of throttling rules that are to be observed by the logging processes running on the system. While logging processes and non-logging processes are running, a monitor records system resource usage and other system conditions. When a process manager determines that the resources consumed by the combination of the logging processes and the non-logging processes exceed a threshold, then any currently-applicable throttling rules fire so as to prescribe throttling levels.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: July 13, 2021
    Assignee: Nutanix, Inc.
    Inventors: Bhawani Singh, Rachit Sinha, Buchibabu Chennupati
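    A minimal Python sketch of threshold-triggered throttling rules of the kind the abstract describes: once combined resource usage crosses a threshold, the applicable rule prescribes a throttling level for logging processes. The rule set, percentages, and levels here are made up for illustration.

      # Toy rule table: (minimum combined CPU %, logging throttle level to prescribe)
      THROTTLE_RULES = [
          (90, "pause"),
          (75, "slow"),
          (0,  "normal"),
      ]

      def prescribe_throttle(logging_cpu, non_logging_cpu, threshold=75):
          """Fire the first applicable rule once combined usage exceeds the threshold."""
          combined = logging_cpu + non_logging_cpu
          if combined <= threshold:
              return "normal"
          for floor, level in THROTTLE_RULES:
              if combined >= floor:
                  return level
          return "normal"

      if __name__ == "__main__":
          print(prescribe_throttle(logging_cpu=20, non_logging_cpu=40))  # normal
          print(prescribe_throttle(logging_cpu=30, non_logging_cpu=50))  # slow
          print(prescribe_throttle(logging_cpu=45, non_logging_cpu=50))  # pause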
  • Patent number: 11055130
    Abstract: A method includes accessing a first-in-first-out work control structure (WCS) that holds work control records (WCRs), each including a field defining the work to be carried out and a completion indicator that is initially set to indicate that the work has not completed. Upon fetching a work request (WR) for execution, a WCR corresponding to the WR is pushed to the WCS, and then: A) the WCR at the head of the WCS is inspected; B) when its completion indicator shows that the associated unit of work has completed, that WCR is popped from the WCS and its completion is reported to a host processor; and C) steps A and B are performed iteratively. Related apparatus and methods are also provided.
    Type: Grant
    Filed: September 15, 2019
    Date of Patent: July 6, 2021
    Assignee: MELLANOX TECHNOLOGIES, LTD.
    Inventors: Ariel Shahar, Roee Moyal
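    A minimal Python model of the FIFO work-control flow described above: records are pushed in order, and completions are popped and reported only from the head, so they are reported strictly in FIFO order. Illustrative only; names and the reporting callback are hypothetical.

      from collections import deque

      class WorkControlRecord:
          def __init__(self, description):
              self.description = description
              self.completed = False  # completion indicator, initially "not completed"

      class WorkControlStructure:
          """First-in-first-out structure of work control records (toy model)."""
          def __init__(self, report):
              self.fifo = deque()
              self.report = report  # stand-in for reporting completion to a host processor

          def push(self, wcr):
              self.fifo.append(wcr)

          def drain_completed(self):
              # Inspect the head; pop and report only while the head has completed.
              while self.fifo and self.fifo[0].completed:
                  self.report(self.fifo.popleft().description)

      if __name__ == "__main__":
          wcs = WorkControlStructure(report=lambda d: print("completed:", d))
          a, b = WorkControlRecord("send A"), WorkControlRecord("send B")
          wcs.push(a)
          wcs.push(b)
          b.completed = True
          wcs.drain_completed()   # nothing reported: head (A) not yet done
          a.completed = True
          wcs.drain_completed()   # reports A, then B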
  • Patent number: 11048535
    Abstract: A method for transmitting a data packet based on a virtual machine is provided. A direct through-connection is established between the virtual machine and a network interface card. A data packet transmitted by a driver layer of the virtual machine is detected. An encapsulation parameter obtaining request is transmitted to a virtual machine monitor corresponding to the virtual machine, and encapsulation information and an encapsulation parameter are received in response to the encapsulation parameter obtaining request. The data packet is encapsulated according to the encapsulation information and the encapsulation parameter, and the encapsulated data packet is added to a hardware transmitting queue of the network interface card by using the direct through-connection to transmit the encapsulated data packet.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: June 29, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Hua Liu
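    A toy Python sketch of the encapsulate-then-enqueue path the abstract outlines: encapsulation parameters are obtained from the virtual machine monitor, the packet is wrapped, and the result is appended to the NIC's hardware transmit queue. The header format and field values are invented for illustration and do not reflect the patented protocol.

      class VirtualMachineMonitor:
          def get_encapsulation(self, request):
              # Returns encapsulation information and parameters (hypothetical values).
              return {"outer_header": b"VXLAN", "vni": 42}

      class NetworkCard:
          def __init__(self):
              self.hw_tx_queue = []

      def transmit(packet: bytes, vmm: VirtualMachineMonitor, nic: NetworkCard):
          params = vmm.get_encapsulation({"pkt_len": len(packet)})
          encapsulated = params["outer_header"] + params["vni"].to_bytes(3, "big") + packet
          nic.hw_tx_queue.append(encapsulated)   # added directly via the through-connection
          return encapsulated

      if __name__ == "__main__":
          nic = NetworkCard()
          transmit(b"payload", VirtualMachineMonitor(), nic)
          print(nic.hw_tx_queue)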
  • Patent number: 11048539
    Abstract: An example computer system is provided that utilizes an agent to operate autonomously in transitioning virtual machines between an active state and an inactive state. In an implementation, a processor executes instructions to cause a computer system to implement a virtual machine communication interface, and to host a virtual machine. In some examples, the processor executes instructions to cause the computer system to utilize an agent to operate autonomously in transitioning the virtual machine from an active state to an inactive state after a period of inactivity. In other examples, the processor may execute instructions to cause the computer system to transition the virtual machine from the inactive state back to the active state upon receiving a data packet targeted for the inactive virtual machine.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: June 29, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Rupesh Shantamurty
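    A small Python sketch of the autonomous agent behavior the abstract describes: a VM is parked after a period of inactivity and woken when a packet targeted at it arrives. Timings, state names, and classes are hypothetical.

      import time

      class VirtualMachine:
          def __init__(self, name):
              self.name = name
              self.state = "active"
              self.last_activity = time.monotonic()

      class IdleAgent:
          """Autonomously parks idle VMs and wakes them when traffic arrives (toy model)."""
          def __init__(self, idle_seconds):
              self.idle_seconds = idle_seconds

          def poll(self, vm):
              if vm.state == "active" and time.monotonic() - vm.last_activity > self.idle_seconds:
                  vm.state = "inactive"

          def on_packet(self, vm, packet):
              if vm.state == "inactive":
                  vm.state = "active"          # wake on a packet targeted at the VM
              vm.last_activity = time.monotonic()

      if __name__ == "__main__":
          vm, agent = VirtualMachine("vm0"), IdleAgent(idle_seconds=0.1)
          time.sleep(0.2)
          agent.poll(vm)
          print(vm.state)                      # inactive
          agent.on_packet(vm, b"wake")
          print(vm.state)                      # active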
  • Patent number: 11042470
    Abstract: A device may be run in a timing testing mode in which the device is configured to disrupt timing of processing that takes place on the one or more processors while running an application with the one or more processors. The application may be tested for errors while the device is running in the timing testing mode.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: June 22, 2021
    Assignee: SONY INTERACTIVE ENTERTAINMENT LLC
    Inventors: Mark Evan Cerny, David Simpson
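    A simple Python illustration of the general idea of a timing testing mode: small random delays are injected around processing steps so that latent timing assumptions and race conditions surface during testing. The mode switch, jitter bounds, and function names are assumptions, not the patented mechanism.

      import random
      import time

      TIMING_TEST_MODE = True  # hypothetical switch corresponding to the testing mode

      def maybe_jitter(max_ms=5):
          """Inject a small random delay to disrupt the normal timing of processing."""
          if TIMING_TEST_MODE:
              time.sleep(random.uniform(0, max_ms) / 1000.0)

      def frame_update(state):
          maybe_jitter()          # perturb timing before and after each step so latent
          state["frame"] += 1     # race conditions and timing assumptions surface
          maybe_jitter()
          return state

      if __name__ == "__main__":
          s = {"frame": 0}
          for _ in range(3):
              frame_update(s)
          print(s)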
  • Patent number: 11042405
    Abstract: Techniques for scheduling and executing functions across a plurality of different Functions-as-a-Service (FaaS) infrastructures are provided. In one set of embodiments, a computer system can determine that a function has been invoked, where the computer system implements a spanning FaaS service platform that is communicatively coupled with the plurality of different FaaS infrastructures. In response, the computer system can retrieve metadata associated with the function, where the metadata includes criteria or policies indicating how the function should be scheduled for execution, and can retrieve information associated with each of the plurality of different FaaS infrastructures, where the information includes capabilities or characteristics of each FaaS infrastructure.
    Type: Grant
    Filed: January 10, 2019
    Date of Patent: June 22, 2021
    Assignee: VMware, Inc.
    Inventors: Berndt Jung, Mark Peek, Xueyang Hu, Ivan Mikushin, Karol Stepniewski
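    A toy Python sketch of the matching step the abstract describes: the spanning platform compares a function's scheduling policy against the capabilities of several FaaS infrastructures and picks one. All infrastructure names, capabilities, and policy fields here are made up.

      FUNCTION_METADATA = {
          "resize-image": {"requires": {"gpu"}, "prefer_region": "eu"},
      }

      INFRASTRUCTURES = {
          "faas-a": {"capabilities": {"gpu"}, "region": "us"},
          "faas-b": {"capabilities": {"gpu"}, "region": "eu"},
          "faas-c": {"capabilities": set(), "region": "eu"},
      }

      def schedule(function_name):
          meta = FUNCTION_METADATA[function_name]
          candidates = [name for name, info in INFRASTRUCTURES.items()
                        if meta["requires"] <= info["capabilities"]]
          # Prefer an infrastructure in the requested region, else any capable one.
          preferred = [n for n in candidates
                       if INFRASTRUCTURES[n]["region"] == meta["prefer_region"]]
          return (preferred or candidates or [None])[0]

      if __name__ == "__main__":
          print(schedule("resize-image"))  # faas-b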
  • Patent number: 11036562
    Abstract: A method comprises: obtaining service data identifier information of a data record of streaming data, a to-be-processed real-time value of the data record, and a time sequence characteristic of the to-be-processed real-time value of the data record, the identifier information representing service data; obtaining a time sequence characteristic of a processed real-time value of the service data based on a correspondence relationship between the service data identifier information and the time sequence characteristic of the processed real-time value; and comparing the time sequence characteristic of the to-be-processed real-time value with the time sequence characteristic of the processed real-time value, and in response to determining that the time sequence characteristic of the to-be-processed real-time value is later than the time sequence characteristic of the processed real-time value, updating the time sequence characteristic of the processed real-time value to the time sequence characteristic of the to-be-processed real-time value.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: June 15, 2021
    Assignee: ADVANCED NEW TECHNOLOGIES CO., LTD.
    Inventors: Chenglin Feng, Liang Luo
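    A compact Python sketch of the update rule the abstract describes: for each service-data identifier, keep only the value whose time sequence characteristic is later than the one already processed. Record fields and timestamps here are illustrative.

      # identifier -> (time sequence characteristic, processed value)
      latest = {}

      def process(record):
          ident, value, ts = record["id"], record["value"], record["ts"]
          seen = latest.get(ident)
          if seen is None or ts > seen[0]:
              latest[ident] = (ts, value)   # incoming record is newer: update
              return "updated"
          return "discarded"                # stale or duplicate record: ignore

      if __name__ == "__main__":
          print(process({"id": "acct-1", "value": 100, "ts": 10}))  # updated
          print(process({"id": "acct-1", "value": 90,  "ts": 8}))   # discarded (older)
          print(process({"id": "acct-1", "value": 120, "ts": 11}))  # updated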
  • Patent number: 11036535
    Abstract: A data storage method and a physical server are provided. M virtual machines are deployed on a plurality of physical servers. The M virtual machines are respectively deployed as M data nodes in a distributed storage system. A metadata node in the distributed storage system receives a data storage request of a client, and determines identifiers of N virtual machines from the M virtual machines based on stored grouping information. The grouping information records a mapping relationship between a plurality of anti-affinity groups and identifiers of the M virtual machines.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: June 15, 2021
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yong Zhong, Ming Lin, Ruilin Peng, Jun Zhao
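    A toy Python sketch of anti-affinity placement as described above: N data nodes are chosen from the M virtual machines so that no two chosen VMs belong to the same anti-affinity group. The grouping data is invented for illustration.

      GROUPING = {            # anti-affinity group -> VM identifiers (hypothetical)
          "rack-1": ["vm-1", "vm-2"],
          "rack-2": ["vm-3"],
          "rack-3": ["vm-4", "vm-5"],
      }

      def choose_nodes(n):
          chosen = []
          for group, vms in GROUPING.items():
              if len(chosen) == n:
                  break
              chosen.append(vms[0])   # at most one VM per anti-affinity group
          if len(chosen) < n:
              raise ValueError("not enough anti-affinity groups for the requested replicas")
          return chosen

      if __name__ == "__main__":
          print(choose_nodes(3))   # one VM from each of three different groups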
  • Patent number: 11023275
    Abstract: Technologies for managing a queue on a compute device are disclosed. In the illustrative embodiment, the queue is managed by a host fabric interface of the compute device. Queue operations such as enqueuing data onto the queue and dequeuing data from the queue may be requested by remote compute devices by sending queue operations which may be processed by the host fabric interface. The host fabric interface may, in some embodiments, fully manage the queue without any assistance from the processor of the compute device. In other embodiments, the processor of the compute device may be responsible for certain tasks, such as garbage collection.
    Type: Grant
    Filed: February 9, 2017
    Date of Patent: June 1, 2021
    Assignee: Intel Corporation
    Inventors: James Dinan, Mario Flajslik, Timo Schneider
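    A toy Python model of the queue-operation handling the abstract describes: the host fabric interface services enqueue and dequeue requests from remote devices without involving the local processor. The operation format is an assumption for illustration.

      from collections import deque

      class HostFabricInterface:
          """Toy model: applies queue operations received from remote compute devices."""
          def __init__(self):
              self.queue = deque()

          def handle(self, op):
              if op["kind"] == "enqueue":
                  self.queue.append(op["data"])
                  return {"ok": True}
              if op["kind"] == "dequeue":
                  return {"ok": bool(self.queue),
                          "data": self.queue.popleft() if self.queue else None}
              return {"ok": False}

      if __name__ == "__main__":
          hfi = HostFabricInterface()
          hfi.handle({"kind": "enqueue", "data": "msg-1"})   # request from remote node A
          print(hfi.handle({"kind": "dequeue"}))             # request from remote node B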
  • Patent number: 11023998
    Abstract: An apparatus is provided which comprises: a first engine buffer to receive a first engine request; a first engine register coupled to the first engine buffer, wherein the first engine register is to store first engine credits associated with the first engine buffer; a second engine buffer to receive a second engine request; a second engine register coupled to the second engine buffer, wherein the second engine register is to store second engine credits associated with the second engine buffer; and a common buffer which is common to the first and second engines, wherein the first engine credits represent one or more slots in the common buffer for servicing the first engine request for access to a common resource, and wherein the second engine credits represent one or more slots in the common buffer for servicing the second engine request for access to the common resource.
    Type: Grant
    Filed: April 2, 2019
    Date of Patent: June 1, 2021
    Assignee: Intel Corporation
    Inventors: Nicolas Kacevas, Niranjan L. Cooray, Madhura Joshi, Satyanarayana Nekkalapu
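    A minimal Python sketch of the credit scheme the abstract describes: each engine holds credits representing slots reserved for it in the common buffer, a request is accepted only while a credit remains, and servicing a slot returns the credit. The class and engine names are hypothetical; the invention itself is hardware.

      class CommonBuffer:
          """Toy credit-based arbitration over a buffer shared by several engines."""
          def __init__(self, credits_per_engine):
              self.credits = dict(credits_per_engine)   # engine -> remaining credits
              self.slots = []

          def request(self, engine, payload):
              if self.credits.get(engine, 0) == 0:
                  return False                 # no slot reserved for this engine right now
              self.credits[engine] -= 1
              self.slots.append((engine, payload))
              return True

          def complete(self):
              # Servicing the oldest slot returns the credit to its engine.
              engine, _ = self.slots.pop(0)
              self.credits[engine] += 1

      if __name__ == "__main__":
          buf = CommonBuffer({"engine-1": 1, "engine-2": 2})
          print(buf.request("engine-1", "req-a"))  # True
          print(buf.request("engine-1", "req-b"))  # False: engine-1 is out of credits
          buf.complete()
          print(buf.request("engine-1", "req-b"))  # True again after the credit returns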
  • Patent number: 11016814
    Abstract: Embodiments generally relate to selecting a service instance in a service infrastructure. In some embodiments, a method includes sending, by a service registry, a status request to each service instance of a plurality of service instances, where the service registry maintains a data store of performance information associated with each of the service instances. The method further includes receiving, by the service registry, a plurality of status responses, where each status response is received from a respective service instance of the plurality of service instances, and where each status response includes one or more performance characteristics. The method further includes ranking, by the service registry, the service instances based at least in part on the one or more performance characteristics. The method further includes performing, by the service registry, service lookups based on the ranking.
    Type: Grant
    Filed: November 16, 2018
    Date of Patent: May 25, 2021
    Assignee: International Business Machines Corporation
    Inventors: Uwe Hansmann, Timo Kußmaul, David Winter, Hendrik Haddorp, Udo Schoene, Andreas Prokoph, Oliver Rudolph, Anke Lüdde
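    A small Python sketch of the registry flow above: instances are polled for status, ranked by a reported performance characteristic, and lookups are answered from that ranking. The instances, latency figures, and ranking key are illustrative assumptions.

      class ServiceRegistry:
          def __init__(self, instances):
              self.instances = instances      # name -> callable returning a status dict
              self.ranking = []

          def refresh(self):
              statuses = {name: probe() for name, probe in self.instances.items()}
              # Lower reported latency ranks first; other characteristics could be folded in.
              self.ranking = sorted(statuses, key=lambda n: statuses[n]["latency_ms"])

          def lookup(self):
              return self.ranking[0] if self.ranking else None

      if __name__ == "__main__":
          registry = ServiceRegistry({
              "instance-a": lambda: {"latency_ms": 40},
              "instance-b": lambda: {"latency_ms": 12},
          })
          registry.refresh()
          print(registry.lookup())   # instance-b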
  • Patent number: 11006137
    Abstract: A scheduler of computer processes. The scheduler obtains predictions of the computing load of at least one multimedia process comprising real-time video encoding or transcoding of a video, including predictions of a target index of video quality to deliver the video over a period of time. Predictions of the available computing capacities of a cluster are also retrieved. A determination is made, based on the predictions of the computing load and the predictions of the available computing capacities, of a processing capability to allocate to the at least one multimedia process during the period of time. At least one virtual environment is created for the at least one multimedia process. The computing capacity of the at least one virtual environment is adapted to the predictions of the computing load of the at least one multimedia process during the period of time.
    Type: Grant
    Filed: February 25, 2019
    Date of Patent: May 11, 2021
    Assignee: Harmonic, Inc.
    Inventors: Eric Le Bars, Arnaud Mahe, Christophe Berthelot
  • Patent number: 10996974
    Abstract: Illustrative systems and methods enable a virtual machine (“VM”) to be powered up at any hypervisor regardless of hypervisor type, based on live-mounting VM data that was originally backed up into a hypervisor-independent format by a block-level backup operation. Afterwards, the backed up VM executes anywhere anytime without needing to find a hypervisor that is the same as or compatible with the original source VM's hypervisor. The backed up VM payload data is rendered portable to any virtualized platform. Thus, a VM can be powered up at one or more test stations, data center or cloud recovery environments, and/or backup appliances, without the prior-art limitations of finding a same/compatible hypervisor for accessing and using backed up VM data. An illustrative media agent maintains cache storage that acts as a way station for data blocks retrieved from an original backup copy, and stores data blocks written by the live-mounted VM.
    Type: Grant
    Filed: January 30, 2019
    Date of Patent: May 4, 2021
    Assignee: Commvault Systems, Inc.
    Inventors: Henry Wallace Dornemann, Amit Mitkar, Sanjay Kumar, Satish Chandra Kilaru, Sumedh Pramod Degaonkar
  • Patent number: 10990438
    Abstract: Disclosed are a method and apparatus for managing effectiveness of an information processing task in a decentralized data management system. The method comprises: sending, by a client, requests for multiple information processing tasks to multiple execution subjects, transmitting the information processing tasks in a sequential information processing task list in order to the multiple execution subjects; caching the requested information processing tasks to a task cache queue, with the sequential information processing task list cached as a whole to the task cache queue; judging whether each information processing task in the task cache queue satisfies a predetermined conflict condition; moving an information processing task to a conflict task queue if it is determined that the task satisfies the predetermined conflict condition; and deleting the task from the conflict task queue and caching it back to the task cache queue when the predetermined conflict condition is no longer satisfied.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: April 27, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Shenbin Zhang, Bingfeng Pi, Jun Sun
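    A toy Python sketch of the two-queue flow described above: tasks flow through a task cache queue, tasks that satisfy a conflict condition are parked in a conflict task queue, and they return to the cache queue once the conflict clears. The conflict condition (same key as an in-flight task) and the task fields are assumptions for illustration.

      from collections import deque

      def conflicts(task, in_flight_keys):
          # Hypothetical conflict condition: two tasks touching the same key conflict.
          return task["key"] in in_flight_keys

      def dispatch(requests):
          cache_queue, conflict_queue, in_flight = deque(requests), deque(), set()
          ordered = []
          while cache_queue or conflict_queue:
              # Retry parked tasks whose conflict has cleared.
              for _ in range(len(conflict_queue)):
                  task = conflict_queue.popleft()
                  (conflict_queue if conflicts(task, in_flight) else cache_queue).append(task)
              if not cache_queue:
                  in_flight.clear()            # pretend the earlier tasks have finished
                  continue
              task = cache_queue.popleft()
              if conflicts(task, in_flight):
                  conflict_queue.append(task)  # park it until the conflict clears
              else:
                  in_flight.add(task["key"])
                  ordered.append(task["name"])
          return ordered

      if __name__ == "__main__":
          print(dispatch([{"name": "t1", "key": "k1"},
                          {"name": "t2", "key": "k1"},   # conflicts with t1
                          {"name": "t3", "key": "k2"}]))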
  • Patent number: 10983818
    Abstract: A method for preventing a dirty virtual machine from executing on an undesirable host server includes receiving, by a caching module provided for a first host server that hosts a virtual machine, write data of the virtual machine that is to be cached for the first host server. The virtual machine uses a virtual hard disk supporting the Hyper-V virtual hard disk (VHDX) and virtual hard disk (VHD) file formats, or any virtual file format with uniquely identifiable metadata. In response to receipt of the write data, the caching module provided for the first host server changes the metadata of the virtual hard disk files to a custom format before the virtual machine migrates from the first host server to a second host server, and the virtual machine becomes dirty as a result. When the dirty virtual machine sends a migration request to the second host server, a caching module provided for the second host server checks whether the custom format of the virtual hard disk files is identifiable.
    Type: Grant
    Filed: July 10, 2018
    Date of Patent: April 20, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Suresh Vishnoi, Kumar Vinod
  • Patent number: 10983825
    Abstract: This application provides a method for processing a process in a container. The method is used in a physical machine on which multiple containers are deployed and which includes a watchdog drive. The method includes: receiving, by the watchdog drive, a first operation instruction of a first container by using a device file (dev), where the first operation instruction includes a first process identification (PID), and the first PID indicates that the first operation instruction was delivered by a first process in the first container; determining, according to the first PID, a first namespace corresponding to the first container; and deleting all processes in the first container according to the first namespace.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: April 20, 2021
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yufang Du, Yangyang Jiang
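    A toy Python sketch of the driver-side flow described above: the PID carried in the operation instruction is resolved to the container's namespace, and every process registered in that namespace is terminated. The PID-to-namespace tables and the kill callback are illustrative stand-ins for kernel state.

      PID_TO_NAMESPACE = {101: "ns-container-1", 202: "ns-container-2"}   # hypothetical
      NAMESPACE_MEMBERS = {"ns-container-1": [101, 102, 103],
                           "ns-container-2": [202]}

      def handle_operation(pid, kill=lambda p: print("killing pid", p)):
          namespace = PID_TO_NAMESPACE[pid]      # which container issued the instruction
          for member in NAMESPACE_MEMBERS[namespace]:
              kill(member)                       # delete all processes in that container

      if __name__ == "__main__":
          handle_operation(101)   # removes 101, 102, 103 but leaves container 2 untouched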