Virtual Machine Memory Addressing Patents (Class 711/6)
  • Patent number: 12045155
    Abstract: The present disclosure involves systems, software, and computer implemented methods for efficient memory leak detection in database systems. One example method includes receiving a query at a database system. Memory allocations and deallocations are traced during processing of the query. Each memory allocation entry in a tracing file can be processed, including determining, for each allocation, whether a memory deallocation entry exists in the tracing file. A determination can be made as to whether a memory leak has occurred in response to determining whether a memory deallocation entry corresponding to a memory allocation entry exists in the tracing file. For example, a determination can be made that a memory leak has occurred in response to determining that no memory deallocation entry corresponding to an allocated memory address exists in the tracing file. One or more actions can be performed in response to determining that a memory leak has occurred.
    Type: Grant
    Filed: February 15, 2023
    Date of Patent: July 23, 2024
    Assignee: SAP SE
    Inventors: Yinghua Ouyang, Zhen Tian
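    A minimal sketch of the allocation/deallocation matching that the abstract above describes: every traced allocation without a corresponding deallocation entry is flagged as a possible leak. The trace format, names, and follow-up action are illustrative assumptions, not details from the patent.
      # Hypothetical trace entries recorded while the query was processed.
      def find_leaks(trace_entries):
          """Return addresses that were allocated but never deallocated."""
          outstanding = set()
          for kind, address in trace_entries:
              if kind == "alloc":
                  outstanding.add(address)
              elif kind == "free":
                  outstanding.discard(address)
          return outstanding

      trace = [("alloc", 0x1000), ("alloc", 0x2000), ("free", 0x1000)]
      leaked = find_leaks(trace)
      if leaked:
          print("possible memory leak at", [hex(a) for a in leaked])  # trigger an action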
  • Patent number: 12045240
    Abstract: One example method includes scanning a storage device to obtain data and metadata concerning operation of a computing system, analyzing the data and, based on the analyzing, deriving data groups that include some of the data, and deriving data relationships among some of the data, receiving, by an expert system, a query from a user, and the query specifies a sample object for the expert system to investigate, but the query does not indicate purpose of the user in submitting the query, analyzing the query, based on the data groups and data relationships, and based on the analyzing of the query, generating, by the expert system, query results that comprise a set of user-selectable investigation directions that relate to the sample object, and presenting, by the expert system, the set of user-selectable investigation directions to the user.
    Type: Grant
    Filed: September 16, 2021
    Date of Patent: July 23, 2024
    Assignee: EMC IP Holding Company LLC
    Inventors: Anand Rudrabhatla, Jehuda Shemer, Abhinav Duggal
  • Patent number: 12039201
    Abstract: Host and accelerator devices can be coupled using various interfaces, such as Compute Express Link (CXL). In an example, user applications can have protected access to a shared set of control parameters for different queues. A protocol can allow an application to use a unique memory page at the accelerator device through which the application can access control parameters, such as can be used to control memory-based communication queues or other queues. In an example, there can be multiple sets of control parameters in a single memory page. The protocol can allow views of the single memory page from respective different application processes. In an example, the protocol can include or use an access check to detect and handle unauthorized accesses to particular parameters.
    Type: Grant
    Filed: August 30, 2022
    Date of Patent: July 16, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Tony M. Brewer, Michael Keith Dugan
  • Patent number: 12013790
    Abstract: Embodiments of apparatuses, methods, and systems for unified address translation for virtualization of input/output devices are described. In an embodiment, an apparatus includes first circuitry to use at least an identifier of a device to locate a context entry and second circuitry to use at least a process address space identifier (PASID) to locate a PASID-entry. The context entry is to include at least one of a page-table pointer to a page-table translation structure and a PASID. The PASID-entry is to include at least one of a first-level page-table pointer to a first-level translation structure and a second-level page-table pointer to a second-level translation structure. The PASID is to be supplied by the device. At least one of the apparatus, the context entry, and the PASID entry is to include one or more control fields to indicate whether the first-level page-table pointer or the second-level page-table pointer is to be used.
    Type: Grant
    Filed: May 22, 2023
    Date of Patent: June 18, 2024
    Assignee: Intel Corporation
    Inventors: Utkarsh Y. Kakaiya, Sanjay Kumar, Rajesh M. Sankaran, Philip R. Lantz, Ashok Raj, Kun Tian
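    A schematic sketch of the lookup the abstract above describes: a device identifier selects a context entry, the device-supplied PASID selects a PASID entry, and a control field decides whether the first-level or second-level page-table pointer is used. Table layouts and field names are illustrative assumptions.
      # Hypothetical in-memory stand-ins for the hardware translation tables.
      context_table = {0x42: {"pasid_table": {5: {"use_first_level": True,
                                                  "first_level_ptr": 0x7000,
                                                  "second_level_ptr": 0x8000}}}}

      def locate_translation_root(device_id, pasid):
          """Device id -> context entry -> PASID entry -> selected page-table pointer."""
          pasid_entry = context_table[device_id]["pasid_table"][pasid]
          if pasid_entry["use_first_level"]:
              return pasid_entry["first_level_ptr"]
          return pasid_entry["second_level_ptr"]

      print(hex(locate_translation_root(0x42, 5)))  # 0x7000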
  • Patent number: 12007891
    Abstract: Technology for enabling a kernel to perform data deduplication on encrypted storage of a container. An example method may involve: enabling, by a kernel, a guest program of a container to access a first storage block of a first container and a second storage block of a second container; receiving, by the kernel from the guest program, an indication that the first storage block and the second storage block are duplicate storage blocks; and updating the first storage block or the second storage block to cause the duplicate storage blocks to reference a common storage location.
    Type: Grant
    Filed: January 24, 2022
    Date of Patent: June 11, 2024
    Assignee: Red Hat, Inc.
    Inventor: Michael Tsirkin
  • Patent number: 12001301
    Abstract: Hypervisor-independent reference copies of virtual machine payload data based on block-level pseudo-mount infrastructure and techniques are generated and stored in an illustrative data storage management system. An illustrative hypervisor-independent reference copy includes one or more virtual-machine payload data files that originated from a first virtual machine. The hypervisor-independent virtual-machine-payload reference copy is governed by a distinct reference copy policy that controls retention, storage, tiering, scheduling, etc. for the reference copy, independently of how the illustrative system treats other virtual machine payload data files originating from the same virtual machine.
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: June 4, 2024
    Assignee: Commvault Systems, Inc.
    Inventor: Vinit Dilip Dhatrak
  • Patent number: 12001303
    Abstract: A system can maintain a first data center that comprises a virtualized overlay network and virtualized volume identifiers. The system can determine to perform a restore of data of the first data center to a second data center, the data comprising first instances of virtualized workloads. The system can transfer the data to the second data center. The system can configure the second data center with the virtualized overlay network and the virtualized volume identifiers. The system can operate second instances of the virtualized workloads on the second data center, the second instances of the virtualized workloads invoking a second instance of the virtualized overlay network and a second instance of the virtualized volume identifiers.
    Type: Grant
    Filed: October 21, 2021
    Date of Patent: June 4, 2024
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Jehuda Shemer, Valerie Lotosh, Erez Sharvit
  • Patent number: 11979280
    Abstract: For a network control system that receives, from a user, logical datapath sets that logically express desired forwarding behaviors that are to be implemented by a set of managed switching elements, a controller for managing several managed switching elements that forward data in a network that includes the managed switching elements is described. The controller includes a set of modules for detecting a change in one or more managed switching elements and for updating the logical datapath set based on the detected change. The logical datapath set is for subsequent translation into a set of physical forwarding behaviors of the managed switching elements.
    Type: Grant
    Filed: September 30, 2018
    Date of Patent: May 7, 2024
    Assignee: Nicira, Inc.
    Inventors: Martin Casado, Teemu Koponen, W. Andrew Lambeth, Pankaj Thakkar
  • Patent number: 11960357
    Abstract: Techniques for migrating virtual machines (VMs) in the presence of uncorrectable memory errors are provided. According to one set of embodiments, a source host hypervisor of a source host system can determine, for each guest memory page of a VM to be migrated from the source host system to a destination host system, whether the guest memory page is impacted by an uncorrectable memory error in a byte-addressable memory of the source host system. If the source host hypervisor determines that the guest memory page is impacted, the source host hypervisor can transmit a data packet to a destination host hypervisor of the destination host system that includes error metadata identifying the guest memory page as being corrupted. Alternatively, if the source host hypervisor determines that the guest memory page is not impacted, the source host hypervisor can attempt to read the guest memory page from the byte-addressable memory in a memory exception-safe manner.
    Type: Grant
    Filed: April 24, 2023
    Date of Patent: April 16, 2024
    Assignee: VMware LLC
    Inventors: Sowgandh Sunil Gadi, Rajesh Venkatasubramanian, Venkata Subhash Reddy Peddamallu, Arunachalam Ramanathan, Timothy P. Mann, Frederick Joseph Jacobs
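    A minimal sketch of the per-page decision described above: pages hit by an uncorrectable memory error are reported as corrupted via error metadata, while healthy pages are read in an exception-safe way and copied. Function names and the packet format are illustrative assumptions.
      # Hypothetical per-page migration step on the source host.
      def migrate_page(page_number, poisoned_pages, read_page_safe, send):
          if page_number in poisoned_pages:
              # Tell the destination the page is corrupted instead of copying it.
              send({"page": page_number, "error": "uncorrectable-memory-error"})
              return
          data = read_page_safe(page_number)  # memory exception-safe read
          send({"page": page_number, "data": data})

      poisoned = {7}
      migrate_page(7, poisoned, lambda n: b"\x00" * 4096, print)
      migrate_page(8, poisoned, lambda n: b"\x00" * 4096,
                   lambda pkt: print("copied", len(pkt["data"]), "bytes"))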
  • Patent number: 11947991
    Abstract: A disclosed example includes accessing, by a backend block service driver in an input/output virtual machine executing on one or more processors, a first command submitted to a buffer by a paravirtualized input/output frontend block driver executing in a guest virtual machine; generating, by the backend block service driver, a translated command based on the first command by translating a virtual parameter of the first command to a physical parameter associated with a physical resource; submitting, by the backend block service driver, the translated command to an input/output queue to be processed by the physical resource based on the physical parameter; and submitting, by the backend block service driver, a completion status entry to the buffer, the completion status entry indicative of completion of a direct memory access operation that copies data between the physical resource and a guest memory buffer corresponding to the guest virtual machine.
    Type: Grant
    Filed: June 21, 2022
    Date of Patent: April 2, 2024
    Assignee: Intel Corporation
    Inventors: Yao Zu Dong, Yuankai Guo, Haozhong Zhang, Kun Tian
  • Patent number: 11907135
    Abstract: To increase the speed with which a Second Layer Address Table (SLAT) is traversed, memory having the same access permissions is contiguously arranged such that one or more hierarchical levels of the SLAT need not be referenced, thereby resulting in more efficient SLAT traversal. “Slabs” of memory are established whose memory range is sufficiently large that reference to a hierarchically lower level table can be skipped and a hierarchically higher level table's entries can directly identify relevant memory addresses. Such slabs are aligned to avoid smaller intermediate memory ranges. The loading of code or data into memory is performed based on a next available memory location within a slab having equivalent access permissions, or, if such a slab is not available, or if an existing slab does not have a sufficient quantity of available memory remaining, a new slab with the proper access permissions is established.
    Type: Grant
    Filed: February 6, 2023
    Date of Patent: February 20, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yevgeniy Bak, Mehmet Iyigun, Jonathan E. Lange
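    A minimal sketch of the slab placement policy described above: loads go to the next free spot in a slab with matching access permissions, and a new, slab-aligned slab is opened when none fits. The slab size and data structures are illustrative assumptions.
      SLAB_SIZE = 2 * 1024 * 1024  # large enough to be covered by one higher-level entry

      slabs = {}      # permissions -> {"base": address, "used": bytes}
      next_base = 0

      def place(size, permissions):
          """Return an address inside a slab whose pages share the same permissions."""
          global next_base
          slab = slabs.get(permissions)
          if slab is None or slab["used"] + size > SLAB_SIZE:
              slab = {"base": next_base, "used": 0}   # open a new, aligned slab
              next_base += SLAB_SIZE
              slabs[permissions] = slab
          address = slab["base"] + slab["used"]
          slab["used"] += size
          return address

      print(place(4096, "r-x"), place(4096, "r-x"), place(4096, "rw-"))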
  • Patent number: 11907115
    Abstract: A system includes a memory, a processor in communication with the memory, a hypervisor, and a guest OS. The guest OS is configured to store a plurality of hints in a list at a memory location. Each hint includes an address value and the memory location of the list is included in one of the respective address values associated with the plurality of hints. The guest OS is also configured to pass the list to the hypervisor. Each address value points to a respective memory page of a plurality of memory pages including a first memory page and a last memory page. The hypervisor is configured to free the first memory page pointed to by a first hint of the plurality of hints and free the last memory page pointed to by a second hint of the plurality of hints. Additionally, the last memory page includes the list.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: February 20, 2024
    Assignee: RED HAT, INC.
    Inventor: Michael Tsirkin
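    A minimal sketch of the hinting flow described above: the guest passes a list of page addresses, the list itself lives in the last hinted page, and the hypervisor frees that page last. Names and the free-page callback are illustrative assumptions.
      def free_hinted_pages(hints, free_page):
          """Free every hinted page, leaving the page that holds the list for last."""
          for address in hints[:-1]:
              free_page(address)
          free_page(hints[-1])  # the final hint points at the page containing the list

      hints = [0x1000, 0x2000, 0x3000]   # 0x3000 is where the guest stored the list
      free_hinted_pages(hints, lambda a: print("freed page", hex(a)))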
  • Patent number: 11893408
    Abstract: A system includes a guest memory having guest physical pages (“GPPs”) that includes loan pages having a fixed quantity, a host memory, a processor in communication with the memory, and a virtual machine monitor (“VMM”). The VMM is configured to track a respective state (inflated or deflated) for each respective GPP. Additionally, the VMM is configured to track a respective status (in-use or unused) of each loan page, determine that each respective loan page is in-use, un-assign a first loan page from a corresponding GPP, discard the first loan page thereby changing the first loan page from in-use to unused, and assign the unused first loan page to a first GPP that is inflated, such that the first loan page's status updates to in-use. Each respective GPP having an inflated state is temporarily backed by the fixed quantity of loan pages.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: February 6, 2024
    Assignee: Red Hat, Inc.
    Inventor: David Hildenbrand
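    A minimal sketch of the loan-page accounting described above: a fixed pool of loan pages temporarily backs inflated guest physical pages, and an in-use loan page can be discarded and reassigned when the pool is exhausted. Pool size and names are illustrative assumptions.
      loan_pool = {"loan0": None, "loan1": None}   # loan page -> backed guest page
      guest_state = {}                             # guest page -> "inflated"/"deflated"

      def back_inflated_page(guest_page):
          guest_state[guest_page] = "inflated"
          for loan_page, backed in loan_pool.items():
              if backed is None:                   # an unused loan page is available
                  loan_pool[loan_page] = guest_page
                  return loan_page
          victim = next(iter(loan_pool))           # all in use: discard and reassign one
          loan_pool[victim] = guest_page
          return victim

      print(back_inflated_page("G1"), back_inflated_page("G2"), back_inflated_page("G3"))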
  • Patent number: 11886457
    Abstract: A transform-by-pattern (TBP) system is configured to proactively suggest relevant TBP programs based on an inputted source dataset and target dataset without requiring users to type in examples. The TBP system has access to multiple TBP programs, each of which includes a combination of a source pattern, a target pattern, and a transformation program that is configured to transform data that fits into the source pattern into data that fits into the target pattern. When a source dataset and a target dataset are received from a user, the TBP system identifies a subset of the source dataset and a subset of the target dataset as related data. The TBP system then identifies one or more applicable TBP programs amongst the multiple TBP programs, and suggests or applies at least one of the one or more applicable TBP programs.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: January 30, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yeye He, Surajit Chaudhuri, Zhongjun Jin
  • Patent number: 11861401
    Abstract: A neural processing device and a method for job scheduling are provided. The neural processing device is configured to receive, by an address space ID (ASID) manager, first and second requests from at least one context, respectively, and determine whether ASIDs are allocated, store jobs of contexts to which the ASIDs have not been allocated from the ASID manager in entities, schedule, by a job scheduler, an execution order of the jobs stored in the entities and cause the ASID manager to allocate the ASIDs to the contexts to which the ASIDs have not been allocated among the at least one context, and sequentially receive, by a command queue, jobs of contexts to which the ASIDs have been allocated, store the jobs as standby jobs, and sequentially execute the standby jobs.
    Type: Grant
    Filed: May 4, 2023
    Date of Patent: January 2, 2024
    Assignee: Rebellions Inc.
    Inventor: Seokju Yoon
  • Patent number: 11861414
    Abstract: Techniques are disclosed for implementing system calls in a virtualized computing environment. An interface is configured to abstract partitions in the virtualized computing environment. A system call is received that is to be executed across a system boundary in a localized computing environment. Based on a declarative policy, one or more of a device type, device path, or process identity associated with the system call is determined. The system call is executed in the virtualized computing environment.
    Type: Grant
    Filed: January 28, 2022
    Date of Patent: January 2, 2024
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Gerardo Diaz-Cuellar, Poornananda R. Gaddehosur, Vance P. O'Neill
  • Patent number: 11853227
    Abstract: There is provided a data processing apparatus and method of data processing. The data processing apparatus comprises storage circuitry to store a hierarchy of page tables comprising an intermediate level page table. Each entry of the intermediate level page table comprises base address information of a next level page table and control information indicating whether an addressing function has been applied to reorder physical storage locations of entries of the next level page table. Address translation circuitry is provided to perform address translations in response to receipt of a virtual address by performing a lookup in a next level page table dependent on the base address information and a page table index from the virtual address. When the control information indicates that the addressing function has been applied, the lookup is performed at a modified storage location generated by applying the addressing function to the page table index.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: December 26, 2023
    Assignee: Arm Limited
    Inventors: Charles Andrew Giefer, Alexander Donald Charles Chadwick
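    A minimal sketch of the lookup step described above: when the intermediate-level entry's control information says the next level has been reordered, the page-table index is passed through the addressing function before the next-level entry is read. The XOR-based function and entry layout are illustrative assumptions.
      def addressing_function(index):
          return index ^ 0x1F            # stand-in for the reordering function

      def read_next_level(entry, page_table_index, read_entry):
          index = page_table_index
          if entry["reordered"]:         # control information in the intermediate entry
              index = addressing_function(index)
          return read_entry(entry["base_address"] + index)

      table = {i: "pte%d" % i for i in range(512)}
      entry = {"base_address": 0, "reordered": True}
      print(read_next_level(entry, 3, lambda location: table[location]))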
  • Patent number: 11853226
    Abstract: An apparatus has an address translation cache (12, 16) having a number of cache entries (40) for storing address translation data which depends on one or more page table entries of page tables. Control circuitry (50) is responsive to an invalidation request specifying address information to perform an invalidation lookup operation to identify at least one target cache entry to be invalidated. The target cache entry is an entry for which the corresponding address translation data depends on at least one target page table entry corresponding to the address information. The control circuitry (50) selects one of a number of invalidation lookup modes to use for the invalidation lookup operation in dependence on page size information indicating the page size of the target page table entry. The different invalidation lookup modes correspond to different ways of identifying the target cache entry based on the address information.
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: December 26, 2023
    Assignee: Arm Limited
    Inventor: Andrew Brookfield Swaine
  • Patent number: 11836091
    Abstract: A processor supports secure memory access in a virtualized computing environment by employing requestor identifiers at bus devices (such as a graphics processing unit) to identify the virtual machine associated with each memory access request. The virtualized computing environment uses the requestor identifiers to control access to different regions of system memory, ensuring that each VM accesses only those regions of memory that the VM is allowed to access. The virtualized computing environment thereby supports efficient memory access by the bus devices while ensuring that the different regions of memory are protected from unauthorized access.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: December 5, 2023
    Assignees: Advanced Micro Devices, Inc., ATI TECHNOLOGIES ULC
    Inventors: Anthony Asaro, Jeffrey G. Cheng, Anirudh R. Acharya
  • Patent number: 11838295
    Abstract: Representative embodiments of operating a secured device requiring user authentication include receiving a request from a user for operating the device without prior authentication; granting the user temporary access to the device in accordance with a security policy that specifies a predetermined time interval and/or a predetermined number of device operations within which authentication must occur to continue at least some operations of the device; computationally storing an audit trail identifying the temporary access and actions performed during the temporary access; and upon determining that authentication has not been provided within the predetermined time interval or number of device operations, preventing at least some operations of the device and updating the audit trail to specify expiration of the temporary access.
    Type: Grant
    Filed: April 6, 2022
    Date of Patent: December 5, 2023
    Assignee: Imprivata, Inc.
    Inventor: Meinhard Dieter Ullrich
  • Patent number: 11822526
    Abstract: Systems, methods, and machine-readable media to migrate data from source databases to target databases are disclosed. Data may be received, relating to the source databases and the target databases. For each source database, a migration assessment may be generated based on analyzing the data, and a migration method may be selected. A migration plan that specifies a parallel migration of a set of databases to the target databases may be created, with a first migration method to migrate a first subset of the set of databases and a second migration method to migrate a second subset of the set of databases. Execution of the parallel migration according to the migration plan may be caused so that the first subset of the set of databases is migrated with the first migration method while the second subset of the set of databases is migrated with the second migration method.
    Type: Grant
    Filed: January 20, 2022
    Date of Patent: November 21, 2023
    Assignee: Oracle International Corporation
    Inventors: Stephan Buehne, Elmar Spiegelberg
  • Patent number: 11816070
    Abstract: An example method for filesystem pass-through on lightweight virtual machine containers includes executing a container on a host, and creating a file system overlay in a local file system storage located on the host. The example method further includes copying files and directories into the file system overlay from a shared file system until the file system overlay is fully populated. The file system overlay is fully populated when all the files and directories from the shared file system are copied into the file system overlay. Once fully populated, completion is marked, which indicates the file system overlay is fully populated, where marking the completion prevents accessing a read-only base image within the shared file system.
    Type: Grant
    Filed: April 11, 2022
    Date of Patent: November 14, 2023
    Assignee: Red Hat, Inc.
    Inventors: Sage Weil, Vincent Batts
  • Patent number: 11797327
    Abstract: A technique is described for managing processor (CPU) resources in a host having virtual machines (VMs) executed thereon. A target size of a VM is determined based on its demand and CPU entitlement. If the VM's current size exceeds the target size, the technique dynamically changes the size of a VM in the host by increasing or decreasing the number of virtual CPUs available to the VM. To "deactivate" virtual CPUs, a high-priority balloon thread is launched and pinned to one of the virtual CPUs targeted for deactivation, and the underlying hypervisor deschedules execution of the virtual CPU accordingly. To "activate" virtual CPUs, the launched balloon thread may be killed.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: October 24, 2023
    Assignee: VMware, Inc.
    Inventor: Haoqiang Zheng
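    A minimal sketch of the balloon-thread mechanism described above, modeled with ordinary threads: deactivating a virtual CPU pins a balloon thread to it so the hypervisor can deschedule it, and killing the thread reactivates the virtual CPU. Thread priority and pinning are omitted; names are illustrative assumptions.
      import threading

      balloon_threads = {}

      def deactivate_vcpu(vcpu_id):
          stop = threading.Event()
          thread = threading.Thread(target=stop.wait, name="balloon-vcpu%d" % vcpu_id)
          balloon_threads[vcpu_id] = (thread, stop)
          thread.start()                 # stands in for a pinned, high-priority thread

      def activate_vcpu(vcpu_id):
          thread, stop = balloon_threads.pop(vcpu_id)
          stop.set()                     # "kill" the balloon thread
          thread.join()

      deactivate_vcpu(2)
      activate_vcpu(2)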
  • Patent number: 11789633
    Abstract: In some examples, collaborative learning-based cloud migration implementation may include identifying a migration agent that is to perform an application migration from a first cloud environment to a second cloud environment, and identifying a plurality of additional migration agents. A technical context and a migration flow context may be determined for the migration agent and for the plurality of additional migration agents. Executed allowed and error-response migration actions may be identified for states that are similar to a current state of the application migration, and a similarity between the migration agent and each of the migration agents that executed the allowed and error-response migration actions may be determined. A migration action that is to be performed may be identified based on a maximum relevance associated with the allowed and error-response migration actions. The identified migration action may be executed by the migration agent to perform the application migration.
    Type: Grant
    Filed: May 4, 2022
    Date of Patent: October 17, 2023
    Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
    Inventors: Janardan Misra, Sanjay Mittal, Ravi Kiran Velama
  • Patent number: 11782745
    Abstract: Systems, methods, computer readable media and articles of manufacture consistent with innovations herein are directed to computer virtualization, computer security and/or hypervisor fingerprinting. According to some illustrative implementations, innovations herein may utilize and/or involve a separation kernel hypervisor which may include the use of a guest operating system virtual machine protection domain, a virtualization assistance layer, and/or a CPU ID instruction handler (which may be proximate in temporal and/or spatial locality to malicious code, but isolated from it). The CPU ID instruction handler may perform processing, inter alia, to return configurable values different from the actual values for the physical hardware. The virtualization assistance layer may further contain virtual devices, which when probed by guest operating system code, return the same values as their physical counterparts.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: October 10, 2023
    Assignee: Lynx Software Technologies, Inc.
    Inventor: Edward T. Mooring
  • Patent number: 11776598
    Abstract: An embodiment provides a data processing circuit and a device. The circuit includes: a first bank group 201 and a second bank group 202; a write circuit 203; and a read circuit 204. The write circuit 203 includes a write input cache circuit 2031, and is configured to: receive stored data from a write bus 206 through the write input cache circuit 2031, write the stored data into the first bank group 201 through a first read-write bus 207, and write the stored data into the second bank group 202 through a second read-write bus 208. The read circuit 204 includes a read output cache circuit 2041, and is configured to: read the stored data from the first bank group 201 through the first read-write bus 207, and read the stored data from the second bank group 202 through the second read-write bus 208.
    Type: Grant
    Filed: August 23, 2021
    Date of Patent: October 3, 2023
    Assignee: CHANGXIN MEMORY TECHNOLOGIES, INC.
    Inventor: Zequn Huang
  • Patent number: 11762695
    Abstract: Transparent memory management for over-subscribed accelerators is disclosed. A request from a remote initiator to execute a workload on a shared accelerator is received at a host system comprising the shared accelerator. A determination is made that there is insufficient physical memory of the accelerator to accommodate the request from the remote initiator. Responsive to determining that there is insufficient physical memory of the accelerator, an allocation of host system memory is requested for the remote initiator from the host system. A mapping between the remote initiator and the allocation of host system memory is then created.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: September 19, 2023
    Assignee: LENOVO ENTERPRISE SOLUTIONS (SINGAPORE) PTE. LTD.
    Inventors: Fred A. Bower, III, Caihong Zhang
  • Patent number: 11736566
    Abstract: Some embodiments provide a method of providing distributed storage services to a host computer from a network interface card (NIC) of the host computer. At the NIC, the method accesses a set of one or more external storages operating outside of the host computer through a shared port of the NIC that is not only used to access the set of external storages but also for forwarding packets not related to an external storage. In some embodiments, the method accesses the external storage set by using a network fabric storage driver that employs a network fabric storage protocol to access the external storage set. The method presents the external storage as a local storage of the host computer to a set of programs executing on the host computer. In some embodiments, the method presents the local storage by using a storage emulation layer on the NIC to create a local storage construct that presents the set of external storages as a local storage of the host computer.
    Type: Grant
    Filed: January 9, 2021
    Date of Patent: August 22, 2023
    Assignee: VMWARE, INC.
    Inventors: Shoby A. Cherian, Anjaneya P. Gondi, Janakiram Vantipalli, Raghavendra Subbarao Narahari Venkata, Vamshi Tangudu
  • Patent number: 11733902
    Abstract: Local memory and disaggregated memory may be identified and monitored for integrating disaggregated memory in a computing system. Candidate data may be migrated between the local memory and disaggregated memory to optimize allocation of disaggregated memory and migrated data according to a dynamic set of migration criteria.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: August 22, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Panagiotis Koutsovasilis, Michele Gazzetti, Christian Pinto
  • Patent number: 11734049
    Abstract: Apparatuses and methods related to managing regions of memory are described. Managing regions can include verifying whether an access command is authorized to access a particular region of a memory array, which may have some regions that have rules or restrictions governing access (e.g., so-called “protected regions”). The authorization can be verified utilizing a key and a memory address corresponding to the access command. If an access command is authorized to access a region, then a row of the memory array corresponding to the access command can be activated. If an access command is not authorized to access the region, then a row of the memory array corresponding to the access command may not be activated.
    Type: Grant
    Filed: September 20, 2021
    Date of Patent: August 22, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Brent Keeth, Naveh Malihi
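    A minimal sketch of the access check described above: the key and address carried by an access command are verified against a protected-region table before the corresponding row is activated. Region layout and key handling are illustrative assumptions.
      protected_regions = [{"start": 0x1000, "end": 0x2000, "key": "region-key"}]

      def authorize_and_activate(address, key, activate_row):
          """Activate the addressed row only if the command is authorized for its region."""
          for region in protected_regions:
              if region["start"] <= address < region["end"] and key != region["key"]:
                  return False           # unauthorized access: the row is not activated
          activate_row(address)
          return True

      print(authorize_and_activate(0x1800, "region-key", lambda a: None))  # True
      print(authorize_and_activate(0x1800, "wrong-key", lambda a: None))   # False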
  • Patent number: 11720384
    Abstract: A method is provided in a data processing system having second level address translation (SLAT) controlled by a hypervisor. In the method, hashes of all memory pages accessible by a guest OS are stored (set S). Also, hashes of all memory pages previously accessed by the guest OS are stored (set T). When the guest OS attempts an access to a memory page having executable code for which it does not have permission, an exception is generated. A hash of the memory page is compared with the hashes of set T and set S. If there is not a match within set T, then the guest OS has never attempted the requested operation before and suspicious behavior is reported. If there is not a match within set S, the requested operation is reported as illegal. In another embodiment, the memory page may be encrypted to prevent the guest OS from reading the memory page.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: August 8, 2023
    Assignee: NXP B.V.
    Inventor: Jan Hoogerbrugge
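    A minimal sketch of the exception handling described above: the faulting page's hash is checked against set T (pages the guest accessed before) and set S (pages the guest may access). The hash function and report strings are illustrative assumptions.
      import hashlib

      def page_hash(page_bytes):
          return hashlib.sha256(page_bytes).hexdigest()

      def handle_execute_fault(page_bytes, set_s, set_t):
          digest = page_hash(page_bytes)
          if digest not in set_t:
              return "suspicious: this access was never attempted before"
          if digest not in set_s:
              return "illegal: page is outside the guest's permitted set"
          return "allowed"

      page = b"\x90" * 4096
      print(handle_execute_fault(page, {page_hash(page)}, set()))  # suspicious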
  • Patent number: 11669445
    Abstract: A method performed by a slave device to obtain a host memory address includes: inquiring a description list to obtain information of an allocated memory of a host; dividing the allocated memory into N storage spaces according to the information; using a first memory space of the N storage spaces to store a first level look-up table indicating physical addresses of the N storage spaces; dividing the first memory space into M storage spaces; storing a second level look-up table in the slave device to indicate physical addresses of the M storage spaces; inquiring the second level look-up table according to a logical address and obtaining a first index indicating a physical address of one of the M storage spaces; and inquiring the first level look-up table according to the first index and obtaining a second index indicating a physical address of one of the N storage spaces.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: June 6, 2023
    Assignee: REALTEK SEMICONDUCTOR CORPORATION
    Inventors: Shi-Yao Zhao, Dao-Fu Wang, Yong-Peng Jing
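    A minimal sketch of the two-step lookup described above: a device-resident second-level table maps a logical address to an index, and a host-resident first-level table maps that index to the physical address of one of the N storage spaces. Sizes, shift values, and names are illustrative assumptions.
      N = 8
      first_level = {i: 0x100000 * (i + 1) for i in range(N)}  # held in host memory
      second_level = {0: 3, 1: 5}                              # held on the slave device

      def resolve(logical_address):
          first_index = second_level[logical_address >> 20]    # pick a storage space
          space_base = first_level[first_index]
          return space_base + (logical_address & 0xFFFFF)      # offset within the space

      print(hex(resolve(0x100010)))  # lands in storage space 5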
  • Patent number: 11656916
    Abstract: Techniques for scalable virtualization of an Input/Output (I/O) device are described. An electronic device composes a virtual device comprising one or more assignable interface (AI) instances of a plurality of AI instances of a hosting function exposed by the I/O device. The electronic device emulates device resources of the I/O device via the virtual device. The electronic device intercepts a request from the guest pertaining to the virtual device, and determines whether the request from the guest is a fast-path operation to be passed directly to one of the one or more AI instances of the I/O device or a slow-path operation that is to be at least partially serviced via software executed by the electronic device. For a slow-path operation, the electronic device services the request at least partially via the software executed by the electronic device.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: May 23, 2023
    Assignee: Intel Corporation
    Inventors: Utkarsh Y. Kakaiya, Rajesh Sankaran, Sanjay Kumar, Kun Tian, Philip Lantz
  • Patent number: 11656981
    Abstract: Methods and systems related to memory reduction in a system by oversubscribing physical memory shared among compute entities are provided. A portion of the memory includes a combination of a portion of a first physical memory of a first type and a logical pooled memory associated with the system. A logical pooled memory controller is configured to: (1) track both a status of whether a page of the logical pooled memory allocated to any of the plurality of compute entities is a known-pattern page and a relationship between logical memory addresses and physical memory addresses associated with any allocated logical pooled memory, and (2) allow the write operation to write data to any available space in the second physical memory of the first type only up to an extent of physical memory that corresponds to the portion of the logical pooled memory previously allocated to the compute entity.
    Type: Grant
    Filed: August 4, 2022
    Date of Patent: May 23, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Monish Shantilal Shah, Lisa Ru-Feng Hsu, Daniel Sebastian Berger
  • Patent number: 11651085
    Abstract: A processor executes an untrusted VMM that manages execution of a guest workload. The processor also populates an entry in a memory ownership table for the guest workload. The memory ownership table is indexed by an original hardware physical address, the entry comprises an expected guest address that corresponds to the original hardware physical address, and the entry is encrypted with a key domain key. In response to receiving a request from the guest workload to access memory using a requested guest address, the processor (a) obtains, from the untrusted VMM, a hardware physical address that corresponds to the requested guest address; (b) uses that physical address as an index to find an entry in the memory ownership table; and (c) verifies whether the expected guest address from the found entry matches the requested guest address. Other embodiments are described and claimed.
    Type: Grant
    Filed: July 21, 2020
    Date of Patent: May 16, 2023
    Assignee: Intel Corporation
    Inventors: David M. Durham, Siddhartha Chhabra, Ravi L. Sahita, Barry E. Huntley, Gilbert Neiger, Gideon Gerzon, Baiju V. Patel
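    A minimal sketch of the ownership check described above: the hardware physical address supplied by the untrusted VMM indexes the memory ownership table, and the stored expected guest address must match the address the guest actually requested. Decryption with the key domain key is noted only as a comment; names are illustrative assumptions.
      ownership_table = {0x9000: {"expected_guest_address": 0x4000}}

      def verify_mapping(requested_guest_address, vmm_lookup):
          hardware_physical = vmm_lookup(requested_guest_address)  # from the untrusted VMM
          entry = ownership_table.get(hardware_physical)
          if entry is None:
              return False
          # In hardware, the entry would first be decrypted with the key domain key.
          return entry["expected_guest_address"] == requested_guest_address

      print(verify_mapping(0x4000, lambda guest: 0x9000))  # True: mapping is consistent
      print(verify_mapping(0x5000, lambda guest: 0x9000))  # False: VMM remapped the page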
  • Patent number: 11640361
    Abstract: According to one or more embodiments of the present invention, a computer implemented method includes receiving a secure access request for a secure page of memory at a secure interface control of a computer system. The secure interface control can check a disable virtual address compare state associated with the secure page. The secure interface control can disable a virtual address check in accessing the secure page to support mapping of a plurality of virtual addresses to a same absolute address to the secure page based on the disable virtual address compare state being set and/or to support secure pages that are accessed using an absolute address and do not have an associated virtual address.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: May 2, 2023
    Assignee: International Business Machines Corporation
    Inventors: Fadi Y. Busaba, Lisa Cranton Heller, Jonathan D. Bradbury
  • Patent number: 11640318
    Abstract: Logic includes a task builder for building tasks comprising data items, a task scheduler for scheduling tasks for processing by a parallel processor, a data store arranged to map content of each data item to an item ID, and a linked-list RAM comprising an entry for each item ID. For each new data item, the task builder creates a new task by starting a new linked list, or adds the data item to an existing linked list. In each linked list, the entry for each data item records a pointer to a next item ID in the list. The task builder indicates when any of the tasks is ready for scheduling. The task scheduler identifies a ready task based on the indication from the task builder, and in response follows the pointers in the respective linked list in order to schedule the data items of the task for processing.
    Type: Grant
    Filed: September 30, 2021
    Date of Patent: May 2, 2023
    Assignee: Imagination Technologies Limited
    Inventors: Alistair Goudie, Panagiotis Velentzas
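    A minimal sketch of the task-building structure described above: item content is mapped to an item ID, each task is a linked list threaded through a per-item "next" table, and scheduling a ready task walks that list. Names and the ready condition are illustrative assumptions.
      item_ids = {}    # data item content -> item id        (the data store)
      next_item = {}   # item id -> next item id in its task (the linked-list RAM)
      tasks = {}       # task id -> [head item id, tail item id]

      def add_item(task_id, content):
          item_id = item_ids.setdefault(content, len(item_ids))
          next_item.setdefault(item_id, None)
          if task_id not in tasks:
              tasks[task_id] = [item_id, item_id]     # start a new linked list
          else:
              head, tail = tasks[task_id]
              next_item[tail] = item_id               # append to the existing list
              tasks[task_id][1] = item_id

      def schedule(task_id):
          item = tasks[task_id][0]
          while item is not None:                     # follow the pointers
              print("process item", item)
              item = next_item[item]

      for content in ("a", "b", "c"):
          add_item("task0", content)
      schedule("task0")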
  • Patent number: 11625505
    Abstract: The disclosed technology is generally directed to network security for processors. In one example of the technology, a device includes: hardware, including a network interface; a memory; and a processor. The memory is adapted to store run-time data for the device. The memory includes at least a first memory region and a second memory region. The processor is adapted to execute processor-executable code including a first binary in the first memory region and a second binary in the second memory region. The first binary includes at least one application and a kernel. The kernel is configured to control the hardware. The second binary is configured to operate, upon execution, as a network stack. The device is configured such that the first memory region is protected such that the first memory region is inaccessible to the second binary.
    Type: Grant
    Filed: August 19, 2019
    Date of Patent: April 11, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mark Russinovich, Galen Clyde Hunt
  • Patent number: 11620079
    Abstract: A storage system includes a controller and block-based storage device(s) that include LSA blocks arranged as a log structured array (LSA). The controller creates a first and second LUN, each including LBA locations. The controller assigns the first LUN to a first host and the second LUN to a second host, accumulates first host data associated with a first LBA location of the first LUN, writes a block-size worth of such data to a first LSA block, and maps the first LBA location to the first LSA block. In response to receiving an ODX Copy Offload instruction, the controller determines the first host data should be migrated to a target LBA location in the second LUN, determines the first and second LUN are exclusively mappable to the same LSA, maps the target LBA location to the first LSA block, and unmaps the first LBA location from the first LSA block.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: April 4, 2023
    Assignee: International Business Machines Corporation
    Inventors: Wendy Lyn Henson, Robert E. Jose, Kushal S. Patel, Sarvesh S. Patel
  • Patent number: 11609860
    Abstract: In various embodiments, a computing system includes, for example, a plurality of processing units that share access to a system cache. A cache management application receives, for example, resource savings information for each processing unit. The resource savings information indicates, for example, amounts of a resource (e.g., power) that are saved when different units of the system cache are allocated to a processing unit. The cache management application determines, for example, the number of units of system cache to allocate to each processing unit based on the received resource savings information.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: March 21, 2023
    Assignee: NVIDIA CORPORATION
    Inventor: Arnab Banerjee
  • Patent number: 11604669
    Abstract: Systems and methods are provided for efficiently configuring an execution environment for an on-demand code execution system to handle a single request (or session) for a single user. Once the session or request is complete, the execution environment is reset, such as by having the hardware processor state, memory, and storage reset. In particular, prior to the execution of code, state of the execution environment of the host computing device is retrieved, such as hardware processor(s), memory, and/or storage state. Moreover, during execution of the code instructions, intermediate state can be gathered. Following the execution of the code, the execution environment is reset based on the saved state related to the hardware processor(s), memory, and/or storage. A subsequent code execution securely occurs in the execution environment and the execution environment is reset again, and so forth.
    Type: Grant
    Filed: February 5, 2020
    Date of Patent: March 14, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Marc Brooker, Mikhail Danilov, Osman Surkatty, Tao Chen
  • Patent number: 11599435
    Abstract: A failure analysis system identifies a root cause of a failure (or other health issue) in a virtualized computing environment and provides a recommendation for remediation. The failure analysis system uses a model-based reasoning (MBR) approach that involves building a model describing the relationships/dependencies of elements in the various layers of the virtualized computing environment, and the model is used by an inference engine to generate facts and rules for reasoning to identify an element in the virtualized computing environment that is causing the failure. Then, the failure analysis system uses a decision tree analysis (DTA) approach to perform a deep diagnosis of the element, by traversing a decision tree that was generated by combining the rules for reasoning provided by the MBR approach, in conjunction with examining data collected by health monitors. The result of the DTA approach is then used to generate the recommendation for remediation.
    Type: Grant
    Filed: August 14, 2019
    Date of Patent: March 7, 2023
    Assignee: VMWARE, INC.
    Inventors: Yu Wu, Yang Yang, Xiang Yu, Wenguang Wang, Jin Feng
  • Patent number: 11593186
    Abstract: A technique is introduced for applying multi-level caching to deploy various types of physical memory to service captured memory calls from an application. The various types of physical memory can include local volatile memory (e.g., dynamic random-access memory), local persistent memory, and/or remote persistent memory. In an example embodiment, a user-space page fault notification mechanism is used to defer assignment of actual physical memory resources until a memory buffer is accessed by the application. After populating a selected physical memory in response to an initial user-space page fault notification, page access information can be monitored to determine which pages continue to be accessed and which pages are inactive to identify candidates for eviction.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: February 28, 2023
    Assignee: MEMVERGE, INC.
    Inventors: Ronald S. Niles, Yue Li
  • Patent number: 11580019
    Abstract: Techniques for computer memory management are disclosed herein. In one embodiment, a method includes in response to receiving a request for allocation of memory, determining whether the request is for allocation from a first memory region or a second memory region of the physical memory. The first memory region has first memory subregions of a first size and the second memory region has second memory subregions of a second size larger than the first size of the first memory region. The method further includes in response to determining that the request for allocation of memory is for allocation from the first or second memory region, allocating a portion of the first or second multiple memory subregions of the first or second memory region, respectively, in response to the request.
    Type: Grant
    Filed: April 17, 2020
    Date of Patent: February 14, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yevgeniy M. Bak, Kevin Michael Broas, David Alan Hepkin, Landy Wang, Mehmet Iyigun, Brandon Alec Allsop, Arun U. Kishan
  • Patent number: 11573906
    Abstract: To increase the speed with which a Second Layer Address Table (SLAT) is traversed, memory having the same access permissions is contiguously arranged such that one or more hierarchical levels of the SLAT need not be referenced, thereby resulting in more efficient SLAT traversal. “Slabs” of memory are established whose memory range is sufficiently large that reference to a hierarchically lower level table can be skipped and a hierarchically higher level table's entries can directly identify relevant memory addresses. Such slabs are aligned to avoid smaller intermediate memory ranges. The loading of code or data into memory is performed based on a next available memory location within a slab having equivalent access permissions, or, if such a slab is not available, or if an existing slab does not have a sufficient quantity of available memory remaining, a new slab with the proper access permissions is established.
    Type: Grant
    Filed: January 25, 2021
    Date of Patent: February 7, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yevgeniy Bak, Mehmet Iyigun, Jonathan E. Lange
  • Patent number: 11561894
    Abstract: Techniques for enabling efficient guest OS access to PCIe configuration space are provided. In one set of embodiments, a hypervisor can reserve a single host physical memory page in the host physical memory of a host system and can populate the single host physical memory page with a value indicating non-presence of PCIe device functions. The hypervisor can then create, for each guest physical memory page in a guest physical memory of a virtual machine (VM) corresponding to a PCIe configuration space of an absent PCIe device function in the VM, a mapping in the hypervisor's second-level page tables that maps the guest physical memory page to the single host physical memory page.
    Type: Grant
    Filed: January 6, 2021
    Date of Patent: January 24, 2023
    Assignee: VMware, Inc.
    Inventors: Andrei Warkentin, Alexander Fainkichen, Ye Li, Regis Duchesne, Cyprien Laplace, Shruthi Hiriyuru, Sunil Kotian
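    A minimal sketch of the mapping described above: a single reserved host page filled with a "no device present" pattern backs every guest page that corresponds to the configuration space of an absent PCIe function. The page contents and table representation are illustrative assumptions.
      ABSENT_CONFIG_PAGE = b"\xff" * 4096   # reads return all-ones: no function present

      second_level_page_table = {}          # guest page number -> host page

      def map_absent_function(guest_page_number):
          # Every absent function's config space shares the one reserved host page.
          second_level_page_table[guest_page_number] = "reserved_absent_config_page"

      for gpn in (0x100, 0x101, 0x102):
          map_absent_function(gpn)
      print(second_level_page_table)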
  • Patent number: 11558311
    Abstract: At a first compute instance run on a virtualization host, a local instance scaling manager is launched. The scaling manager determines, based on metrics collected at the host, that a triggering condition for redistributing one or more types of resources of the first compute instance has been met. The scaling manager causes virtualization management components to allocate a subset of the first compute instance's resources to a second compute instance at the host.
    Type: Grant
    Filed: January 8, 2020
    Date of Patent: January 17, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Andra-Irina Paraschiv, Matthew Shawn Wilson
  • Patent number: 11550729
    Abstract: Systems and methods for encryption support for virtual machines. An example method may comprise maintaining, by a virtual machine running on a host computer system, a list of free memory pages, wherein each entry in the list references a set of memory pages that are contiguous in a guest address space; receiving, from a hypervisor of the host computer system, a request for guest memory to be made available to the hypervisor, wherein the request comprises a minimum size of guest memory requested and a maximum size of guest memory; and responsive to identifying, in the list of free memory pages, a set of contiguous guest memory pages that is greater than or equal to the minimum size of memory requested, and less than or equal to the maximum size of memory requested, releasing the set of contiguous guest memory pages to the hypervisor.
    Type: Grant
    Filed: October 28, 2020
    Date of Patent: January 10, 2023
    Assignee: Red Hat, Inc.
    Inventors: Michael Tsirkin, David Hildenbrand
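    A minimal sketch of the guest-side response described above: the free-page list is scanned for a contiguous run whose size falls between the hypervisor's requested minimum and maximum, and that run is released. Page size and list format are illustrative assumptions.
      PAGE_SIZE = 4096

      def find_releasable_run(free_runs, minimum, maximum):
          """free_runs: (start_address, page_count) runs of contiguous free guest pages."""
          for start, count in free_runs:
              size = count * PAGE_SIZE
              if minimum <= size <= maximum:
                  return start, size                 # release this run to the hypervisor
          return None

      free_list = [(0x10000, 2), (0x40000, 64), (0x200000, 512)]
      print(find_releasable_run(free_list, 128 * 1024, 1024 * 1024))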
  • Patent number: 11543976
    Abstract: Techniques for reducing unsafe memory access, particularly when interacting with native libraries, are disclosed. The system may receive a memory address. The system may determine that the received memory address is not associated with an existing memory segment. The system selects a particular memory segment, of a plurality of memory segments. The memory segment may have a length of zero and a size corresponding to a size of a native heap. The system may return a reference to the particular memory segment.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: January 3, 2023
    Assignee: Oracle International Corporation
    Inventors: Maurizio Cimadamore, James Malcolm Laskey, Jorn Bender Vernee, Vladimir Vitalyevich Ivanov
  • Patent number: RE49601
    Abstract: A cloud system data management method for alleviating a data leakage problem, which occurs when data of a user is accessed by another user because a virtual data volume of the user is mounted to a virtual machine of the other user, includes creating a first virtual machine for a user and allocating a virtual data volume to the first virtual machine, setting an identifier of the virtual data volume as an identifier corresponding to a home identifier of the first virtual machine, determining, according to the identifier of the virtual data volume and a home identifier of a second virtual machine, whether the virtual data volume and the second virtual machine belong to a same user when the virtual data volume needs to be mounted to the second virtual machine, and forbidding the virtual data volume from being mounted to the second virtual machine when they do not belong to the same user.
    Type: Grant
    Filed: March 18, 2021
    Date of Patent: August 8, 2023
    Assignee: HUAWEI CLOUD COMPUTING TECHNOLOGIES CO., LTD.
    Inventor: Sihai Ye