Abstract: An apparatus is described that contains a processing core comprising a CPU core and at least one accelerator coupled to the CPU core. The CPU core comprises a pipeline having a translation look-aside buffer. The CPU core comprises logic circuitry to set a lock bit in attribute data of an entry within the translation look-aside buffer to lock a page of memory reserved for the accelerator.
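A minimal Python sketch of the lock-bit mechanism described in this abstract, modeling the TLB in software; the entry fields and helper names are illustrative assumptions, not the patented logic circuitry:

    from dataclasses import dataclass

    @dataclass
    class TLBEntry:
        virtual_page: int
        physical_page: int
        lock: bool = False  # attribute bit; when set, the page is pinned for the accelerator

    class TLB:
        def __init__(self):
            self.entries = {}  # virtual page -> TLBEntry

        def lock_page(self, virtual_page):
            # Set the lock bit so the page reserved for the accelerator stays resident.
            self.entries[virtual_page].lock = True

        def evict(self, virtual_page):
            # Refuse to evict a locked entry.
            entry = self.entries.get(virtual_page)
            if entry is None or entry.lock:
                return False
            del self.entries[virtual_page]
            return True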
Abstract: Techniques for using a host-side cache to accelerate virtual machine (VM) I/O are provided. In one embodiment, the hypervisor of a host system can intercept an I/O request from a VM running on the host system, where the I/O request is directed to a virtual disk residing on a shared storage device. The hypervisor can then process the I/O request by accessing a host-side cache that resides on one or more cache devices distinct from the shared storage device, where the accessing of the host-side cache is transparent to the VM.
Type:
Grant
Filed:
June 20, 2013
Date of Patent:
September 27, 2016
Assignee:
VMware, Inc.
Inventors:
Thomas A. Phelan, Mayank Rawat, Deng Liu, Kiran Madnani, Sambasiva Bandarupalli
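A hedged sketch of the intercept path this abstract describes, assuming a dict-like host-side cache and a simple storage object; the function name and the write-through policy are illustrative choices, not necessarily VMware's design:

    def handle_vm_io(request, host_cache, shared_storage):
        # Hypervisor-side intercept: the cache devices are distinct from the
        # shared storage, and the VM never observes the cache.
        key = (request.vdisk_id, request.block)
        if request.is_read:
            data = host_cache.get(key)
            if data is None:  # cache miss: fall through to the shared storage
                data = shared_storage.read(request.vdisk_id, request.block)
                host_cache[key] = data
            return data
        # Write path: write-through keeps the cache and storage consistent.
        shared_storage.write(request.vdisk_id, request.block, request.data)
        host_cache[key] = request.data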
Abstract: Apparatuses, systems, and a method for providing a processor architecture with data prefetching are described. In one embodiment, a system includes one or more processing units that include a first type of in-order pipeline to receive at least one data prefetch instruction. The one or more processing units include a second type of in-order pipeline having issue slots to receive instructions and a data prefetch queue to receive the at least one data prefetch instruction. The data prefetch queue may issue the at least one data prefetch instruction to the second type of in-order pipeline based upon one or more factors (e.g., at least one execution slot of the second type of in-order pipeline being available, or the priority of the data prefetch instruction).
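A brief Python sketch of how a data prefetch queue might gate issue on slot availability, as the abstract outlines; the pipeline interface (has_free_slot, issue) is assumed for illustration:

    from collections import deque

    class DataPrefetchQueue:
        def __init__(self):
            self.pending = deque()  # prefetch instructions awaiting issue

        def enqueue(self, prefetch_insn):
            self.pending.append(prefetch_insn)

        def try_issue(self, second_pipeline):
            # Issue only while the second in-order pipeline has a free slot;
            # priority could be honored by ordering the deque accordingly.
            while self.pending and second_pipeline.has_free_slot():
                second_pipeline.issue(self.pending.popleft())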
Abstract: An embodiment of an apparatus for securing program code stored in a non-volatile memory is introduced. A non-volatile memory contains a first region and a second region. Two NVMMCs (non-volatile memory management controllers) are respectively coupled to the two regions. A programming command-and-address decoder is coupled to the NVMMCs. The programming command-and-address decoder instructs the first NVMMC to erase data from the first region when receiving a command to erase the first region via a programming interface, and instructs the second NVMMC to erase data from the second region when receiving a command to erase the second region via the programming interface.
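A minimal sketch of the decoder's dispatch rule, assuming region objects with start/end bounds and NVMMC objects with an erase method (all illustrative names):

    def decode_and_dispatch(command, address, first_nvmmc, second_nvmmc,
                            first_region, second_region):
        # Route an erase command arriving on the programming interface to the
        # NVMMC that owns the addressed region.
        if command != "ERASE":
            raise ValueError("unsupported programming command")
        if first_region.start <= address < first_region.end:
            first_nvmmc.erase(first_region)
        elif second_region.start <= address < second_region.end:
            second_nvmmc.erase(second_region)
        else:
            raise ValueError("address outside both regions")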
Abstract: Technologies for supporting large pages in hardware prefetchers are described. A processor includes a processor core comprising a pipeline, cache memory, and a hardware prefetcher coupled to the processor core and the cache memory. The hardware prefetcher is a region-based hardware prefetcher that tracks memory regions of a predefined region size, the size being defined by software to be executed by the processor. The hardware prefetcher is operative to receive incoming requests and track different memory regions of the predefined size with multiple streams in a stream table with stream entries. The hardware prefetcher generates a prefetch request and determines whether the prefetch request goes beyond the page boundary of the memory region being tracked. When it does, the hardware prefetcher creates a new stream entry to track the successive memory region, allowing subsequent prefetch requests to be made to that successive memory region.
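A simplified Python model of the page-boundary check and stream-entry creation; the region size, prefetch distance, and table layout are assumptions, and stride detection is omitted:

    REGION_SIZE = 2 * 1024 * 1024  # predefined region size, e.g. a 2 MB large page

    class RegionPrefetcher:
        def __init__(self):
            self.streams = {}  # region base address -> next prefetch address

        def on_request(self, addr, distance=256):
            region = addr - (addr % REGION_SIZE)
            prefetch_addr = addr + distance
            if prefetch_addr // REGION_SIZE != addr // REGION_SIZE:
                # Prefetch crosses the region's page boundary: create a new
                # stream entry to track the successive memory region.
                self.streams[region + REGION_SIZE] = prefetch_addr
            else:
                self.streams[region] = prefetch_addr
            return prefetch_addr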
Abstract: The first storage area stores original data of an update target that is to be updated by a host. The controller divides data to be written over the original data of the update target stored in the first storage area into a plurality of pieces of update data, and distributes the pieces of update data to successive addresses. The second storage area stores the plurality of pieces of update data distributed by the controller. The third storage area stores information in which an update area address, which is an address of the first storage area to be overwritten by the plurality of pieces of update data, is associated with a storage destination address, which is an address of the second storage area that stores the plurality of pieces of update data.
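A sketch of the three-area bookkeeping the abstract lays out, with an assumed fixed piece size; list indices stand in for second-area addresses:

    class UpdateLog:
        PIECE = 4096  # assumed size of one piece of update data

        def __init__(self):
            self.second_area = []  # pieces stored at successive addresses
            self.third_area = {}   # update area address -> storage destination address

        def write(self, update_area_addr, data):
            # Divide the overwrite into pieces, distribute them to successive
            # addresses, and record the mapping in the third area.
            for i in range(0, len(data), self.PIECE):
                dest = len(self.second_area)  # next successive address
                self.second_area.append(data[i:i + self.PIECE])
                self.third_area[update_area_addr + i] = dest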
Abstract: A memory heap management facility is provided that is able to perform various management tasks, including, but not limited to, garbage collection, compaction, and/or re-ordering of objects within a heap. One or more of these management tasks improve system performance by limiting movement of pages in and out of virtual memory. The garbage collection technique selectively performs garbage collection such that certain objects, such as old but live, infrequently referenced objects, are not garbage collected each time garbage collection is performed.
Type:
Grant
Filed:
October 22, 2014
Date of Patent:
August 30, 2016
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
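One way the selective-collection predicate might look in Python; the thresholds and per-object metadata fields are illustrative assumptions, not IBM's implementation:

    import time

    def should_collect(obj_meta, now=None, age_limit=300.0, ref_limit=5):
        # Old but live, infrequently referenced objects are skipped so they
        # are not traced (and their pages not paged in) on every GC cycle.
        if now is None:
            now = time.monotonic()
        old = (now - obj_meta["allocated_at"]) > age_limit
        live = obj_meta["ref_count"] > 0
        cold = obj_meta["recent_refs"] < ref_limit
        return not (old and live and cold)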
Abstract: Multiple applications request data from multiple storage units over a computer network. The data is divided into segments and each segment is distributed randomly on one of several storage units, independent of the storage units on which other segments of the data are stored. At least one additional copy of each segment also is distributed randomly over the storage units, such that each segment is stored on at least two storage units. This random distribution of multiple copies of segments of data improves both scalability and reliability. When an application requests a selected segment of data, the request is processed by the storage unit with the shortest queue of requests. Random fluctuations in the load applied by multiple applications on multiple storage units are balanced nearly equally over all of the storage units.
Type:
Grant
Filed:
October 5, 2015
Date of Patent:
August 30, 2016
Assignee:
Avid Technology, Inc.
Inventors:
Eric C. Peters, Stanley Rabinowitz, Herbert R. Jacobs, Peter J. Fasciano
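A compact sketch of both halves of the scheme: random placement of two copies per segment, and shortest-queue selection on reads; the data structures are illustrative:

    import random

    def place_segment(units, copies=2):
        # Each segment lands on `copies` distinct, randomly chosen units,
        # independent of where other segments were placed.
        return random.sample(units, copies)

    def read_segment(segment_id, placement, queue_len):
        # Serve the request from whichever holding unit currently has the
        # shortest request queue, balancing load across the units.
        return min(placement[segment_id], key=lambda unit: queue_len[unit])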
Abstract: In response to receiving a selection to override an existing memory allocation of one or more regions of an external memory device within a memory register for a particular bridge from among multiple bridges within an integrated circuit, wherein the multiple bridges connect to a shared physical memory channel to the external memory device, a remap controller of the particular bridge reads, from a super rank register, one or more super rank values specifying one or more relocation regions of the external memory device connected to an interface of the integrated circuit. The remap controller remaps the memory register for the particular bridge with the one or more super rank values specified in the super rank register to relocate memory accesses by the bridge to the one or more relocation regions of the external memory device.
Type:
Grant
Filed:
October 23, 2014
Date of Patent:
August 23, 2016
Assignee:
GLOBALFOUNDRIES, Inc.
Inventors:
Robert M. Dinkjian, Brian Flachs, Michael Y. Lee, Bill N. On
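A rough Python sketch of the override step, assuming the super rank register is a list of relocation base addresses; the register layouts are invented for illustration:

    class RemapController:
        def __init__(self, super_rank_register):
            self.super_rank_register = super_rank_register  # relocation region bases
            self.memory_register = []

        def override(self):
            # On an override selection, rewrite this bridge's memory register
            # with the super rank values so its accesses are relocated.
            self.memory_register = list(self.super_rank_register)

        def relocate(self, region_index, offset):
            return self.memory_register[region_index] + offset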
Abstract: An electronic device includes a semiconductor memory unit. The semiconductor memory unit includes a plurality of first lines extending in a first direction, a plurality of second lines extending in a second direction intersecting the first direction, and a plurality of variable resistance patterns that is positioned at intersections of the first lines and the second lines and disposed between the first lines and the second lines in a vertical direction. Each of the variable resistance patterns has an elongated shape in a plan view and a portion of each of the variable resistance patterns is disposed outside a region in which a corresponding first line and a corresponding second line overlap with each other.
Type:
Grant
Filed:
October 24, 2014
Date of Patent:
August 16, 2016
Assignee:
SK HYNIX INC.
Inventors:
Kyoo-Ho Jung, Byung-Jick Cho, Jong-Chul Lee, Won-Ki Ju
Abstract: In one embodiment, one or more first computing devices receive updated values for user data associated with a plurality of users; and, for each user datum for which an updated value has been received, determine one or more second systems that each have subscribed to be notified when the value of the user datum is updated and each have a pre-established relationship with the user associated with the user datum; and push notifications to the second systems indicating that the value of the user datum has been updated, without providing the updated value for the user datum to the second systems.
Type:
Grant
Filed:
October 20, 2015
Date of Patent:
August 9, 2016
Assignee:
Facebook, Inc.
Inventors:
Wei Zhu, Ray C. He, Luke Jonathan Shepard
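A small sketch of the push step, assuming dict-based subscription and update records; note that the notification names the datum but deliberately omits its updated value:

    def on_user_data_updated(updates, subscriptions, has_relationship, push):
        # updates: (user_id, field) -> new value (the value is never forwarded)
        for (user_id, field), _new_value in updates.items():
            for system in subscriptions.get(field, ()):
                if has_relationship(system, user_id):
                    push(system, {"user": user_id, "field": field, "updated": True})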
Abstract: In the computer system, a storage system provides a storage-level virtual volume, based on thin provisioning technology, to a physical server on which a virtual machine is defined. The storage system releases the area of the logical volume corresponding to the storage-level virtual volume accessed by a virtual machine that is specified to be deleted, on the basis of storage-level virtual volume conversion information managed by the storage system.
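A minimal sketch of the release step, assuming the conversion information maps each VM's virtual volume to the logical-volume extents backing it (names are illustrative):

    def on_vm_delete(vm_id, conversion_info, logical_volume):
        # Release every logical-volume extent backing the deleted VM's
        # storage-level virtual volume, returning the thin-provisioned space.
        for extent in conversion_info.pop(vm_id, ()):
            logical_volume.release(extent)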
Abstract: A memory subsystem incorporating a die-stacked DRAM (DSDRAM) is disclosed. In one embodiment, a system includes a processor implemented on a silicon interposer of an integrated circuit (IC) package, a DSDRAM coupled to the processor, the DSDRAM implemented on the silicon interposer of the IC package, and a DRAM implemented separately from the IC package. The DSDRAM and the DRAM form a main memory having a contiguous address space comprising a range of physical addresses. The physical addresses of the DSDRAM occupy a first contiguous portion of the address space, while the DRAM occupies a second contiguous portion of the address space. Each physical address of the contiguous address space is augmented with a first bit that, when set, indicates that a page is stored in the DRAM and the DSDRAM.
Type:
Grant
Filed:
March 27, 2014
Date of Patent:
August 2, 2016
Assignee:
Oracle International Corporation
Inventors:
Jee Ho Ryoo, Karthik Ganesan, Yao-Min Chen
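A sketch of how software might decode the augmented physical address; the portion sizes and the position of the augmenting bit are assumptions for illustration:

    DSDRAM_SIZE = 1 << 32  # assumed size of the first contiguous portion
    DUAL_BIT = 1 << 48     # the augmenting bit: set when the page is in both memories

    def route(physical_addr):
        # Strip the augmenting bit, then classify the address by which
        # contiguous portion of the address space it falls into.
        in_both = bool(physical_addr & DUAL_BIT)
        addr = physical_addr & (DUAL_BIT - 1)
        portion = "DSDRAM" if addr < DSDRAM_SIZE else "DRAM"
        return portion, in_both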
Abstract: An electronic control apparatus includes: a plurality of processing devices each having a rewritable non-volatile memory and each executing a predetermined process in accordance with stored data that is stored in the non-volatile memory, wherein one of the processing devices extracts, from rewriting data transmitted from an external rewriting apparatus and including a plurality of individual rewriting data each corresponding to one of a plurality of dedicated address ranges, one of the individual rewriting data which corresponds to one of the dedicated address ranges that is individually allocated in advance for the one of the processing devices, and rewrites the stored data that is stored in the non-volatile memory of the one of the processing devices by using the one of the individual rewriting data.
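The extraction rule reduces to a range filter; a sketch assuming each piece of individual rewriting data is a record carrying its target address:

    def extract_own_rewriting_data(rewriting_data, dedicated_range):
        # Keep only the individual rewriting data whose dedicated address
        # range was allocated in advance to this processing device.
        lo, hi = dedicated_range
        return [rec for rec in rewriting_data if lo <= rec["address"] < hi]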
Abstract: A method includes monitoring, by a remapping manager, a system state of a computing device for the occurrence of a predefined event; detecting, by the remapping manager, the occurrence of the predefined event; and initiating, by the remapping manager upon the detection of the predefined event, a remapping of first encoded addresses stored in tags, where the first encoded addresses are associated with locations in main memory that are cached in a memory cache.
Abstract: The systems, processes, and methods described herein provide a mechanism for extracting virtual machine disk backups from LUN backups. The virtual machine disk backups may be stored in a deduplicated storage system. Thick virtual machine disks may be converted to thin virtual machine disks.
Type:
Grant
Filed:
March 27, 2014
Date of Patent:
July 12, 2016
Assignee:
EMC CORPORATION
Inventors:
Assaf Natanzon, Saar Cohen, Anestis Panidis
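A hedged sketch of one common thick-to-thin strategy, keeping only non-zero blocks; the block size and in-memory representation are assumptions, not EMC's on-disk format:

    BLOCK = 4096

    def thick_to_thin(thick_disk_bytes):
        # A thin disk records only allocated blocks; zero-filled blocks in
        # the thick image are dropped rather than stored.
        thin = {}
        for offset in range(0, len(thick_disk_bytes), BLOCK):
            block = thick_disk_bytes[offset:offset + BLOCK]
            if any(block):  # keep only blocks with non-zero content
                thin[offset] = block
        return thin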
Abstract: Described systems and methods allow conducting computer security operations, such as detecting malware and spyware, in a bare-metal computer system. In some embodiments, a first processor of a computer system executes the code samples under assessment, whereas a second, distinct processor is used to carry out the assessment and to control various hardware components involved in the assessment. Such hardware components include, among others, a memory shadower configured to detect changes to a memory connected to the first processor, and a storage shadower configured to detect an attempt to write to a non-volatile storage device of the computer system.
Abstract: Memories, buffered write command circuits, and methods for executing memory commands in a memory are described. In some embodiments, read commands that are received after write commands are executed internally prior to executing the earlier received write commands. Write commands are buffered so that they can be executed upon completion of the later received read command. One example of a buffered write command circuit includes a write command buffer to buffer write commands and propagate buffered write commands therethrough in response to a clock signal, and further includes write command buffer logic. The write command buffer logic generates an active clock signal to propagate the buffered write commands through the write command buffer for execution, suspends the active clock signal in response to receiving a read command after the write command is received, and restarts the active clock signal upon completion of the later received read command.
Type:
Grant
Filed:
August 2, 2012
Date of Patent:
June 28, 2016
Assignee:
Micron Technology, Inc.
Inventors:
Todd D. Farrell, Jeffrey P. Wright, Victor Wong, Alan J. Wilson
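A software model of the reordering behavior (the patent describes a hardware circuit; this Python analogue only illustrates the ordering rule, with invented names):

    from collections import deque

    class BufferedWriteScheduler:
        def __init__(self, memory):
            self.memory = memory          # dict: address -> value
            self.write_buffer = deque()   # buffered, not-yet-executed writes

        def write(self, addr, value):
            self.write_buffer.append((addr, value))  # buffer instead of executing

        def read(self, addr):
            # The later-received read executes first; forward from the buffer
            # if it holds a newer value for this address.
            for a, v in reversed(self.write_buffer):
                if a == addr:
                    value = v
                    break
            else:
                value = self.memory.get(addr)
            self.drain()  # "restart the clock": now execute the buffered writes
            return value

        def drain(self):
            while self.write_buffer:
                a, v = self.write_buffer.popleft()
                self.memory[a] = v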
Abstract: Embodiments disclosed herein provide a high performance content delivery system in which versions of content are cached for servicing web site requests containing the same uniform resource locator (URL). When a page is cached, certain metadata is also stored along with the page. That metadata includes a description of what extra attributes, if any, must be consulted to determine what version of content to serve in response to a request. When a request is fielded, a cache reader consults this metadata at a primary cache address, then extracts the values of attributes, if any are specified, and uses them in conjunction with the URL to search for an appropriate response at a secondary cache address. These attributes may include HTTP request headers, cookies, query string, and session variables. If no entry exists at the secondary address, the request is forwarded to a page generator at the back-end.
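A sketch of the two-step lookup: the primary entry holds the vary metadata, and the URL plus the extracted attribute values form the secondary key; names and structures are illustrative:

    def lookup(url, request, primary_cache, secondary_cache, page_generator):
        meta = primary_cache.get(url)
        if meta is None:
            return page_generator(request)
        # Extract the attributes the metadata says must be consulted
        # (e.g. headers, cookies, query string, session variables).
        attrs = tuple(request.get(name) for name in meta["vary_attributes"])
        page = secondary_cache.get((url, attrs))
        if page is None:  # no entry at the secondary address
            page = page_generator(request)  # forward to the back-end generator
            secondary_cache[(url, attrs)] = page
        return page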
Abstract: Systems and methods are disclosed for reducing or eliminating address lines that need to be routed to multiple related embedded memory blocks. In particular, one or more inputs are added to a block Random Access Memory (RAM) such that when one or more of the inputs are asserted, the address input to the Block RAM may be incremented prior to being used to retrieve data contents of the block RAM. Thus, if address <addr> is provided to the block RAM and the address increment signal is asserted, data may be read from location <addr+N> instead of <addr>, where N may be an integer. Block RAMs with such address arithmetic may be used to implement wide First-In-First-Out (FIFO) queues, wide memories, and/or data-burst accessible block RAMs.
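A behavioral model of the address-increment input in Python (the real design is a block RAM primitive; N and the interface here are illustrative):

    class BlockRAM:
        def __init__(self, contents, n=1):
            self.contents = contents  # backing storage for the block RAM
            self.n = n                # increment applied when `inc` is asserted

        def read(self, addr, inc=False):
            # With `inc` asserted, data is read from <addr+N> instead of <addr>,
            # so related memories can share one routed address bus.
            effective = addr + self.n if inc else addr
            return self.contents[effective]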