Patents Examined by Yong Choe
-
Patent number: 9934145
Abstract: In one embodiment of the present invention, a cache unit organizes data stored in an attached memory to optimize accesses to compressed data. In operation, the cache unit introduces a layer of indirection between a physical address associated with a memory access request and groups of blocks in the attached memory. The layer of indirection (virtual tiles) enables the cache unit to selectively store compressed data that would conventionally be stored in separate physical tiles included in a group of blocks in a single physical tile. Because the cache unit stores compressed data associated with multiple physical tiles in a single physical tile and, more specifically, in adjacent locations within the single physical tile, the cache unit coalesces the compressed data into contiguous blocks. Subsequently, upon performing a read operation, the cache unit may retrieve the compressed data conventionally associated with separate physical tiles in a single read operation.
Type: Grant
Filed: October 28, 2015
Date of Patent: April 3, 2018
Assignee: NVIDIA Corporation
Inventors: Praveen Krishnamurthy, Peter B. Holmquist, Wishwesh Gandhi, Timothy Purcell, Karan Mehra, Lacky Shah
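A minimal Python model of the indirection idea (class and slot names are invented for illustration, not the patented implementation): an indirection table maps virtual tiles to slots inside physical tiles, so compressed data from several virtual tiles lands in adjacent slots of one physical tile and can be fetched in a single read.

```python
class VirtualTileCache:
    """Illustrative sketch: virtual tiles indirect to (physical tile,
    slot) pairs, coalescing compressed data into one physical tile."""

    SLOTS_PER_TILE = 4  # assumes a 4:1 compression ratio

    def __init__(self):
        self.indirection = {}   # virtual tile -> (physical tile, slot)
        self.next_slot = 0

    def store_compressed(self, virtual_tile):
        # Adjacent slots fill one physical tile before moving to the next.
        phys, slot = divmod(self.next_slot, self.SLOTS_PER_TILE)
        self.indirection[virtual_tile] = (phys, slot)
        self.next_slot += 1

    def reads_needed(self, virtual_tiles):
        # One read per distinct physical tile touched.
        return len({self.indirection[v][0] for v in virtual_tiles})

cache = VirtualTileCache()
for v in range(4):
    cache.store_compressed(v)
# Four compressed virtual tiles now occupy a single physical tile,
# so fetching all four costs one read instead of four.
```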
-
Patent number: 9933941
Abstract: A memory system includes a nonvolatile memory including a plurality of blocks as data erase units, a measuring unit which measures an erase time at which data of each block is erased, and a block controller which writes data supplied from at least an exterior into a first block which is set in a free state and whose erase time is oldest.
Type: Grant
Filed: September 20, 2016
Date of Patent: April 3, 2018
Assignee: TOSHIBA MEMORY CORPORATION
Inventors: Kazuya Kitsunai, Shinichi Kanno, Hirokuni Yano, Toshikatsu Hida, Junji Yano
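The allocation rule above (write into the free block whose erase time is oldest, a classic wear-leveling heuristic) can be sketched as follows; the class and field names are invented for illustration.

```python
import time

class BlockController:
    """Illustrative sketch, not the patented implementation: select the
    free block with the oldest recorded erase time so erase/write cycles
    spread evenly across blocks."""

    def __init__(self, num_blocks):
        # erase_time[i] holds the time block i was last erased
        self.erase_time = [0.0] * num_blocks
        self.free_blocks = set(range(num_blocks))
        self.data = {}

    def erase(self, block):
        self.erase_time[block] = time.monotonic()
        self.free_blocks.add(block)
        self.data.pop(block, None)

    def write(self, payload):
        # "First block": free, and oldest erase time.
        block = min(self.free_blocks, key=lambda b: self.erase_time[b])
        self.free_blocks.discard(block)
        self.data[block] = payload
        return block

ctl = BlockController(4)
ctl.erase_time = [3.0, 1.0, 2.0, 4.0]   # simulated prior erase history
first = ctl.write(b"hello")             # block 1: oldest erase time
```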
-
Patent number: 9933968
Abstract: A system and method for adapting a secure application execution environment to support multiple configurations includes determining a maximum configuration for the secure application execution environment, determining an optimal configuration for the secure application environment, and, at load time, configuring the secure application execution environment for the optimal configuration.
Type: Grant
Filed: April 30, 2015
Date of Patent: April 3, 2018
Assignee: Intel Corporation
Inventor: Bin Xing
-
Patent number: 9921834
Abstract: Discontiguous storage locations are prefetched by a prefetch instruction. Addresses of the discontiguous storage locations are provided by a list directly or indirectly specified by a parameter of the prefetch instruction, along with metadata and information about the list entries. Fetching of corresponding data blocks to cache lines is initiated. A processor may enter transactional execution mode and memory instructions of a program may be executed using the prefetched data blocks.
Type: Grant
Filed: February 15, 2017
Date of Patent: March 20, 2018
Assignee: International Business Machines Corporation
Inventors: Fadi Y. Busaba, Dan F. Greiner, Michael Karl Gschwind, Maged M. Michael, Valentina Salapura, Eric M. Schwarz, Timothy J. Slegel
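A simplified software model of list-directed prefetch (the cache/memory representation is an assumption for illustration): each list entry names a discontiguous address, and the cache line covering it is fetched so later loads hit in cache.

```python
LINE_SIZE = 64  # assumed cache line size in bytes

def prefetch_list(cache, memory, address_list):
    """Sketch: for each discontiguous address in the list, initiate a
    fetch of the covering cache line (if not already cached)."""
    for addr in address_list:
        line = addr // LINE_SIZE * LINE_SIZE   # align down to line start
        if line not in cache:
            cache[line] = memory.get(line)
    return cache

cache = {}
memory = {0: "lineA", 128: "lineB"}            # line-addressed backing store
prefetch_list(cache, memory, [5, 130])         # two discontiguous locations
# Both covering lines (0 and 128) are now cached before any
# transactional loads touch them.
```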
-
Patent number: 9921957
Abstract: A method is performed at an electronic device with a display, one or more processors, volatile memory, and non-volatile memory that stores one or more programs for execution by the one or more processors. The method includes periodically comparing an amount of free volatile memory to a threshold level. The amount of free volatile memory is compared to the threshold level with a first periodicity when the display is off and with a second periodicity that is shorter than the first periodicity when the display is on. The method also includes, in response to a determination that the amount of free volatile memory does not satisfy the threshold level, deallocating volatile memory by terminating one or more processes based on priority levels of the one or more processes.
Type: Grant
Filed: August 21, 2017
Date of Patent: March 20, 2018
Assignee: FACEBOOK, INC.
Inventors: Dung Nguyen Tien, Fraidun Akhi, Jonathan Cook
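Both parts of the method (display-dependent polling periodicity, and priority-ordered termination when free memory misses the threshold) can be sketched as below; units, interval values, and the process tuple layout are invented for illustration.

```python
def poll_interval(display_on, short=5, long=60):
    # The free-memory comparison runs with a shorter periodicity
    # (more often) while the display is on.
    return short if display_on else long

def check_and_reclaim(free_mem, threshold, processes):
    """Sketch of the reclaim step: if free volatile memory is below the
    threshold, terminate processes lowest-priority-first until the
    threshold is satisfied. `processes` holds (name, priority, mem_used)
    tuples; a higher priority number means more important."""
    terminated = []
    for name, priority, mem in sorted(processes, key=lambda p: p[1]):
        if free_mem >= threshold:
            break
        free_mem += mem            # reclaiming the process's memory
        terminated.append(name)
    return free_mem, terminated

free, killed = check_and_reclaim(
    free_mem=100, threshold=250,
    processes=[("mail", 3, 80), ("cache", 1, 120), ("music", 2, 60)])
# The two lowest-priority processes are terminated; "mail" survives.
```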
-
Patent number: 9921872
Abstract: In a transactional memory environment, a computer-implemented method includes receiving one or more memory locations and broadcasting, by a first processor to one or more additional processors, a cross-interrogate. The cross-interrogate includes the one or more memory locations. The computer-implemented method further includes, by the one or more additional processors, receiving the cross-interrogate, not aborting any current transaction based on the cross-interrogate, and generating an indication. The indication comprises whether the one or more memory locations is in use for the current transaction by that of the one or more additional processors. The computer-implemented method further includes sending the indication from each of the one or more additional processors to the first processor and, by the first processor, combining each indication from the one or more additional processors to yield a status code and returning the status code.
Type: Grant
Filed: May 26, 2016
Date of Patent: March 20, 2018
Assignee: International Business Machines Corporation
Inventors: Dan F. Greiner, Maged M. Michael, Valentina Salapura, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
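A hypothetical model of the combining step (the per-CPU read/write-set representation and the bit-per-CPU status encoding are assumptions): each additional processor reports, without aborting its transaction, whether any interrogated location is in its transaction's footprint, and the first processor combines the indications into a status code.

```python
def cross_interrogate(other_cpus, locations):
    """Sketch: combine per-CPU in-use indications into a status code
    (bit i set = CPU i's current transaction uses a queried location).
    Note that no transaction is aborted by the interrogate itself."""
    status = 0
    for i, cpu in enumerate(other_cpus):
        in_use = any(loc in cpu["tx_read_set"] or loc in cpu["tx_write_set"]
                     for loc in locations)
        if in_use:
            status |= 1 << i
    return status

cpus = [
    {"tx_read_set": {0x10}, "tx_write_set": set()},   # CPU 0
    {"tx_read_set": set(), "tx_write_set": {0x20}},   # CPU 1
]
```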
-
Patent number: 9916102
Abstract: A technique for managing storage space in a data storage system generates liability values on a per-family basis, with each family including files in the file system that are related to one another by snapping. Each family thus groups together files in the file system that share at least some blocks among one another based on snapshot activities. Distinct files that do not share blocks based on snapping are provided in separate families. The file system leverages the snap-based relationships among family members to produce more accurate estimates of liability than would otherwise be feasible.
Type: Grant
Filed: June 29, 2016
Date of Patent: March 13, 2018
Assignee: EMC IP Holding Company LLC
Inventors: Ivan Bassov, Walter C. Forrester, Michal Marko, Ahsan Rashid
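The per-family accounting can be illustrated with a small sketch (the block-set model is an assumption): because snap-related files share blocks, summing per-file block counts overstates liability, while counting each family's union of blocks gives a tighter estimate.

```python
def family_liability(files, family_of):
    """Sketch: group files by snap family and count each family's
    *union* of blocks, so blocks shared between a file and its snaps
    are charged once. `files` maps file -> set of block numbers;
    `family_of` maps file -> family id."""
    per_family = {}
    for f, blocks in files.items():
        per_family.setdefault(family_of[f], set()).update(blocks)
    return {fam: len(blocks) for fam, blocks in per_family.items()}

files = {
    "a":      {1, 2, 3},   # primary file
    "a_snap": {1, 2, 4},   # snap sharing blocks 1 and 2 with "a"
    "b":      {9},         # unrelated file, its own family
}
family_of = {"a": 0, "a_snap": 0, "b": 1}
# Family 0's liability is 4 blocks (union), not 6 (naive per-file sum).
```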
-
Patent number: 9910780
Abstract: Embodiments herein pre-load memory translations used to perform virtual to physical memory translations in a computing system that switches between virtual machines (VMs). Before a processor switches from executing the current VM to the new VM, a hypervisor may retrieve previously saved memory translations for the new VM and load them into cache or main memory. Thus, when the new VM begins to execute, the corresponding memory translations are in cache rather than in storage. Thus, when these memory translations are needed to perform virtual to physical address translations, the processor does not have to wait to pull the memory translations from slow storage devices (e.g., a hard disk drive).
Type: Grant
Filed: October 28, 2015
Date of Patent: March 6, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Shakti Kapoor
-
Patent number: 9910712
Abstract: In an example, a method of replication between computing systems includes replicating virtual machine files from primary storage in a primary computing system to secondary storage in a secondary computing system. The virtual machine files implement a plurality of virtual machines in the primary computing system and a plurality of replica virtual machines in the secondary computing system. The method further includes replicating configuration data, from virtualization software in the primary computing system to secondary virtualization software installed on a host computer in the secondary computing system, through a platform management system in the host computer while the host computer is in a low-power state.
Type: Grant
Filed: June 15, 2015
Date of Patent: March 6, 2018
Assignee: VMware, Inc.
Inventor: Jinto Antony
-
Patent number: 9904569
Abstract: Embodiments herein pre-load memory translations used to perform virtual to physical memory translations in a computing system that switches between virtual machines (VMs). Before a processor switches from executing the current VM to the new VM, a hypervisor may retrieve previously saved memory translations for the new VM and load them into cache or main memory. Thus, when the new VM begins to execute, the corresponding memory translations are in cache rather than in storage. Thus, when these memory translations are needed to perform virtual to physical address translations, the processor does not have to wait to pull the memory translations from slow storage devices (e.g., a hard disk drive).
Type: Grant
Filed: January 4, 2016
Date of Patent: February 27, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Shakti Kapoor
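The pre-load step described in this abstract (and in related patent 9910780 above) can be modeled with a small sketch; the dictionary-based translation cache is an assumption for illustration.

```python
def switch_vm(tlb_cache, saved, cur_vm, new_vm):
    """Sketch: before the processor switches to new_vm, save the
    outgoing VM's virtual->physical translations, then pre-load the new
    VM's previously saved translations into the cache so its first
    translations hit in cache instead of faulting to slow storage."""
    saved[cur_vm] = dict(tlb_cache)        # save outgoing VM's translations
    tlb_cache.clear()
    tlb_cache.update(saved.get(new_vm, {}))  # pre-load incoming VM's
    return tlb_cache

saved = {"vm2": {0x1000: 0x8000}}          # translations saved earlier
tlb = {0x2000: 0x9000}                     # vm1 currently executing
switch_vm(tlb, saved, "vm1", "vm2")
# vm2's translation is now cached before vm2 begins to execute.
```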
-
Patent number: 9898477
Abstract: A method, article of manufacture, and apparatus for providing a site cache manager is discussed. Data objects may be read from a site cache rather than an authoritative object store. This provides performance benefits when a client reading the data has a better connection to the site cache than to the authoritative object store. The site cache manager controls the volume of stored data on the site cache to enhance performance by increasing the frequency of data objects being read from or written to the site cache rather than the authoritative object store.
Type: Grant
Filed: September 24, 2015
Date of Patent: February 20, 2018
Assignee: EMC IP HOLDING COMPANY LLC
Inventor: Vijay Panghal
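The read path described here follows a standard cache-aside pattern, sketched below (the dict-based stores are an assumption for illustration): serve from the site cache when the object is present, otherwise fall back to the authoritative object store and populate the cache for later readers.

```python
def read_object(key, site_cache, authoritative_store):
    """Sketch of the site-cache read path: prefer the (closer, faster)
    site cache; on a miss, read from the authoritative object store and
    cache the result so subsequent reads avoid the slower store."""
    if key in site_cache:
        return site_cache[key]
    value = authoritative_store[key]
    site_cache[key] = value
    return value

store = {"obj1": b"payload"}
cache = {}
first_read = read_object("obj1", cache, store)   # miss: fills the cache
```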
-
Patent number: 9898219
Abstract: In one aspect, a method includes associating disk devices with containers based on a policy, allocating a disk device to a container based on the policy and allowing access to the disk device from the container. In another aspect, an apparatus includes electronic hardware circuitry configured to associate disk devices with containers based on a policy, allocate a disk device to a container based on the policy and allow access to the disk device from the container. In a further aspect, an article includes a non-transitory computer-readable medium that stores computer-executable instructions. The instructions cause a machine to associate disk devices with containers based on a policy, allocate a disk device to a container based on the policy and allow access to the disk device from the container.
Type: Grant
Filed: June 30, 2014
Date of Patent: February 20, 2018
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Eron D. Wright, Jimmy K. Seto
-
Patent number: 9891826
Abstract: A Discard command is received which includes an address on a specific SSD of a plurality of SSDs configured as a RAID device, wherein the Discard command is associated with data associated with the address. In response to receiving the Discard command, a trim metadata flag is set in an entry associated with the address in a mapping table, wherein a trim metadata flag that is set indicates that a Discard command was received for a corresponding address.
Type: Grant
Filed: September 24, 2015
Date of Patent: February 13, 2018
Assignee: SK Hynix Inc.
Inventors: Tae Il Um, Hwansoo Han, Mehryar Rahmatian
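The mapping-table update can be sketched directly (the entry layout is invented for illustration): a Discard sets a trim flag on the address's entry rather than immediately erasing the data.

```python
def handle_discard(mapping_table, address):
    """Sketch: on a Discard for `address`, set the trim metadata flag in
    that address's mapping-table entry. A set flag records that a
    Discard command was received for the corresponding address."""
    entry = mapping_table.setdefault(address, {"phys": None, "trimmed": False})
    entry["trimmed"] = True
    return mapping_table

table = {0x100: {"phys": 0x7000, "trimmed": False}}
handle_discard(table, 0x100)
# The physical mapping is untouched; only the trim flag is set.
```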
-
Patent number: 9875179
Abstract: The semiconductor memory device includes: a memory unit including a plurality of memory blocks; a decoder suitable for storing bad block information about the plurality of memory blocks, and outputting the bad block information in response to an address signal; and a control logic suitable for controlling the memory unit and the decoder to update a status register (SR) code in response to the bad block information when the semiconductor memory device in a ready state enters a busy state, and to perform a general operation according to the updated SR code and a command.
Type: Grant
Filed: September 24, 2015
Date of Patent: January 23, 2018
Assignee: SK Hynix Inc.
Inventor: Tai Kyu Kang
-
Patent number: 9874915
Abstract: A mass data storage system includes a data manager that selects a subset of storage resources for storage of a data file and generates location metadata for the data file defining positions of each storage resource in the selected subset of storage resources. According to one implementation, the data manager further defines an extended file attribute associated with the location metadata.
Type: Grant
Filed: June 15, 2015
Date of Patent: January 23, 2018
Assignee: SEAGATE TECHNOLOGY LLC
Inventor: Guy David Frick
-
Patent number: 9864545
Abstract: Systems, methods, and/or devices are used to automate read operations performed at an open erase block. In one aspect, the method includes: receiving a read command, at a storage device, to read data from non-volatile memory of the storage device. In response to receiving the read command, the method further includes: 1) reading data using a first set of memory operation parameters in response to a determination that the read command is not for reading data from a predefined portion of an open erase block (e.g., an erase block that is determined to be an open erase block) of the non-volatile memory and 2) reading data using a second set of memory operation parameters (i.e., the second set is distinct from the first set) in response to a determination that the read command is for reading data from the predefined portion of an open erase block of the non-volatile memory.
Type: Grant
Filed: October 28, 2015
Date of Patent: January 9, 2018
Assignee: SanDisk Technologies LLC
Inventors: Robert W. Ellis, Vidyabhushan Mohan, Jack Edward Frayer
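The parameter-selection logic can be sketched as below; the address model, parameter names, and values are all invented for illustration (partially programmed flash blocks typically need different read settings than fully programmed ones, which motivates the two parameter sets).

```python
def pick_read_params(addr, open_blocks, predefined_portion, normal, adjusted):
    """Sketch: reads targeting the predefined portion of an open
    (still-being-programmed) erase block use a second, distinct set of
    memory operation parameters; all other reads use the first set."""
    block, offset = addr
    if block in open_blocks and offset in predefined_portion:
        return adjusted
    return normal

# Hypothetical parameter sets (field name and values are illustrative).
normal = {"read_threshold_mv": 0}
adjusted = {"read_threshold_mv": -50}
```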
-
Patent number: 9858188
Abstract: Statistical data is used to enable or disable snooping on a bus of a processor. A command is received via a first bus or a second bus communicably coupling processor cores and caches of chiplets on the processor. Cache logic on a chiplet determines whether or not a local cache on the chiplet can satisfy a request for data specified in the command. In response to determining that the local cache can satisfy the request for data, the cache logic updates statistical data maintained on the chiplet. The statistical data indicates a probability that the local cache can satisfy a future request for data. Based at least in part on the statistical data, the cache logic determines whether to enable or disable snooping on the second bus by the local cache.
Type: Grant
Filed: June 8, 2015
Date of Patent: January 2, 2018
Assignee: International Business Machines Corporation
Inventors: Guy L. Guthrie, Hien M. Le, Hugh Shen, Derek E. Williams, Phillip G. Williams
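A minimal model of the statistical decision (the hit-ratio estimator and threshold value are assumptions; the patent does not specify them): track how often the local cache could satisfy requests and keep snooping on the second bus only while the estimated probability stays high enough.

```python
class SnoopFilter:
    """Sketch: per-chiplet statistics drive the enable/disable decision
    for snooping on the second bus by the local cache."""

    def __init__(self, threshold=0.5):   # threshold is illustrative
        self.hits = 0
        self.total = 0
        self.threshold = threshold

    def record(self, local_hit):
        # Update statistics each time a command is examined.
        self.total += 1
        if local_hit:
            self.hits += 1

    def snoop_enabled(self):
        if self.total == 0:
            return True        # no statistics yet: keep snooping
        # Estimated probability the local cache satisfies a future request.
        return self.hits / self.total >= self.threshold

f = SnoopFilter(threshold=0.5)
for hit in [True, True, False, True]:
    f.record(hit)
```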
-
Patent number: 9858242
Abstract: Systems and methods may be provided to support memory access by packet communication and/or direct memory access. In one aspect, a memory controller may be provided for a processing device containing a plurality of computing resources. The memory controller may comprise a first interface to be coupled to a router. The first interface may be configured to transmit and receive packets. Each packet may comprise a header that may contain a routable address and a packet opcode specifying an operation to be performed in accordance with a network protocol. The memory controller may further comprise a memory bus port coupled to a plurality of memory slots that are configured to receive memory banks to form a memory and a controller core coupled to the first interface. The controller core may be configured to decode a packet received at the first interface and perform an operation specified in the received packet.
Type: Grant
Filed: December 6, 2016
Date of Patent: January 2, 2018
Assignee: KnuEdge Incorporated
Inventors: Douglas A. Palmer, Ramon Zuniga
-
Patent number: 9858197
Abstract: A cache management apparatus includes an access pattern analysis unit configured to analyze an access pattern of each of one or more pages present in a first cache by monitoring data input/output (I/O) requests, a page class management unit configured to determine a class of each of the pages based on results of the analysis performed by the access pattern analysis unit, and a page transfer management unit configured to transfer one or more pages classified into a first class including pages to be transferred, to a second cache based on results of the determination performed by the page class management unit.
Type: Grant
Filed: August 15, 2014
Date of Patent: January 2, 2018
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Myung-June Jung, Ju-Pyung Lee
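The three units (analysis, classification, transfer) can be condensed into one sketch; the classification criterion here (a bare access count with an invented cutoff, treating infrequently accessed pages as the transfer class) is an assumption, since the abstract does not define the classes.

```python
def classify_and_transfer(access_counts, first_cache, second_cache, hot_min=3):
    """Sketch: classify each page in the first cache by its observed
    access pattern (modeled as an access count), then transfer pages in
    the transfer class to the second cache. `hot_min` is an invented
    cutoff separating the two classes."""
    for page in list(first_cache):
        if access_counts.get(page, 0) < hot_min:   # assumed transfer class
            second_cache[page] = first_cache.pop(page)
    return first_cache, second_cache

first = {"a": "page-a", "b": "page-b"}
second = {}
counts = {"a": 5, "b": 1}        # from monitoring I/O requests
classify_and_transfer(counts, first, second)
```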
-
Patent number: 9852080
Abstract: Efficiently generating selection masks for row selections within indexed address spaces is disclosed. In this regard, in one aspect, an indexed array circuit is provided, comprising a start indicator that indicates a start indexed array row of a row selection, and an end indicator that indicates an end indexed array row of the row selection. The indexed array circuit further comprises a plurality of indexed array rows ordered in a logical sequence, each comprising a row-level compare circuit. Each row-level compare circuit is configured to generate a selection mask indicator based on a first parallel comparison of subsets of bits of a logical address of the indexed array row with corresponding subsets of bits of the start indicator, and a second parallel comparison of subsets of bits of the logical address of the indexed array row with corresponding subsets of bits of the end indicator.
Type: Grant
Filed: March 31, 2016
Date of Patent: December 26, 2017
Assignee: QUALCOMM Incorporated
Inventors: David Paul Hoff, Milind Ram Kulkarni, Benjamin John Bowers
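A software model of the row-level compare (the 8-bit address width and 4-bit subset width are illustrative): each row compares subsets of its logical address against the start and end indicators and asserts its mask bit when start <= address <= end.

```python
CHUNK = 4   # bits per sub-comparator (illustrative subset width)

def cmp_by_subsets(addr, ref, width=8):
    """Model of one row-level compare: split the row address and the
    indicator into bit subsets (compared in parallel in hardware), then
    combine the subset results from most-significant subset down.
    Returns -1, 0, or 1 for addr <, ==, > ref."""
    for shift in range(width - CHUNK, -1, -CHUNK):
        a = (addr >> shift) & ((1 << CHUNK) - 1)
        r = (ref >> shift) & ((1 << CHUNK) - 1)
        if a != r:
            return 1 if a > r else -1
    return 0

def selection_mask(num_rows, start, end):
    # A row's selection mask indicator is set when its logical address
    # lies between the start and end indicators, inclusive.
    return [cmp_by_subsets(row, start) >= 0 and cmp_by_subsets(row, end) <= 0
            for row in range(num_rows)]
```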