Read-modify-write (rmw) Patents (Class 711/155)
-
Patent number: 11347609
Abstract: In an approach to failed media channel recovery throttling, responsive to detecting a programming error on an addressable unit during programming of a block stripe, the block stripe is placed on a recovery/data migration queue. An error counter for the addressable unit on which the programming error occurred is incremented. The block stripes from the recovery/data migration queue are built excluding a specific channel containing the addressable unit on which the programming error occurred. Responsive to determining that the recovery/data migration queue is empty, building the block stripes resumes using the plurality of channels, including the specific channel containing the addressable unit on which the programming error occurred. Responsive to determining that a number of errors on a specific addressable unit exceeds a predetermined threshold based on the error counter for the specific addressable unit, the specific addressable unit is decommissioned.
Type: Grant
Filed: April 29, 2021
Date of Patent: May 31, 2022
Assignee: International Business Machines Corporation
Inventors: Matthew Szekely, Robert Edward Galbraith
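
The throttling loop in this abstract (recovery queue, per-unit error counter, channel exclusion, decommission threshold) can be sketched in a few lines; this is a minimal illustration assuming invented names such as CHANNELS and ERROR_THRESHOLD, which the abstract does not specify:

```python
# Hypothetical sketch of the channel-throttling recovery flow; not the patent's implementation.
from collections import deque

CHANNELS = list(range(8))          # assumed channel IDs
ERROR_THRESHOLD = 3                # assumed decommission threshold

class RecoveryThrottler:
    def __init__(self):
        self.recovery_queue = deque()      # block stripes awaiting recovery/migration
        self.error_counts = {}             # addressable unit -> error count
        self.decommissioned = set()
        self.excluded_channels = set()

    def on_program_error(self, stripe, unit, channel):
        """Handle a programming error on `unit`, reached via `channel`."""
        self.recovery_queue.append(stripe)
        self.error_counts[unit] = self.error_counts.get(unit, 0) + 1
        self.excluded_channels.add(channel)
        if self.error_counts[unit] > ERROR_THRESHOLD:
            self.decommissioned.add(unit)  # retire the repeatedly failing unit

    def channels_for_new_stripe(self):
        """Channels to use when building the next block stripe."""
        if self.recovery_queue:            # still draining: skip failing channels
            return [c for c in CHANNELS if c not in self.excluded_channels]
        self.excluded_channels.clear()     # queue empty: resume on all channels
        return list(CHANNELS)
```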
-
Patent number: 11348934
Abstract: According to one embodiment, a memory system includes a semiconductor memory and a controller. The semiconductor memory includes blocks each containing memory cells. The controller is configured to instruct the semiconductor memory to execute a first operation and a second operation. In the first operation and the second operation, the semiconductor memory selects at least one of the blocks, and applies at least one voltage to all memory cells contained in said selected blocks. A number of blocks to which said voltage is applied per unit time in the second operation is larger than that in the first operation.
Type: Grant
Filed: February 23, 2021
Date of Patent: May 31, 2022
Assignee: Kioxia Corporation
Inventors: Takehiko Amaki, Yoshihisa Kojima, Toshikatsu Hida, Marie Grace Izabelle Angeles Sia, Riki Suzuki, Shohei Asami
-
Patent number: 11327653
Abstract: A storage system for continuing I/O without affecting drive box addition to a host computer includes: a plurality of drive boxes for connecting to a computer device that transmits commands for data reads or writes; and a storage controller connected to the drive boxes. A first drive box provides a first storage region to the computer device. The storage controller manages correspondence between the first storage region and a physical storage region of the drives constituting the first storage region. The first drive box receives a command for the first storage region from the computer device and transfers the command to the storage controller. The storage controller generates a data transfer command including a data storage destination based on the address management table, and transfers the command to the first drive box. The first drive box then transfers the data transfer command to the second drive box.
Type: Grant
Filed: March 5, 2020
Date of Patent: May 10, 2022
Assignee: HITACHI, LTD.
Inventors: Nobuhiro Yokoi, Hirotoshi Akaike, Ryosuke Tatsumi, Koji Hosogi, Akira Yamamoto
-
Patent number: 11257271
Abstract: In an aspect, an update unit can evaluate condition(s) in an update request and update one or more memory locations based on the condition evaluation. The update unit can operate atomically to determine whether to effect the update and to make the update. Updates can include one or more of incrementing and swapping values. An update request may specify one of a pre-determined set of update types. Some update types may be conditional and others unconditional. The update unit can be coupled to receive update requests from a plurality of computation units. The computation units may not have privileges to directly generate write requests to be effected on at least some of the locations in memory. The computation units can be fixed function circuitry operating on inputs received from programmable computation elements. The update unit may include a buffer to hold received update requests.
Type: Grant
Filed: September 26, 2016
Date of Patent: February 22, 2022
Assignee: Imagination Technologies Limited
Inventors: Steven J. Clohset, Jason R. Redgrave, Luke T. Peterson
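
A small software model of the conditional and unconditional update behaviour described here, assuming a made-up set of update types ("inc", "swap", "cond_swap", "cond_inc"); the patent's actual update types and request format are not given in the abstract:

```python
# Illustrative model of an atomic "update unit"; names and update types are assumptions.
import threading

class UpdateUnit:
    def __init__(self, size):
        self.mem = [0] * size
        self._lock = threading.Lock()      # models the unit's atomicity

    def request(self, op, addr, value=None, expected=None):
        with self._lock:                   # condition evaluation and update happen atomically
            cur = self.mem[addr]
            if op == "inc":                              # unconditional increment
                self.mem[addr] = cur + 1
            elif op == "swap":                           # unconditional swap
                self.mem[addr] = value
            elif op == "cond_swap" and cur == expected:  # conditional swap
                self.mem[addr] = value
            elif op == "cond_inc" and cur < value:       # conditional increment
                self.mem[addr] = cur + 1
            return cur                     # previous value returned to the requester

unit = UpdateUnit(16)
unit.request("inc", 3)                                   # mem[3] becomes 1
unit.request("cond_swap", 3, value=42, expected=1)       # succeeds: mem[3] was 1
```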
-
Patent number: 11222240
Abstract: A data processing method for a convolutional neural network includes: (a) obtaining a matrix parameter of an eigenmatrix; (b) reading corresponding data in an image data matrix from a first buffer space based on the matrix parameter through a first bus, to obtain a next to-be-expanded data matrix, and sending and storing the to-be-expanded data matrix to a second preset buffer space through a second bus; (c) reading the to-be-expanded data matrix, and performing data expansion on the to-be-expanded data matrix to obtain expanded data; (d) reading a preset number of pieces of unexpanded data in the image data matrix, sending and storing the unexpanded data to the second preset buffer space, and updating, based on the unexpanded data, the to-be-expanded data matrix; and (e) repeating (c) and (d) until all data in the image data matrix is completely read out on the to-be-expanded data matrix.
Type: Grant
Filed: January 17, 2019
Date of Patent: January 11, 2022
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Yangming Zhang, Jianlin Gao, Heng Zhang
-
Patent number: 11210024
Abstract: A computer-implemented method according to one embodiment includes initiating a read-modify-write (RMW) operation; assigning the RMW operation to a thread; identifying a storage device associated with the RMW operation; assigning a log block within the storage device to the thread; determining a free shadow block location within the storage device; creating a copy of data to be written to the storage device during the RMW operation; writing the copy of the data to the free shadow block location within the storage device; updating the log block within the storage device to point to the free shadow block location to which the copy of the data is written; and writing the data to one or more blocks of a home area of the storage device.
Type: Grant
Filed: December 16, 2019
Date of Patent: December 28, 2021
Assignee: International Business Machines Corporation
Inventors: Zhenxing Han, Robert Michael Rees, Steven Robert Hetzler, Veera W. Deenadhayalan
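
The log-block/shadow-block sequence in this abstract can be illustrated with a minimal sketch; the Device layout, the free-slot policy, and the helper names below are assumptions made for illustration, not the patent's actual structures:

```python
# Hedged sketch of the RMW logging sequence: copy to shadow, log the shadow, then write home.
class Device:
    def __init__(self, n_blocks):
        self.home = [None] * n_blocks      # home-area blocks
        self.shadow = {}                   # shadow block location -> copied data
        self.log = {}                      # thread id -> shadow location in use

def rmw_write(device, thread_id, home_block, new_data):
    # 1. assign a log block (per thread) and find a free shadow block location (assumed policy)
    shadow_loc = len(device.shadow)
    # 2. create a copy of the data to be written and place it in the shadow block
    device.shadow[shadow_loc] = list(new_data)
    # 3. update the log block to point at the shadow copy
    device.log[thread_id] = shadow_loc
    # 4. finally write the data to the home-area block
    device.home[home_block] = list(new_data)
    return shadow_loc

dev = Device(n_blocks=8)
rmw_write(dev, thread_id=0, home_block=2, new_data=[1, 2, 3])
```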
-
Patent number: 11199999
Abstract: A processing device, operatively coupled with a memory device, is configured to receive a write request identifying data to be stored in a segment of the memory device. The processing device determines a write-to-write (W2W) time interval for the segment and determines whether the W2W time interval falls within a first W2W time interval range, where the first W2W time interval range corresponds to a first pre-read voltage level. Responsive to the W2W time interval for the segment falling within the first W2W time interval range, the processing device performs a pre-read operation on the segment using the first pre-read voltage level. The processing device identifies a subset of the data to be stored in the segment comprising bits of data that are different than corresponding bits of the data stored in the segment. The processing device further performs a write operation to store the subset of the data in the segment.
Type: Grant
Filed: January 30, 2020
Date of Patent: December 14, 2021
Assignee: Micron Technology, Inc.
Inventors: Ying Yu Tai, Jiangli Zhu
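
A sketch of the write path described here: the write-to-write interval selects a pre-read voltage, and only the bits that differ from the pre-read data are rewritten. The interval ranges and voltage values are invented for illustration; the abstract does not specify them:

```python
# Hedged model of W2W-interval-selected pre-read followed by a differential write.
import time

# assumed (interval upper bound in seconds, pre-read voltage) pairs
W2W_RANGES = [(1.0, 0.20), (60.0, 0.25), (float("inf"), 0.30)]

class Segment:
    def __init__(self, nbits):
        self.bits = [0] * nbits
        self.last_write = time.monotonic()

    def pre_read(self, voltage):
        return list(self.bits)             # the voltage would matter on real media

def write_segment(seg, data):
    interval = time.monotonic() - seg.last_write
    voltage = next(v for bound, v in W2W_RANGES if interval <= bound)
    stored = seg.pre_read(voltage)
    # write only the bits that differ from what the pre-read returned
    changed = [i for i, (old, new) in enumerate(zip(stored, data)) if old != new]
    for i in changed:
        seg.bits[i] = data[i]
    seg.last_write = time.monotonic()
    return voltage, changed
```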
-
Patent number: 11199967
Abstract: Techniques and devices for managing power consumption of a memory system using loopback are described. When a memory system is in a first state (e.g., a deactivated state), a host device may send a signal to change one or more components of the memory system to a second state (e.g., an activated state). The signal may be received by one or more memory devices, which may activate one or more components based on the signal. The one or more memory devices may send a second signal to a power management component, such as a power management integrated circuit (PMIC), using one or more techniques. The second signal may be received by the PMIC using a conductive path running between the memory devices and the PMIC. Based on receiving the second signal or some third signal that is based on the second signal, the PMIC may enter an activated state.
Type: Grant
Filed: March 1, 2019
Date of Patent: December 14, 2021
Assignee: Micron Technology, Inc.
Inventors: Thomas H. Kinsley, Matthew A. Prather
-
Patent number: 11196647
Abstract: A packet inspection system for monitoring the performance of one or more flows on a packet network comprises a processor and memory coupled to each other and to a network bus. The memory stores instructions to be executed by the processor and data to be modified by the execution of the instructions. A processor-controlled arbiter is coupled with the processor and the network bus, and upon reception of a packet on the bus or prior to transmission of a packet on the bus for one of said flows, the arbiter requests execution by the processor of selected instructions stored in the memory by providing the processor with the address of the selected instructions in the memory. The memory provides the processor with data associated with the selected instructions, and the processor modifies the data upon execution of the selected instructions.
Type: Grant
Filed: September 28, 2020
Date of Patent: December 7, 2021
Assignee: Accedian Networks Inc.
Inventor: Steve Rochon
-
Patent number: 11188239
Abstract: A Data Storage Device (DSD) includes a Non-Volatile Memory (NVM) for storing data. A processor of the DSD receives a command from a host to access data in the NVM, and performs the command to access data in the NVM. The DSD further includes a host-trusted module functionally isolated from at least a portion of the DSD. The host-trusted module is configured to receive an instruction from the host, and perform an operation based on the instruction. According to one aspect, the operation includes a predetermined atomic operation to modify data stored in the NVM.
Type: Grant
Filed: March 28, 2019
Date of Patent: November 30, 2021
Assignee: Western Digital Technologies, Inc.
Inventors: Shay Benisty, Alon Marcu, Judah G. Hahn
-
Patent number: 11157213
Abstract: An integrated circuit (IC) memory device encapsulated within an IC package. The memory device includes first memory regions configured to store lists of operands; a second memory region configured to store a list of results generated from the lists of operands; and at least one third memory region. A communication interface of the memory device can receive requests from an external processing device; and an arithmetic compute element matrix can access memory regions of the memory device in parallel. When the arithmetic compute element matrix is processing the lists of operands in the first memory regions and generating the list of results in the second memory region, the external processing device can simultaneously access the third memory region through the communication interface to load data into the third memory region, or retrieve results that have been previously generated by the arithmetic compute element matrix.
Type: Grant
Filed: October 12, 2018
Date of Patent: October 26, 2021
Assignee: Micron Technology, Inc.
Inventor: Gil Golov
-
Patent number: 11061820
Abstract: Optimizing access to page table entries in processor-based devices is disclosed. In this regard, an instruction decode stage of an execution pipeline of a processor-based device receives a memory access instruction including a virtual memory address. A page table walker circuit of the processor-based device determines, based on the memory access instruction, a number T of page table walk levels to traverse, where T is greater than zero (0) and less than or equal to a number of page table walk levels required to fully translate the virtual memory address. The page table walker next performs a page table walk of T page table walk levels of the multilevel page table, and identifies a physical memory address corresponding to a page table entry of the Tth page table walk level. The processor-based device then performs a memory access operation indicated by the memory access instruction using the physical memory address.
Type: Grant
Filed: August 30, 2019
Date of Patent: July 13, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventor: Thomas Philip Speier
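
The partial walk of T levels can be modelled briefly; the 4-level, 9-bits-per-level address split below is an assumption borrowed from common 64-bit layouts, not something stated in the abstract:

```python
# Hedged sketch of a page table walk limited to T levels.
LEVELS = 4
BITS_PER_LEVEL = 9
PAGE_SHIFT = 12

def partial_walk(page_tables, root, vaddr, t_levels):
    """page_tables maps (table_id, index) -> next table_id or physical frame."""
    assert 0 < t_levels <= LEVELS
    table = root
    entry = None
    for level in range(t_levels):
        shift = PAGE_SHIFT + BITS_PER_LEVEL * (LEVELS - 1 - level)
        index = (vaddr >> shift) & ((1 << BITS_PER_LEVEL) - 1)
        entry = page_tables[(table, index)]    # entry at this walk level
        table = entry
    return entry      # physical address recorded at the T-th level entry

# two-level walk over a toy table: root table 0 -> table 7 -> frame 0x5000
tables = {(0, 1): 7, (7, 2): 0x5000}
vaddr = (1 << 39) | (2 << 30)
print(hex(partial_walk(tables, root=0, vaddr=vaddr, t_levels=2)))  # 0x5000
```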
-
Patent number: 11055025
Abstract: A semiconductor memory device includes a memory device, a Read-Modify-Write (RMW) controller configured to generate a merge command corresponding to at least one command among a read command and a write command which are externally provided, to receive a processing result of the merge command, and to generate a response for the at least one command corresponding to the merge command. The semiconductor memory device further includes a memory controller configured to control the memory device by receiving the merge command and to provide the processing result of the merge command to the RMW controller.
Type: Grant
Filed: October 25, 2019
Date of Patent: July 6, 2021
Assignees: SK hynix Inc., Seoul National University R&DB Foundation
Inventors: Hyokeun Lee, Moonsoo Kim, Hyun Kim, Hyuk-Jae Lee
-
Patent number: 10983911
Abstract: In one embodiment, a method is operable in an over-provisioned storage device comprising a cache region and a main storage region. The method includes compressing incoming data, generating a compression parameter for the compressed data, and storing at least a portion of the compressed data in chunks in the main storage region of the storage device. The method also includes predicting when to store other chunks of the compressed data in the cache region based on the compression parameter.
Type: Grant
Filed: September 1, 2017
Date of Patent: April 20, 2021
Assignee: SEAGATE TECHNOLOGY LLC
Inventor: Andrew M. Kowles
-
Patent number: 10949346
Abstract: A data processing system includes a plurality of processing units and a system memory coupled to a memory controller. The system memory includes a persistent memory device and a non-persistent cache interposed between the memory controller and the persistent memory device. The memory controller receives a flush request of a particular processing unit among the plurality of processing units, the flush request specifying a target address. The memory controller, responsive to the flush request, ensures flushing of a target cache line of data identified by the target address from the non-persistent cache into the persistent memory device.
Type: Grant
Filed: November 8, 2018
Date of Patent: March 16, 2021
Assignee: International Business Machines Corporation
Inventors: Derek E. Williams, Guy L. Guthrie, John Dodson
-
Patent number: 10891084
Abstract: Aspects of the present disclosure relate to an interconnect comprising an interface to couple to a master device, the interface comprising buffer storage. The interface is configured to receive a request from the master device for data comprising a plurality of data blocks, the master device requiring the data blocks in a defined order. A data collator is configured to: receive the request; issue a data pull request to cause the interface to allocate buffer space in the buffer storage for buffering the requested data; and responsive to receiving a confirmation that the buffer space is allocated, provide the requested data to the buffer storage. The interface is configured to employ the buffer storage to enable re-ordering of the plurality of data blocks of the requested data, prior to outputting the plurality of data blocks to the master device; and output the plurality of data blocks to the master device in the defined order.
Type: Grant
Filed: March 14, 2019
Date of Patent: January 12, 2021
Assignee: Arm Limited
Inventors: Alex James Waugh, Geoffray Mattheiu Lacourba, Andrew John Turner, Sergio Schuler
-
Patent number: 10877698
Abstract: A semiconductor device may include a media controller configured to output a write-requested address when a write request to a nonvolatile memory device is provided from a host; and a cold address manager. The cold address manager may include a stack storing metadata for the write-requested address; region information storage configured to manage addresses of the nonvolatile memory device as regions, such that the length of a region may vary after a predetermined period; and a cold address detector configured to update the stack and the region information storage after the predetermined period and to detect whether an address of the nonvolatile memory device is a cold address, a cold address being one for which write requests are performed at less than a predetermined level.
Type: Grant
Filed: June 6, 2019
Date of Patent: December 29, 2020
Assignees: SK hynix Inc., Seoul National University R&DB Foundation
Inventors: Hyunmin Jung, Sunwoong Kim, Hyokeun Lee, Woojae Shin, Hyuk-Jae Lee
-
Patent number: 10838859
Abstract: Methods and apparatus for controlling garbage collection in solid state devices (SSDs) are provided. One such apparatus includes a non-volatile memory (NVM), and a controller communicatively coupled to a host device and the NVM, and configured to calculate an invalidation factor for each of a plurality of blocks in the NVM, wherein the invalidation factor is determined based on a percentage of invalid pages in a respective block of the plurality of blocks and a most recent time of invalidation of one or more pages in the respective block; classify each block of the plurality of blocks into one of three categories based on the calculated invalidation factor; and perform a garbage collection operation for the NVM, wherein the garbage collection operation includes selecting a source block for the garbage collection operation based on the classifications of the plurality of blocks.
Type: Grant
Filed: September 25, 2018
Date of Patent: November 17, 2020
Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
Inventors: Vishwas Saxena, Abhijit K. Rao
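
The abstract names the inputs to the invalidation factor (invalid-page percentage and most recent invalidation time) and says blocks fall into three categories, but gives neither the formula nor the cut-offs; the sketch below fills those in with assumed values purely to show the shape of the flow:

```python
# Hedged sketch of invalidation-factor-based GC source selection; the formula and
# category thresholds are assumptions, not the patent's.
import time

def invalidation_factor(block, now=None):
    """Combine invalid-page percentage with time since last invalidation (assumed weighting)."""
    now = time.time() if now is None else now
    invalid_pct = block["invalid_pages"] / block["total_pages"]
    hours_idle = (now - block["last_invalidation"]) / 3600.0
    return invalid_pct * min(hours_idle, 1.0)   # favour blocks that are mostly invalid and cold

def classify(factor):
    """Assumed three-way split; the abstract only says three categories exist."""
    if factor >= 0.6:
        return "prime"       # preferred GC sources
    if factor >= 0.2:
        return "candidate"
    return "skip"

def pick_gc_source(blocks, now=None):
    scored = [(invalidation_factor(b, now), b) for b in blocks]
    preferred = [(f, b) for f, b in scored if classify(f) == "prime"]
    pool = preferred or scored
    return max(pool, key=lambda fb: fb[0])[1]   # source block for garbage collection
```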
-
Patent number: 10706014
Abstract: Metadata of each file of a group of files of a storage and chunk file metadata are analyzed to identify one or more file segment data chunks that are not referenced by the group of files of the storage. Fragmented chunk files to be combined together are identified based at least in part on the one or more identified file segment data chunks. The chunk file metadata is updated with an update that concurrently reflects the removal of at least a portion of the one or more file segment data chunks that are not referenced by the group of files and the combination of the identified fragmented chunk files.
Type: Grant
Filed: February 19, 2019
Date of Patent: July 7, 2020
Assignee: Cohesity, Inc.
Inventors: Anubhav Gupta, Anirvan Duttagupta
-
Patent number: 10678436
Abstract: A storage system performs garbage collection with data compression. A storage controller in the storage system determines a garbage collection directive by evaluating the amount of reclaimable space relative to a target amount of reclaimable space. Garbage collection is performed using data compression whose aggressiveness is tunable according to the garbage collection directive.
Type: Grant
Filed: May 29, 2018
Date of Patent: June 9, 2020
Assignee: Pure Storage, Inc.
Inventors: Yanwei Jiang, Aswin Karumbunathan, Naveen Neelakantam, Kiron Vijayasankar, Bo Feng, Joern Engel
-
Patent number: 10607694
Abstract: A memory system includes a memory device comprising first to Nth memory regions, wherein N is a natural number equal to or more than 2, and a memory controller suitable for checking numbers of first logic level data which are contained in first to Nth data groups to be written to the memory device, respectively, and writing the first to Nth data groups to the first to Nth memory regions in order based on the checked numbers.
Type: Grant
Filed: September 7, 2018
Date of Patent: March 31, 2020
Assignee: SK hynix Inc.
Inventors: Seung-Gyu Jeong, Won-Gyu Shin, Jung-Hyun Kwon, Do-Sun Hong
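
A toy illustration of writing N data groups to N regions in an order derived from how many bits of the "first logic level" each group contains; treating the first logic level as 0 and sorting in descending order are both assumptions, since the abstract does not specify them:

```python
# Hedged sketch: place data groups into regions ordered by their counted 0-bits.
def count_first_level_bits(group):
    # number of 0-bits across all bytes (the "first logic level" is assumed to be 0)
    return sum(8 - bin(byte).count("1") for byte in group)

def write_groups_in_order(groups, regions):
    # rank group indices by their checked numbers, then map them onto regions in order
    ranked = sorted(range(len(groups)),
                    key=lambda i: count_first_level_bits(groups[i]),
                    reverse=True)
    return {region: groups[idx] for region, idx in zip(regions, ranked)}

groups = [bytes([0xFF, 0xF0]), bytes([0x00, 0x01]), bytes([0x0F, 0x33])]
print(write_groups_in_order(groups, ["region1", "region2", "region3"]))
```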
-
Patent number: 10503642
Abstract: A data processing method includes allocating a tag entry in a tag array for a data block; allocating a data entry in a data array for the data block when the data block is actively shared; and de-allocating the data entry when the data block is temporarily private or gets evicted in the data array.
Type: Grant
Filed: August 25, 2017
Date of Patent: December 10, 2019
Assignees: Huawei Technologies Co., Ltd., National University of Singapore
Inventors: Yuan Yao, Tulika Mitra, Zhiguo Ge, Naxin Zhang
-
Patent number: 10482337
Abstract: Convolutional neural network (CNN) components can operate to provide various speed-ups to improve upon or operate as part of an artificial neural network (ANN). A convolution component performs convolution operations that extract data from one or more images, and provides the data to one or more rectified linear units (RELUs). The RELUs are configured to generate non-linear convolution output data. A pooling component generates pooling outputs in parallel with the convolution operations via a pipelining process based on a pooling window for a subset of the non-linear convolution output data. A fully connected (FC) component is configured to form an artificial neural network (ANN) that provides ANN outputs based on the pooling outputs and enables recognition of a pattern in the one or more images based on the ANN outputs. Layers of the FC component are also able to operate in parallel in another pipelining process.
Type: Grant
Filed: September 29, 2017
Date of Patent: November 19, 2019
Assignee: Infineon Technologies AG
Inventor: Prakash Kalanjeri Balasubramanian
-
Patent number: 10459850
Abstract: Systems, apparatuses, and methods for implementing virtualized process isolation are disclosed. A system includes a kernel and multiple guest virtual machines (VMs) executing on the system's processing hardware. Each guest VM includes a vShim layer for managing kernel accesses to user space and guest accesses to kernel space. The vShim layer also maintains a set of page tables separate from the kernel page tables. In one embodiment, data in the user space is encrypted and the kernel goes through the vShim layer to access user space data. When the kernel attempts to access a user space address, the kernel exits and the vShim layer is launched to process the request. If the kernel has permission to access the user space address, the vShim layer copies the data to a region in kernel space and then returns execution to the kernel. The vShim layer prevents the kernel from accessing the user space address if the kernel does not have permission to access the user space address.
Type: Grant
Filed: September 20, 2016
Date of Patent: October 29, 2019
Assignee: Advanced Micro Devices, Inc.
Inventor: David A. Kaplan
-
Patent number: 10447316
Abstract: Apparatuses and methods for pipelining memory operations with error correction coding are disclosed. A method for pipelining consecutive write mask operations is disclosed wherein a second read operation of a second write mask operation occurs during error correction code calculation of a first write mask operation. The method may further include writing data from the first write mask operation during the error correction code calculation of the second write mask operation. A method for pipelining consecutive operations is disclosed where a first read operation may be cancelled if the first operation is not a write mask operation. An apparatus including a memory having separate global read and write input-output lines is disclosed.
Type: Grant
Filed: December 19, 2014
Date of Patent: October 15, 2019
Assignee: Micron Technology, Inc.
Inventors: Wei Bing Shang, Yu Zhang, Hong Wen Li, Yu Peng Fan, Zhong Lai Liu, En Peng Gao, Liang Zhang
-
Patent number: 10445242
Abstract: The present disclosure relates to caches, methods, and systems for using an invalidation data area. The cache can include a journal configured for tracking data blocks, and an invalidation data area configured for tracking invalidated data blocks associated with the data blocks tracked in the journal. The invalidation data area can be on a separate cache region from the journal. A method for invalidating a cache block can include determining a journal block tracking a memory address associated with a received write operation. The method can also include determining a mapped journal block based on the journal block and on an invalidation record. The method can also include determining whether write operations are outstanding. If so, the method can include aggregating the outstanding write operations and performing a single write operation based on the aggregated write operations.
Type: Grant
Filed: November 21, 2016
Date of Patent: October 15, 2019
Assignee: Western Digital Technologies, Inc.
Inventor: Pulkit Misra
-
Patent number: 10423215
Abstract: Methods and apparatus for adaptive power profiling in a baseband processing system. In an exemplary embodiment, an apparatus includes one or more processing engines. Each processing engine performs at least one data processing function. The apparatus also includes an adaptive power profile (APP) and a job manager that receives job requests for data processing. The job manager allocates the data processing associated with the job requests to the processing engines based on the adaptive power profile. The adaptive power profile identifies a first group of the processing engines to perform the data processing associated with the job requests, and identifies remaining processing engines to be set to a low power mode.
Type: Grant
Filed: May 15, 2017
Date of Patent: September 24, 2019
Assignee: Cavium, LLC
Inventors: Kalyana S. Venkataraman, Gregg A. Bouchard, Eric Marenger, Ahmed Shahid
-
Patent number: 10402937
Abstract: A method for rendering graphics frames allocates rendering work to multiple graphics processing units (GPUs) that are configured to allow access to pages of data stored in locally attached memory of a peer GPU. The method includes the steps of generating, by a first GPU coupled to a first memory circuit, one or more first memory access requests to render a first primitive for a first frame, where at least one of the first memory access requests targets a first page of data that physically resides within a second memory circuit coupled to a second GPU. The first GPU requests the first page of data through a first data link coupling the first GPU to the second GPU, and a register circuit within the first GPU accumulates an access request count for the first page of data. The first GPU notifies a driver that the access request count has reached a specified threshold.
Type: Grant
Filed: December 28, 2017
Date of Patent: September 3, 2019
Assignee: NVIDIA Corporation
Inventors: Rouslan L. Dimitrov, Kirill A. Dmitriev, Andrei Khodakovsky, Tzyywei Hwang, Wishwesh Anil Gandhi, Lacky Vasant Shah
-
Patent number: 10394487
Abstract: A memory system may include: a memory device including memory blocks, each memory block including pages, each page including memory cells which are coupled to a word line for storing data; and a controller including a memory, the controller receiving a write command and a read command from a host, storing write data corresponding to the write command in the memory, transmitting and storing the write data stored in the memory to and in at least one first memory device buffer coupled to a first memory block in a page of which the write data are to be stored, reading read data corresponding to the read command from a page of a second memory block, storing the read data in at least one second memory device buffer coupled to the second memory block, and storing the read data stored in the second memory device buffer, in the memory.
Type: Grant
Filed: February 22, 2017
Date of Patent: August 27, 2019
Assignee: SK hynix Inc.
Inventor: Jong-Min Lee
-
Patent number: 10331559
Abstract: Exemplary methods, apparatuses, and systems include a first input/output (I/O) filter receiving, from a first filter module within a virtualization stack of a host computer, an input/output (I/O) request originated by a virtual machine and directed to a first virtual disk. The first I/O filter determines to redirect the I/O request to a second virtual disk and, in response, forwards the I/O request to a second I/O filter associated with the second virtual disk. The first I/O filter is a part of a first instance of a filter framework within the host computer and the second I/O filter is part of a second, separate instance of the filter framework.
Type: Grant
Filed: August 27, 2015
Date of Patent: June 25, 2019
Assignee: VMware, Inc.
Inventors: Christoph Klee, Adrian Drzewiecki, Aman Nijhawan
-
Patent number: 10297298
Abstract: Apparatuses and methods for providing internal clock signals of different clock frequencies in a semiconductor device are described in the present application. An example apparatus includes a read command buffer and a read data output circuit. The read command buffer buffers a read command responsive to a first clock signal and provides the read command responsive to a second clock signal. The read data output circuit receives a plurality of bits of data in parallel when activated by the read command from the read command buffer, and provides the plurality of bits of data serially responsive to input/output (IO) clock signals. A data clock timing circuit provides the IO clock signals having a first clock frequency in a first mode and having a second clock frequency in a second mode, and further provides the second clock signal having the first clock frequency in the first and second modes.
Type: Grant
Filed: October 11, 2017
Date of Patent: May 21, 2019
Assignee: Micron Technology, Inc.
Inventor: Jens Polney
-
Patent number: 10198373
Abstract: Disclosed aspects relate to a computer system having a plurality of processor chips and a plurality of memory buffer chips and a methodology for operating the computer system. The memory buffer chips may be communicatively coupled to at least one memory module which can be configured for storing memory lines and assigned to the memory buffer chip. The processor chips can include a cache configured for caching memory lines. The processor chips may be communicatively coupled to the memory buffer chips via a memory-buffer-chip-specific bidirectional serial point-to-point communication connection. The processor chips can be configured for transferring memory lines between the cache of the processor chip and the memory modules via the respective memory-buffer-chip-specific bidirectional serial point-to-point communication connection.
Type: Grant
Filed: February 15, 2018
Date of Patent: February 5, 2019
Assignee: International Business Machines Corporation
Inventor: Burkhard Steinmacher-Burow
-
Patent number: 10114557
Abstract: Systems, methods and/or devices are used to enable identification of hot regions to enhance performance and endurance of a non-volatile storage device. In one aspect, the method includes (1) receiving a plurality of input/output (I/O) requests to be performed in a plurality of regions in a logical address space of a host, and (2) performing one or more operations for each region of the plurality of regions in the logical address space of the host, including (a) determining whether the region is accessed by the plurality of I/O requests more than a predetermined threshold number of times during a predetermined time period, (b) if so, marking the region with a hot region indicator, and (c) while the region is marked with the hot region indicator, identifying open blocks associated with the region, and marking each of the identified open blocks with a hot block indicator.
Type: Grant
Filed: July 3, 2014
Date of Patent: October 30, 2018
Assignee: SANDISK TECHNOLOGIES LLC
Inventors: Dharani Kotte, Akshay Mathur, Chayan Biswas, Sumant K. Patro
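
A sketch of the hot-region bookkeeping outlined in steps (a) through (c); the threshold, window length, and data structures are invented for the example and are not taken from the abstract:

```python
# Hedged model of hot-region and hot-block marking per access-count window.
import time
from collections import defaultdict

HOT_THRESHOLD = 100        # accesses per window (assumed)
WINDOW_SECONDS = 60.0      # assumed predetermined time period

class HotRegionTracker:
    def __init__(self):
        self.counts = defaultdict(int)
        self.window_start = time.monotonic()
        self.hot_regions = set()

    def record_io(self, region, open_blocks_by_region):
        if time.monotonic() - self.window_start > WINDOW_SECONDS:
            self.counts.clear()                       # start a new counting window
            self.window_start = time.monotonic()
        self.counts[region] += 1
        if self.counts[region] > HOT_THRESHOLD:
            self.hot_regions.add(region)              # mark the region hot
            for block in open_blocks_by_region.get(region, []):
                block["hot"] = True                   # mark its open blocks hot
```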
-
Patent number: 10108569
Abstract: In one embodiment, a computer-implemented method includes assigning a time budget to each of a plurality of virtual functions in a single-root input/output virtualization (SRIOV) environment, where a first time budget of a first virtual function indicates a quantity of cycles on an engine of the SRIOV environment allowed to the first virtual function within a time slice. A plurality of requests issued by the plurality of virtual functions are selected by a computer processor, where the selecting excludes requests issued by virtual functions that have used their associated time budgets of cycles in the current time slice. The selected plurality of requests are delivered to the engine for processing. The time budgets of the virtual functions are reset, and a new time slice begins, at the end of the current time slice.
Type: Grant
Filed: September 3, 2015
Date of Patent: October 23, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Mark A. Check, Vincenzo Condorelli, Nihad Hadzic, William Santiago Fernandez
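
The per-virtual-function budgeting can be modelled with a small scheduler; the budget numbers and the per-request cycle estimates below are assumptions for illustration only:

```python
# Hedged sketch of time-budgeted request selection per time slice.
class TimeSliceScheduler:
    def __init__(self, budgets):
        self.budgets = dict(budgets)       # vf_id -> cycles allowed per time slice
        self.used = {vf: 0 for vf in budgets}

    def select(self, requests):
        """requests: list of (vf_id, estimated_cycles); returns those to deliver."""
        selected = []
        for vf, cycles in requests:
            if self.used[vf] + cycles <= self.budgets[vf]:   # still within budget
                self.used[vf] += cycles
                selected.append((vf, cycles))
        return selected                    # delivered to the engine for processing

    def end_of_slice(self):
        self.used = {vf: 0 for vf in self.used}              # reset budgets for the new slice

sched = TimeSliceScheduler({0: 1000, 1: 500})
print(sched.select([(0, 400), (1, 600), (0, 700)]))   # only the first VF0 request fits its budget
sched.end_of_slice()
```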
-
Patent number: 10102165
Abstract: In one embodiment, a computer-implemented method includes assigning a time budget to each of a plurality of virtual functions in a single-root input/output virtualization (SRIOV) environment, where a first time budget of a first virtual function indicates a quantity of cycles on an engine of the SRIOV environment allowed to the first virtual function within a time slice. A plurality of requests issued by the plurality of virtual functions are selected by a computer processor, where the selecting excludes requests issued by virtual functions that have used their associated time budgets of cycles in the current time slice. The selected plurality of requests are delivered to the engine for processing. The time budgets of the virtual functions are reset, and a new time slice begins, at the end of the current time slice.
Type: Grant
Filed: November 25, 2014
Date of Patent: October 16, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Mark A. Check, Vincenzo Condorelli, Nihad Hadzic, William Santiago Fernandez
-
Patent number: 9880764
Abstract: Systems, methods, and computer readable media are disclosed. A map including the number of dirty cache pages stored in the flash disk cache for each VLUN of the plurality of VLUNs on the storage system is maintained by the storage system. A flash disk cache error requiring the storage system to take the flash disk cache offline is detected. In response to detecting the flash disk cache error, a first one or more VLUNs of the plurality of VLUNs with at least one dirty cache page stored in the flash disk cache are identified by the storage system based on the map. The first one or more VLUNs are taken offline by the storage system. The flash disk cache is taken offline by the storage system. A second one or more VLUNs, comprising VLUNs of the plurality of VLUNs without dirty cache pages stored in the flash disk cache, are maintained online by the storage system.
Type: Grant
Filed: March 30, 2016
Date of Patent: January 30, 2018
Assignee: EMC IP Holding Company LLC
Inventors: Xinlei Xu, Jian Gao, Lifeng Yang, Geng Han, Jibing Dong, Lili Chen
-
Patent number: 9875039
Abstract: Apparatus and method for performing wear leveling are disclosed. An ordered list of references to each of a set of memory blocks is stored. A set of memory blocks in the ordered list is sequentially allocated. The allocated memory blocks are then erased in the sequence in which they were allocated.
Type: Grant
Filed: April 30, 2015
Date of Patent: January 23, 2018
Assignee: SanDisk Technologies LLC
Inventors: Chetan Agrawal, Dinesh Agarwal, Vijay Sivasankaran
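
A minimal sketch of the ordered-list policy in this abstract, using FIFO queues as an assumed representation of the ordered list of block references:

```python
# Hedged sketch: allocate blocks sequentially from an ordered list, erase in allocation order.
from collections import deque

class WearLeveler:
    def __init__(self, block_ids):
        self.free_list = deque(block_ids)   # ordered list of block references
        self.allocated = deque()            # preserves allocation order

    def allocate(self):
        block = self.free_list.popleft()    # sequential allocation from the ordered list
        self.allocated.append(block)
        return block

    def erase_next(self):
        block = self.allocated.popleft()    # erase in the sequence blocks were allocated
        self.free_list.append(block)        # erased block becomes available again
        return block
```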
-
Patent number: 9854072
Abstract: An egress packet modifier includes a script parser and a pipeline of processing stages. Rather than performing egress modifications using a processor that fetches and decodes and executes instructions in a classic processor fashion, and rather than storing a packet in memory and reading it out and modifying it and writing it back, the packet modifier pipeline processes the packet by passing parts of the packet through the pipeline. A processor identifies particular egress modifications to be performed by placing a script code at the beginning of the packet. The script parser then uses the code to identify a specific script of opcodes, where each opcode defines a modification. As a part passes through a stage, the stage can carry out the modification of such an opcode. As realized using current semiconductor fabrication process, the packet modifier can modify 200M packets/second at a sustained rate of up to 100 gigabits/second.
Type: Grant
Filed: August 4, 2015
Date of Patent: December 26, 2017
Assignee: Netronome Systems, Inc.
Inventors: Chirag P. Patel, Gavin J. Stark
-
Patent number: 9830281
Abstract: A semiconductor device includes a first memory controller configured to output a first control signal to first and second external memories through a first memory interface, a second memory controller configured to output a second control signal to the second external memory through a second memory interface, an inter-device interface for communicating with another semiconductor device, terminals configured to output the second control signal that has passed through the second memory interface, and a first selector configured to select between the second memory interface and the inter-device interface in accordance with an operation mode of the semiconductor device and to couple the selected interface to the terminals.
Type: Grant
Filed: November 19, 2013
Date of Patent: November 28, 2017
Assignee: Renesas Electronics Corporation
Inventors: Kenichiro Omura, Ryohei Yoshida, Takanobu Naruse, Seiichi Saito
-
Patent number: 9818462
Abstract: Apparatuses and methods for providing internal clock signals of different clock frequencies in a semiconductor device are described in the present application. An example apparatus includes a read command buffer and a read data output circuit. The read command buffer buffers a read command responsive to a first clock signal and provides the read command responsive to a second clock signal. The read data output circuit receives a plurality of bits of data in parallel when activated by the read command from the read command buffer, and provides the plurality of bits of data serially responsive to input/output (IO) clock signals. A data clock timing circuit provides the IO clock signals having a first clock frequency in a first mode and having a second clock frequency in a second mode, and further provides the second clock signal having the first clock frequency in the first and second modes.
Type: Grant
Filed: January 19, 2017
Date of Patent: November 14, 2017
Assignee: Micron Technology, Inc.
Inventor: Jens Polney
-
Patent number: 9812177
Abstract: A circuit includes a first latch for generating a first latched signal; and a first comparator for comparing the first latched signal and a write address, and generating a first comparator signal. The circuit includes a first logic circuit for receiving the first comparator signal and a fourth latched signal, and generating a first logic circuit output signal; and a second latch for receiving the first logic circuit output signal and generating a second latched signal. The circuit includes a third latch for generating a third latched signal; and a second comparator for comparing the third latched signal and a read address, and generating a second comparator signal. The circuit includes a second logic circuit for receiving the second comparator signal and the second latched signal, and generating a second logic circuit signal; and a fourth latch for receiving the second logic circuit signal and generating the fourth latched signal.
Type: Grant
Filed: August 11, 2015
Date of Patent: November 7, 2017
Assignee: TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY, LTD.
Inventors: Bing Wang, Kuoyuan (Peter) Hsu
-
Patent number: 9804842
Abstract: An apparatus and method for efficiently managing the architectural state of a processor.
Type: Grant
Filed: December 23, 2014
Date of Patent: October 31, 2017
Assignee: INTEL CORPORATION
Inventors: Jesus Corbal San Adrian, Dennis R. Bradford, Benjamin C. Chaffin, Taraneh Bahrami, Jonathan C. Hall, Thomas B. Maciukenas, Roger Gramunt, Rohan Sharma
-
Patent number: 9734063
Abstract: A computing system that uses a Scale-Out NUMA ("soNUMA") architecture, programming model, and/or communication protocol provides for low-latency, distributed in-memory processing. Using soNUMA, a programming model is layered directly on top of a NUMA memory fabric via a stateless messaging protocol. To facilitate interactions between the application, OS, and the fabric, soNUMA uses a remote memory controller, an architecturally-exposed hardware block integrated into the node's local coherence hierarchy.
Type: Grant
Filed: February 27, 2015
Date of Patent: August 15, 2017
Assignee: ÉCOLE POLYTECHNIQUE FÉDÉRALE DE LAUSANNE (EPFL)
Inventors: Stanko Novakovic, Alexandros Daglis, Boris Robert Grot, Edouard Bugnion, Babak Falsafi
-
Patent number: 9645924
Abstract: A computer processor determines an over-provisioning ratio and a host write pattern. The computer processor determines a write amplification target based on the host write pattern and the over-provisioning ratio. The computer processor determines a staleness threshold, wherein the staleness threshold corresponds to a ratio of valid pages of a block to total pages of the block. The computer processor erases a first block having a staleness which exceeds the staleness threshold.
Type: Grant
Filed: December 16, 2013
Date of Patent: May 9, 2017
Assignee: International Business Machines Corporation
Inventors: Timothy J. Fisher, Aaron D. Fry, Samuel K. Ingram, Lincoln T. Simmons
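
The abstract names the quantities involved (over-provisioning ratio, host write pattern, write amplification target, staleness threshold) but not how they relate; the sketch below wires them together with assumed formulas purely to show the shape of the flow, and interprets staleness as the invalid-page fraction of a block:

```python
# Hedged sketch: derive a staleness threshold from a write-amplification target, then pick blocks.
def write_amplification_target(over_provisioning, sequential_fraction):
    # assumed heuristic: more over-provisioning and more sequential writes allow a lower WA target
    return max(1.0, 3.0 - 2.0 * over_provisioning - sequential_fraction)

def staleness_threshold(wa_target):
    # assumed mapping from the write-amplification target to a page-ratio threshold
    return 1.0 / wa_target

def blocks_to_erase(blocks, threshold):
    # staleness is interpreted here as the invalid-page fraction of a block
    out = []
    for b in blocks:
        staleness = 1.0 - b["valid_pages"] / b["total_pages"]
        if staleness > threshold:
            out.append(b["id"])
    return out

wa = write_amplification_target(over_provisioning=0.28, sequential_fraction=0.5)
print(blocks_to_erase([{"id": 1, "valid_pages": 10, "total_pages": 64},
                       {"id": 2, "valid_pages": 60, "total_pages": 64}],
                      staleness_threshold(wa)))    # prints [1]
```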
-
Patent number: 9626260
Abstract: A read/write cache device and method persistent in the event of a power failure are disclosed herein. The read/write cache device includes a meta-information part, a recency/frequency (RF) table part, a mapping table part, and a log area. The meta-information part provides information about whether metadata has integrity and information about the version of metadata stored in two metadata regions. The RF table part provides information about the recency and frequency of each of the low-speed segments among a plurality of high-speed and low-speed segments and information about whether each of the low-speed segments is cached, in order to maintain the consistency of the metadata. The mapping table part provides information about a low-speed segment that is cached to each of the high-speed segments. The log area provides changed caching information that is not applied into the mapping table part.
Type: Grant
Filed: May 28, 2015
Date of Patent: April 18, 2017
Assignee: JUNGWON UNIVERSITY INDUSTRY ACADEMY COOPERATION CORPS.
Inventor: Sung Hoon Baek
-
Patent number: 9558821
Abstract: Provided are a resistive memory device and a method of operating the resistive memory device. The method of operating the resistive memory device includes performing a pre-read operation on memory cells in response to a write command; performing an erase operation on one or more first memory cells on which a reset write operation is to be performed, determined based on a result of comparing pre-read data from the pre-read operation with write data; and performing set-direction programming on at least some memory cells from among the erased one or more first memory cells and on one or more second memory cells on which a set write operation is to be performed.
Type: Grant
Filed: March 12, 2015
Date of Patent: January 31, 2017
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hyun-Kook Park, Dae-Seok Byeon, Yeong-Taek Lee, Hyo-Jin Kwon, Yong-Kyu Lee
-
Patent number: 9558796
Abstract: Enhanced memory circuits are described that maintain coherency between concurrent memory reads and writes in a pipelined memory architecture. The described memory circuits can maintain data coherency regardless of the amount of pipelining applied to the memory inputs and/or outputs. Moreover, these memory circuits may be implemented as dedicated hard circuits in a field programmable gate array (FPGA) or other programmable logic device (PLD), and can be supplemented with user-configurable logic to achieve coherency in a variety of applications.
Type: Grant
Filed: October 28, 2014
Date of Patent: January 31, 2017
Assignee: Altera Corporation
Inventors: Carl Ebeling, Pohrong Rita Chu
-
Patent number: 9535842
Abstract: Each computing node of a distributed computing system may implement a hardware mechanism at the network interface for message-driven prefetching of application data. For example, a parallel data-intensive application that employs function shipping may distribute respective portions of a large data set to main memory on multiple computing nodes. The application may send messages to one of the computing nodes referencing data that is stored locally on the node. For each received message, the network interface on the recipient node may extract the reference, initiate the prefetching of referenced data into a local cache (e.g., an LLC), and then store the message for subsequent interpretation and processing by a local processor core. When the processor core retrieves a stored message for processing, the referenced data may already be in the LLC, avoiding a CPU stall while retrieving it from memory. The hardware mechanism may be configured via software.
Type: Grant
Filed: August 28, 2014
Date of Patent: January 3, 2017
Assignee: Oracle International Corporation
Inventors: Herbert D. Schwetman, Jr., Mohammad Arslan Zulfiqar, Pranay Koka
-
Patent number: 9513830
Abstract: The disclosed embodiments are directed to methods and apparatuses for providing efficient and enhanced protection of data stored in a nonvolatile memory system. The methods and apparatuses involve a system controller for a plurality of nonvolatile memory devices in the nonvolatile memory system that is capable of protecting data using two layers of data protection, including inter-card card stripes and intra-card page stripes.
Type: Grant
Filed: October 12, 2015
Date of Patent: December 6, 2016
Assignee: International Business Machines Corporation
Inventors: Holloway H. Frost, Charles J. Camp, Kenneth Scianna, Lance W. Shelton
-
Patent number: 9448946
Abstract: Systems, methods and/or devices are used to enable a stale data mechanism. In one aspect, the method includes (1) receiving a write command specifying a logical address to which to write, (2) determining whether a stale flag corresponding to the logical address is set, (3) in accordance with a determination that the stale flag is not set, setting the stale flag and releasing the write command to be processed, and (4) in accordance with a determination that the stale flag is set, detecting an overlap, wherein the overlap indicates two or more outstanding write commands are operating on the same memory space.
Type: Grant
Filed: July 15, 2014
Date of Patent: September 20, 2016
Assignee: SANDISK TECHNOLOGIES LLC
Inventors: James M. Higgins, Theron W. Virgin
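
A compact model of the stale-flag handshake in steps (1) through (4); the data structures and the completion hook are assumptions for illustration, since the abstract only describes the flag check itself:

```python
# Hedged sketch of stale-flag checking and overlap detection for write commands.
class StaleDataTracker:
    def __init__(self):
        self.stale_flags = set()        # logical addresses with an outstanding write

    def on_write_command(self, lba):
        if lba in self.stale_flags:
            return "overlap"            # two or more outstanding writes to this address
        self.stale_flags.add(lba)       # set the stale flag and release the write
        return "released"

    def on_write_complete(self, lba):   # assumed hook: clear the flag once the write finishes
        self.stale_flags.discard(lba)

t = StaleDataTracker()
assert t.on_write_command(0x100) == "released"
assert t.on_write_command(0x100) == "overlap"
t.on_write_complete(0x100)
```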