Abstract: An apparatus in one embodiment comprises at least one processing device. The at least one processing device is configured to monitor performance of respective ones of a plurality of paths for accessing a logical storage device, and responsive to detection of at least one specified condition in the monitored performance relating to at least a subset of the paths, to move at least one application from a first container that utilizes a first access protocol to access the logical storage device to a second container that utilizes a second access protocol different than the first access protocol to access the logical storage device. For example, in some embodiments, the at least one processing device is configured to move an application from a first container that utilizes a SCSI access protocol to a second container that utilizes an NVMe access protocol, and vice versa, responsive to detected performance issues.
Type:
Grant
Filed:
December 8, 2020
Date of Patent:
August 9, 2022
Assignee:
EMC IP Holding Company LLC
Inventors:
Amit Pundalik Anchi, Sanjib Mallick, Vinay G. Rao, Arieh Don
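The protocol-switching decision described in this abstract can be illustrated with a minimal sketch. The names (`choose_protocol`, `LATENCY_THRESHOLD_MS`) and the majority-of-slow-paths heuristic are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch: pick the access protocol for an application's container
# based on monitored per-path latencies. Threshold and decision rule assumed.
LATENCY_THRESHOLD_MS = 5.0

def choose_protocol(current_protocol, path_latencies_ms):
    """Return the protocol the application's container should use.

    If a majority of the monitored paths for the current protocol exceed
    the latency threshold, switch to the alternate protocol (SCSI <-> NVMe);
    otherwise keep the current one.
    """
    slow = sum(1 for ms in path_latencies_ms if ms > LATENCY_THRESHOLD_MS)
    if slow > len(path_latencies_ms) // 2:
        return "nvme" if current_protocol == "scsi" else "scsi"
    return current_protocol
```

A real implementation would also migrate the application's container rather than merely choosing a label, but the trigger condition has this shape.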
Abstract: An object of the present invention is to provide a technique capable of estimating a user without making the user aware of the estimation. The appliance has a first sensor input unit that receives a plurality of control commands for controlling an operation, and estimates the user based on the control commands input to the first sensor input unit and operation feature quantities captured at the time each control command is input. The first sensor input unit includes a touch panel. The operation feature quantities include the operation position on the touch panel, an electrostatic capacitance value corresponding to a press of the touch panel, and their time-dependent change patterns.
Abstract: Embodiments of the present disclosure relate to a method for managing backup data, an electronic device, and a computer program product. This method comprises: determining a number of times of space recycling operations that have been executed on a backup data block; determining, based on the number of times, a current popularity of the backup data block, the current popularity at least indicating a probability that the backup data block will be recycled in a to-be-executed space recycling operation; and moving, based on a determination that a former storage area where the backup data block is located does not correspond to the current popularity, the backup data block to a target storage area corresponding to the popularity. In this way, data with similar popularity can be managed in a more centralized manner, thereby reducing data rewriting caused by subsequent space recycling.
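The popularity-based placement in this abstract reduces to a classify-then-move step. The following sketch is a loose illustration; the tier names, threshold, and data layout (`plan_moves`, `hot_threshold`) are assumptions, not details from the patent:

```python
def target_area(recycle_count, hot_threshold=3):
    """Map the number of space-recycling operations already executed on a
    backup data block to a target storage area (threshold is hypothetical)."""
    return "hot-area" if recycle_count >= hot_threshold else "cold-area"

def plan_moves(blocks):
    """blocks: {block_id: (recycle_count, current_area)}.

    Return (block_id, new_area) for every block whose current storage area
    does not match its current popularity, mirroring the move step above.
    """
    return [(block_id, target_area(count))
            for block_id, (count, area) in blocks.items()
            if area != target_area(count)]
```

Grouping blocks of similar popularity this way is what lets a later space-recycling pass touch fewer mixed areas and rewrite less data.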
Abstract: Various implementations described herein refer to a method for providing single port memory with a bitcell array arranged in columns and rows. The method may include coupling a wordline to the single port memory including coupling the wordline to the columns of the bitcell array. The method may include performing multiple memory access operations concurrently in the single port memory including performing a read operation in one column of the bitcell array using the wordline while performing a write operation in another column of the bitcell array using the wordline, or performing a write operation in one column of the bitcell array using the wordline while performing a read operation in another column of the bitcell array using the same wordline.
Type:
Grant
Filed:
October 12, 2019
Date of Patent:
July 12, 2022
Assignee:
Arm Limited
Inventors:
Lalit Gupta, Nicolaas Klarinus Johannes Van Winkelhoff, Bo Zheng, El Mehdi Boujamaa, Fakhruddin Ali Bohra
Abstract: A dynamic random access memory (DRAM) includes first and second data buses, and first and second command and address (C/A) buses. The first data bus conveys a write data to the DRAM. The second data bus conveys read data from the DRAM. The first and second C/A buses are respectively associated with the first and second data buses. In one embodiment, the first data bus conveys the write data to a first bank of memory of the DRAM simultaneously as the second data bus conveys the read data from a second bank of memory of the DRAM. In another embodiment, the first data bus conveys the write data to a first rank of memory of the DRAM simultaneously as the second data bus conveys read data from a second rank of memory of the DRAM.
Abstract: A method of streaming between a storage device and a secondary device includes receiving, by the storage device, from the secondary device, a memory read request command including a memory address of the storage device corresponding to a stream identity, the stream identity being unique between the storage device and the secondary device; streaming, by the storage device, data between the storage device and the secondary device by transferring the data corresponding to the memory address of the storage device to the secondary device; determining, by the storage device, that the data requested by the secondary device in the memory read request command is transferred to the secondary device; and ending, by the storage device, the streaming between the storage device and the secondary device.
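The request/stream/end sequence in this abstract can be sketched as a small state machine. The address map, chunk size, and class name (`StorageDevice`) are illustrative assumptions only:

```python
class StorageDevice:
    """Minimal sketch of the streaming handshake described above.

    Each stream identity is mapped to a memory address; a read request for
    that address streams the data in chunks and then ends the stream.
    """

    def __init__(self, data_by_stream):
        self.data_by_stream = data_by_stream  # stream_id -> bytes
        # Hypothetical mapping of memory addresses to stream identities.
        self.addr_to_stream = {0x1000 + i * 0x100: sid
                               for i, sid in enumerate(data_by_stream)}

    def handle_read_request(self, addr, chunk=4):
        """Transfer the data for the stream at `addr`, then end the stream."""
        sid = self.addr_to_stream[addr]
        payload = self.data_by_stream[sid]
        sent = [payload[i:i + chunk] for i in range(0, len(payload), chunk)]
        return sent, "stream-ended"   # device determines transfer is complete
```

The key point from the abstract is that the storage device, not the secondary device, detects completion and terminates the stream.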
Abstract: A method for managing communication between a Universal Flash Storage (UFS) device and a UFS host includes determining at least one path of payload data flow along at least one of a transmission lane of the UFS host and a transmission lane of the UFS device. Based on the determined at least one path of the payload data flow, the method includes initiating, by operating at least one of the UFS host and the UFS device, at least one Hibernate state entry action. Further, the method includes initiating, by operating the at least one of the UFS host and the UFS device, at least one Hibernate state exit action after completion of transfer of a pre-determined number of data frames of the payload data between the UFS host and the UFS device.
Abstract: The present invention provides a memory controller configured to access a plurality of channels, wherein each of the channels includes a plurality of flash memory chips, and the memory controller includes a flash translation layer and a plurality of control modules. The flash translation layer is configured to generate commands with corresponding physical addresses of at least one of the channels. The plurality of control modules are connected to the plurality of channels, respectively, and each of the control modules operates independently to receive the corresponding command with the corresponding physical address from the flash translation layer, to access the flash memory chips within the corresponding channel.
Abstract: A system includes a data storage device and a host computing device. The data storage device includes a host interface; integrated circuit memory cells; and a processing device. The processing device is configured to execute firmware to perform operations requested by commands received via the host interface and maintenance operations identified by the processing device independent of commands received via the host interface. The host computing device is coupled to the host interface to provide commands with addresses to access the integrated circuit memory cells according to the addresses. In response to a request, the host computing device is configured to reduce, to below a threshold, a rate of transmitting to the host interface commands to access integrated circuit memory cells; and power up the data storage device to cause the data storage device to perform the maintenance operations.
Abstract: Autonomous memory access (AMA) controllers and related systems, methods, and devices are disclosed. An AMA controller includes waveform circuitry configured to autonomously retrieve waveform data stored in a memory device and pre-process the waveform data without intervention from a processor. The AMA controller is configured to provide the pre-processed waveform data to one or more peripheral devices.
Abstract: A processor system comprises a memory having at least two interleaved memory banks, at least two multiplexers which are respectively coupled to one of the at least two interleaved memory banks via a respective memory bank bus, a first processor or processor core which is coupled to first multiplexer inputs of the at least two multiplexers via a first data bus, a second processor or processor core which is coupled to second multiplexer inputs of the at least two multiplexers via a second data bus, and at least two queue buffers which are arranged in the second data bus between the second processor or processor core and the second multiplexer inputs of the at least two multiplexers. The first processor or processor core is configured to have read access or write access only to one of the at least two interleaved memory banks within one clock cycle.
Abstract: A set of memory commands associated with one or more memory dies of a memory device are communicated via a first portion of an interface to the memory device. Communication of a set of data bursts corresponding to the set of memory commands to the one or more memory dies via a second portion of the interface is caused, wherein one or more of the set of memory commands is communicated via the first portion of the interface concurrently with one or more of the set of data bursts.
Abstract: Write operations may be persistently recorded in a log using PDESC (page descriptor)-PB (page block) pairs. The PDESC-PB pairs of the log may be flushed. Flushing the log may include: determining a working set of PDESC-PB pairs; partitioning the working set into buckets by mapping each PDESC-PB pair of the working set to a bucket using a function; flushing a portion of the PDESC-PB pairs of a first bucket of the working set; updating, at a point in time, a first BHFS (bucket highest flushed sequence ID) value for the first bucket, wherein the first BHFS denotes a first sequence ID and each sequence ID associated with a PDESC-PB pair of the portion flushed prior to the point in time is less than the first sequence ID; and reclaiming PBs of the portion. As part of recovery processing, BHFSs for the buckets may be used to detect invalid PDESCs.
Type:
Grant
Filed:
March 4, 2021
Date of Patent:
May 24, 2022
Assignee:
EMC IP Holding Company LLC
Inventors:
Vladimir Shveidel, Oran Baruch, Ronen Gazit
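The bucketed flush with per-bucket BHFS values in the abstract above can be sketched as follows. The modulo bucketing function and the data shapes are illustrative assumptions; the patent only requires that "a function" maps each pair to a bucket:

```python
def bucket_of(seq_id, num_buckets):
    """Map a PDESC-PB pair (by its sequence ID) to a bucket.
    Modulo hashing is an illustrative choice of the mapping function."""
    return seq_id % num_buckets

def flush_working_set(pairs, num_buckets):
    """pairs: list of (seq_id, payload) PDESC-PB pairs in the working set.

    Partition the working set into buckets, flush each bucket in
    sequence-ID order, and record the bucket's BHFS: a value strictly
    greater than every sequence ID flushed from that bucket, as the
    abstract requires. Returns {bucket: BHFS} for non-empty buckets.
    """
    buckets = {b: [] for b in range(num_buckets)}
    for seq, payload in pairs:
        buckets[bucket_of(seq, num_buckets)].append((seq, payload))
    bhfs = {}
    for b, entries in buckets.items():
        entries.sort()            # flush in ascending sequence-ID order
        if entries:
            bhfs[b] = entries[-1][0] + 1   # every flushed seq ID < BHFS
    return bhfs
```

During recovery, any PDESC whose sequence ID is below its bucket's BHFS is known to have been flushed already and can be treated as invalid in the log.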
Abstract: An interface couples a controller to a physical layer (PHY) block, where the interface includes a set of data pins comprising transmit data pins to send data to the PHY block and receive data pins to receive data from the PHY block. The interface further includes a particular set of pins to implement a message bus interface, where the controller is to send a write command to the PHY block over the message bus interface to write a value to at least one particular bit of a PHY message bus register, bits of the PHY message bus register are mapped to a set of control and status signals, and the particular bit is mapped to a recalibration request signal to request that the PHY block perform a recalibration.
Type:
Grant
Filed:
July 10, 2020
Date of Patent:
May 10, 2022
Assignee:
Intel Corporation
Inventors:
Michelle C. Jen, Minxi Gao, Debendra Das Sharma, Fulvio Spagna, Bruce A. Tennant, Noam Dolev Geldbard
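The bit-mapped message bus register in the abstract above reduces to ordinary set/clear bit operations. The bit position (`RECAL_REQ_BIT = 3`) and function names are hypothetical; the abstract specifies only that some particular bit maps to the recalibration request signal:

```python
RECAL_REQ_BIT = 3  # hypothetical position of the recalibration-request bit

def write_message_bus(register, bit, value):
    """Model a controller write to one bit of the PHY message bus register."""
    if value:
        return register | (1 << bit)
    return register & ~(1 << bit)

def recalibration_requested(register):
    """PHY-side view: is the recalibration request signal asserted?"""
    return bool(register & (1 << RECAL_REQ_BIT))
```

The design choice here is that one register write over the message bus interface stands in for a dedicated sideband wire per control or status signal.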
Abstract: A data processing system includes a host processor, a processor suitable for processing a task instructed by the host processor, a memory, shared by the host processor and the processor, that is suitable for storing data processed by the host processor and the processor, respectively, and a memory controller suitable for checking whether stored data processed by the host processor and the processor is reused, and for sorting and managing the stored data as first data and second data based on the check result.
Abstract: The present invention relates to a booting disk including a master boot record (MBR) stored in a first region and a globally unique identifier (GUID) partition table (GPT) stored in a second region, wherein the booting disk has a hybrid MBR partition structure in which an MBR partition used by the MBR and a GPT partition used by the GPT are mixed. The GPT partition may include an operating system (OS) partition configured to store an OS and a GPT storage partition configured to store data, the MBR partition may include a GPT protective partition configured to protect the GPT partition and an MBR storage partition configured to store data, the GPT storage partition and the MBR storage partition may have the same starting address and size, a starting address of the GPT protective partition may be a logical block addressing (LBA) #1 of the booting disk, and a partition size thereof may be zero.
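The layout invariants in the abstract above are easy to express as a validation check. The dictionary shapes and function name below are illustrative assumptions, not the patent's data structures:

```python
def validate_hybrid_layout(mbr_parts, gpt_parts):
    """Check the hybrid MBR/GPT invariants described above:

    - the GPT protective partition starts at LBA #1 and has size zero;
    - the GPT storage partition and the MBR storage partition share the
      same starting address and size.
    Partition entries are {'start_lba': int, 'size': int} (hypothetical).
    """
    prot = mbr_parts["gpt_protective"]
    if prot["start_lba"] != 1 or prot["size"] != 0:
        return False
    g, m = gpt_parts["storage"], mbr_parts["storage"]
    return g["start_lba"] == m["start_lba"] and g["size"] == m["size"]
```

Sharing one start address and size between the two storage partitions is what lets the same data region be reached through either partition scheme.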
Abstract: A storage device configured for connection with a host includes; an output unit configured to provide at least one storage device operating state indication to a user, an input unit configured to accept at least one user input, a main memory configured to temporarily store data received from the host, a storage configured to store the data in non-volatile memory space, and a controller configured to execute a backup operation in response to a user input accepted by the input unit, transfer data from the main memory to the storage during the backup operation, and provide a storage device operating state indication through the output unit during execution of the backup operation.
Abstract: A method for changing over a general-purpose OS display for an information processing apparatus to a dedicated display screen includes accessing a setup procedure describing setup processing and at least an account generating process for generating user account information for a general-purpose operating system (“OS”). The method includes accessing changeover information for changing over a general-purpose OS display screen for the information processing apparatus to a dedicated display screen, and in response to starting up the general-purpose OS for the first time, executing the setup processing including the user account generating process based on the setup procedure stored by the procedure storage unit, changing over the general-purpose OS display screen to the dedicated display screen based on the changeover information stored by the changeover information storage unit, and displaying the dedicated display screen. An apparatus and a program product perform the method.
Abstract: An address range mirroring system includes a plurality of processing subsystem/memory subsystem nodes that each include a respective processing subsystem coupled to a respective memory subsystem, an operating system provided by at least one of the plurality of processing subsystem/memory subsystem nodes, and a Basic Input/Output System (BIOS) that is coupled to the plurality of processing subsystem/memory subsystem nodes. The BIOS identifies an address range mirroring memory size that was provided by the operating system, and an address range mirroring node usage identification that was provided by the operating system. The BIOS then configures address range mirroring according to the address range mirroring memory size in the respective memory subsystem in each of a subset of the plurality of processing subsystem/memory subsystem nodes, with the subset of the plurality of processing subsystem/memory subsystem nodes based on the address range mirroring node usage identification.
Abstract: Data compression is performed on a storage system for which one or more host systems have direct access to data on the storage system. The storage system may compress the data for one or more logical storage units (LSUs) having data stored thereon, and may update compression metadata associated with the LSUs and/or the data portions thereof to reflect that the data is compressed. In response to a read request for a data portion received from a host application executing on the host system, compression metadata for the data portion may be accessed. If it is determined from the compression metadata that the data portion is compressed, the data compression metadata for the data portion may be further analyzed to determine how to decompress the data portion. The data portion may be retrieved and decompressed, and the decompressed data may be returned to the requesting application.
Type:
Grant
Filed:
January 15, 2020
Date of Patent:
April 5, 2022
Assignee:
EMC IP Holding Company LLC
Inventors:
Ian Wigmore, Gabriel Benhanokh, Arieh Don, Alesia A. Tringale
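The read path in the abstract above (consult compression metadata, then decompress only if needed) can be sketched in a few lines. The metadata fields and the use of zlib are illustrative assumptions; the patent does not name a compression algorithm:

```python
import zlib

def read_data_portion(store, metadata, key):
    """Return the data portion for `key`, decompressing if metadata says so.

    `metadata[key]` is a hypothetical record noting whether the portion is
    compressed and with which algorithm; only 'zlib' is modeled here.
    """
    raw = store[key]
    meta = metadata[key]
    if meta.get("compressed"):
        if meta["algorithm"] == "zlib":
            return zlib.decompress(raw)
        raise ValueError("unknown compression algorithm")
    return raw   # uncompressed portions are returned as-is
```

Keeping the algorithm identifier in per-portion metadata is what allows the host to decompress data it reads directly, without asking the storage system how each portion was compressed.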