Abstract: There is provided determination of a beam configuration between a first radio transceiver device and a second radio transceiver device. The first radio transceiver device performs beam searching by transmitting a first sounding signal in all transmit beam configurations in a set of transmit beam configurations; and receiving, from the second radio transceiver device, a second sounding signal in all receive beam configurations in a set of receive beam configurations. The first radio transceiver device determines a beam configuration based on the receive beam configuration, in the set of receive beam configurations, in which the second sounding signal with the best value of a predetermined metric was received.
Type:
Grant
Filed:
June 17, 2014
Date of Patent:
February 6, 2024
Assignee:
Telefonaktiebolaget LM Ericsson (publ)
Inventors:
Niklas Andgart, Johan Nilsson, Andres Reial
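The selection step in the abstract above amounts to a sweep-and-argmax over receive beams. A minimal sketch, assuming per-beam SNR (in dB) as the hypothetical "predetermined metric"; the patent does not fix the metric, and the values below are invented:

```python
# Illustrative sketch (not the patented implementation): after sweeping
# all receive beams, keep the one whose received sounding signal has the
# best metric (here, highest SNR).

def best_receive_beam(snr_table):
    """Return (beam_index, snr) for the receive beam with the best metric."""
    best_beam, best_snr = None, float("-inf")
    for beam, snr in enumerate(snr_table):
        if snr > best_snr:
            best_beam, best_snr = beam, snr
    return best_beam, best_snr

# Hypothetical per-receive-beam SNR measurements (dB) of the second
# sounding signal:
measured_snr = [3.1, 7.4, 12.0, 5.2]
beam, snr = best_receive_beam(measured_snr)
```

The same argmax would apply with any other metric (RSRP, SINR) substituted for SNR.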
Abstract: A data storage device includes a memory device and a controller coupled to the memory device. The controller and the memory device communicate using a plurality of flash channels, where each channel is mapped to one or more dies of the memory device. Each of the one or more dies of the memory device are associated with one or more strobes of a strobe cycle of a respective flash channel, where a die is provided power during a respective strobe. The controller is configured to, using a time division peak power management (TD-PPM) operation, change an association of a strobe from a first channel to a strobe of a second channel, which may adjust an amount of power provided to each of the channels and improve performance and latency of the data storage device.
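The strobe reassignment described above can be pictured as editing a per-slot channel assignment in the strobe cycle: the share of slots a channel owns is the share of cycle time its dies receive power. A hedged sketch with an invented 4-slot cycle and channel names (`ch0`, `ch1`); the real TD-PPM policy and cycle length are not specified here:

```python
# Hypothetical sketch of time-division peak power management (TD-PPM):
# each slot in the strobe cycle powers the dies of one flash channel;
# reassigning a slot from one channel to another shifts the power
# budget between channels.

strobe_cycle = ["ch0", "ch0", "ch1", "ch1"]  # illustrative 4-slot cycle

def reassign_strobe(cycle, slot, new_channel):
    """Move one strobe slot to a different channel; return the new cycle."""
    updated = list(cycle)
    updated[slot] = new_channel
    return updated

def power_share(cycle, channel):
    """Fraction of the cycle during which a channel's dies receive power."""
    return cycle.count(channel) / len(cycle)

# Move slot 1 from ch0 to ch1, increasing ch1's power share.
new_cycle = reassign_strobe(strobe_cycle, 1, "ch1")
```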
Abstract: Methods, systems, and devices for read operations for regions of a memory device are described. In some examples, a memory device may include a first cache for storing mappings between logical addresses and physical addresses of the memory device, and a second cache for storing indices associated with entries removed from the first cache. The memory device may include a controller configured to load mappings to the first cache upon receiving read commands. When the first cache is full, and when the memory device receives a read command, the controller may remove an entry from the first cache and may store an index associated with the removed entry to the second cache. The controller may then transmit a mapping associated with the index to a host device for use in an HPB (host performance booster) operation.
Type:
Grant
Filed:
June 27, 2022
Date of Patent:
January 30, 2024
Assignee:
Micron Technology, Inc.
Inventors:
Nicola Colella, Antonino Pollio, Hua Tan
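The two-cache scheme in the abstract above can be sketched as a bounded first cache that spills the indices of evicted entries into a second cache. The class name, FIFO eviction order, capacity, and addresses below are hypothetical stand-ins, not Micron's implementation:

```python
# Toy sketch: first cache maps logical -> physical addresses; when full,
# the oldest entry is evicted and its index (logical address) is
# recorded in a second cache, from which a mapping can later be handed
# to the host for an HPB-style read.
from collections import OrderedDict

class TwoLevelL2PCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.first = OrderedDict()   # logical address -> physical address
        self.second = []             # indices of entries evicted from first

    def load(self, logical, physical):
        """Load a mapping on a read command, evicting to the second cache if full."""
        if len(self.first) >= self.capacity:
            evicted_logical, _ = self.first.popitem(last=False)
            self.second.append(evicted_logical)
        self.first[logical] = physical

cache = TwoLevelL2PCache(capacity=2)
cache.load(0x10, 100)
cache.load(0x20, 200)
cache.load(0x30, 300)   # full: 0x10 is evicted, its index kept in second
```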
Abstract: A covering includes an outer material, a plurality of conductive fibers, and a fastening material. The outer material includes a front surface and a back surface, and the conductive fibers are disposed between the front surface and the back surface. The conductive fibers are configured to receive a voltage that causes the conductive fibers to repel and remove dust from the front surface of the outer material. The fastening material is coupled to the back surface of the outer material and facilitates releasably attaching the outer material to an article.
Abstract: Some embodiments provide a method for a neural network inference circuit that executes a neural network. The method loads a first set of inputs into an input buffer and computes a first dot product between the first set of inputs and a set of weights. The method shifts the first set of inputs in the buffer while loading a second set of inputs into the buffer such that a first subset of the first set of inputs is removed from the buffer, a second subset of the first set of inputs is moved to new locations in the buffer, and a second set of inputs are loaded into locations in the buffer vacated by the shifting. The method computes a second dot product between (i) the second set of inputs and the second subset of the first set of inputs and (ii) the set of weights.
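The buffer shift described above behaves like a sliding window: part of the first input set is removed, the rest moves to new locations, and the second dot product reuses the surviving subset together with the newly loaded inputs. A toy sketch with invented sizes and values (the circuit operates on hardware buffers and fixed-point arithmetic, not Python lists):

```python
# Sliding-window sketch of the described input reuse between two
# consecutive dot products against the same weight set.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

weights = [1, 2, 3, 4]          # shared weight set
first_inputs = [5, 6, 7, 8]     # first set of inputs in the buffer

first_result = dot(first_inputs, weights)

shift = 2
new_inputs = [9, 10]
# Oldest `shift` inputs are removed, the survivors shift to the front,
# and the new inputs fill the vacated tail locations.
buffer = first_inputs[shift:] + new_inputs   # reuses [7, 8]

second_result = dot(buffer, weights)
```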
Abstract: The data duplication system comprises a first storage device having a first data protection area for storing backup images of multiple generations of a first volume used for data read/write by an external device. The first data protection area is inaccessible to the external device. A second storage device is coupled to the first storage device. The first storage device creates a second volume for storing a backup image of a particular generation among the plurality of generations of backup images stored in the first data protection area. The second storage device creates a third volume for storing copy data, and a virtual volume that is mapped to the second volume of the first storage device. The second storage device stores the backup data of the particular generation held in the second volume into the third volume via the virtual volume, by forming a copy pair between the virtual volume and the third volume.
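The volume topology in the abstract above can be sketched as follows; generation numbers, payloads, and the pass-through virtual volume are all illustrative stand-ins for the storage-array mechanisms:

```python
# Simplified sketch (all names hypothetical): the virtual volume on the
# second storage device is a pass-through view of the first device's
# second volume; copying from it into the third volume lands the
# selected generation's backup data on the second device.

# First device: protection area holding backup images by generation,
# inaccessible to the external device.
protection_area = {1: b"gen1-image", 2: b"gen2-image", 3: b"gen3-image"}

# First device: materialize the chosen generation into a second volume.
second_volume = protection_area[2]

def virtual_volume():
    """Second device's virtual volume, mapped to the first device's second volume."""
    return second_volume

# Copy pair: the third volume receives the virtual volume's data.
third_volume = virtual_volume()
```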
Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. An integrated circuit may be configured to execute instructions with matrix operands and configured with: random access memory configured to store instructions executable by the Deep Learning Accelerator and store matrices of an Artificial Neural Network; a connection between the random access memory and the Deep Learning Accelerator; a first interface to a memory controller of a Central Processing Unit; and a second interface to a direct memory access controller. While the Deep Learning Accelerator is using the random access memory to process current input to the Artificial Neural Network in generating current output from the Artificial Neural Network, the direct memory access controller may concurrently load next input into the random access memory; and at the same time, the Central Processing Unit may concurrently retrieve prior output from the random access memory.
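The concurrency described above is a three-stage pipeline: the DMA controller loads input N+1 while the Deep Learning Accelerator processes input N and the CPU retrieves output N-1. A toy sequential simulation of that overlap (the doubling stand-in for inference and the dict-based RAM regions are invented):

```python
# Sketch of the DMA / DLA / CPU overlap over shared random access
# memory, modeled as one pipeline step per loop iteration.

def pipeline(inputs):
    ram_in, ram_out = {}, {}   # stand-ins for RAM input/output regions
    retrieved = []
    for step in range(len(inputs) + 2):
        # CPU: retrieve prior output (index step - 2), if ready.
        if step - 2 in ram_out:
            retrieved.append(ram_out.pop(step - 2))
        # DLA: process current input (index step - 1), if loaded.
        if 0 <= step - 1 < len(inputs):
            ram_out[step - 1] = ram_in.pop(step - 1) * 2  # stand-in inference
        # DMA: load next input (index step), if any remain.
        if step < len(inputs):
            ram_in[step] = inputs[step]
    return retrieved

outputs = pipeline([1, 2, 3])
```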
Abstract: A method of performing instructions in a computer processor architecture includes determining that a load instruction is being dispatched. Destination related data of the load instruction is written into a mapper of the architecture. A determination that a compare immediate instruction is being dispatched is made. A determination that a branch conditional instruction is being dispatched is made. The branch conditional instruction is configured to wait until the load instruction produces a result before the branch conditional instruction issues and executes. The branch conditional instruction skips waiting for a finish of the compare immediate instruction.
Type:
Grant
Filed:
August 26, 2021
Date of Patent:
January 30, 2024
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
Nicholas R. Orzol, Mehul Patel, Dung Q. Nguyen, Brian D. Barrick, Richard J. Eickemeyer, John B. Griswell, Jr., Balaram Sinharoy, Brian W. Thompto, Ophir Erez
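The dependency shortcut in the abstract above can be modeled with a tag mapper: the branch conditional tracks the load's result tag taken from the mapper, so it can issue as soon as the load finishes, without consulting the compare-immediate instruction. A toy model with invented register and tag names, not IBM's actual issue logic:

```python
# Sketch: destination data of the load is written into the mapper; the
# branch waits on the load's tag, not on the compare's finish.

mapper = {}        # architectural register -> producing instruction tag
finished = set()   # tags of instructions that have produced a result

def dispatch_load(dest_reg, tag):
    """Write the load's destination-related data into the mapper."""
    mapper[dest_reg] = tag

def ready_to_issue(branch_source_reg):
    """Branch issues once the load feeding its source register finished."""
    return mapper[branch_source_reg] in finished

dispatch_load("r3", tag="LD1")
compare_finished = False          # compare immediate still in flight
finished.add("LD1")               # the load produces its result

branch_can_issue = ready_to_issue("r3")  # does not consult compare_finished
```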
Abstract: The present invention relates to a cap for covering an opening of a container, preferably the opening of a bottle. The present invention is characterized by a cork body having a cylindrical configuration and formed in two portions, between which an impermeable sheet element is arranged with respect to the longitudinal direction. The impermeable sheet allows a conventional closure material to be used while preventing permeability in the longitudinal direction, both preventing the liquid from coming out of the container and preventing bacteria or contaminants from the outside from entering.
Abstract: The described embodiments set forth techniques for providing a backup progress estimate for a backup of a source file system volume (FSV). The techniques involve determining, for the source FSV, a backup size during performance of backup operations. The operations can include determining the backup size based on a number of files on the source FSV. Additionally, the operations can include copying files of the source FSV and/or propagating corresponding files of a destination FSV to a location of the backup of the source FSV on a destination storage device and updating one or more metrics using a number of files and/or a number of bytes copied and/or propagated to the backup. In this manner, a progress indication for the backup may be determined based on the one or more metrics responsive to files and/or directories of the source file system volume being stored on a destination storage device.
Type:
Grant
Filed:
December 21, 2021
Date of Patent:
January 30, 2024
Assignee:
Apple Inc.
Inventors:
Robert M. Cadwallader, Christopher A. Wolf
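The progress metric described above reduces to files copied or propagated over an estimated backup size. A minimal sketch with made-up counts; the abstract also allows byte-based metrics, which would follow the same shape:

```python
# Sketch: the backup size is estimated from the number of files on the
# source file system volume, and progress is the share of files already
# copied or propagated to the destination.

class BackupProgress:
    def __init__(self, total_files):
        self.total_files = total_files   # backup size estimate
        self.files_done = 0

    def record(self, copied=0, propagated=0):
        """Update the metric as files are copied/propagated to the backup."""
        self.files_done += copied + propagated

    def fraction(self):
        """Progress indication derived from the metric."""
        return self.files_done / self.total_files

progress = BackupProgress(total_files=200)
progress.record(copied=30)
progress.record(propagated=20)
estimate = progress.fraction()
```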
Abstract: A pressurized gas accumulator has a hollow body which extends along a longitudinal axis and at least one connection piece. The hollow body has at least one layer of a weave structure with a plurality of warp threads running next to one another and a weft thread woven with the warp threads and oriented perpendicular thereto. The warp threads are oriented essentially parallel or essentially perpendicular to the longitudinal axis of the hollow body. A method for producing a pressurized gas accumulator and a device for carrying out the method are also provided.
Abstract: Methods, systems, and devices for illegal operation reaction are described. A memory device may receive one or more commands to perform one or more respective access operations on an array of memory cells. A first circuit of the memory device may determine that the one or more commands would violate one or more thresholds associated with operation of the memory device, such as a timing threshold. In some cases, the first circuit may compare the one or more commands to one or more patterns of commands stored at the memory device. A second circuit of the memory device may erase one or more memory cells of the memory device based on determining that the one or more thresholds associated with operation of the memory device would be violated, based on comparing the one or more commands to the one or more patterns, or a combination thereof.
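The two-circuit reaction described above can be sketched as a check followed by an erase: one step flags command sequences that violate a timing threshold or match a stored illegal pattern, and a second step erases cells in response. The threshold value, pattern set, and cell array below are all invented:

```python
# Sketch of the illegal-operation reaction: detect, then erase.

MIN_INTERVAL = 10                          # hypothetical timing threshold
ILLEGAL_PATTERNS = [("READ", "READ", "READ")]   # hypothetical stored patterns

def violates(commands, timestamps):
    """First circuit: timing-threshold check plus pattern match."""
    too_fast = any(b - a < MIN_INTERVAL
                   for a, b in zip(timestamps, timestamps[1:]))
    matches_pattern = tuple(commands) in ILLEGAL_PATTERNS
    return too_fast or matches_pattern

def react(cells, commands, timestamps):
    """Second circuit: erase the cells if a violation was detected."""
    if violates(commands, timestamps):
        return [0] * len(cells)
    return cells

cells_after = react([1, 1, 0, 1], ["READ", "READ", "READ"], [0, 50, 100])
```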
Abstract: A memory device includes a memory array configured with a plurality of memory planes, and control logic, operatively coupled with the memory array. The control logic receives, from a requestor, a plurality of cache read commands requesting first data from the memory array spread across the plurality of memory planes and receives, from the requestor, a cache read context switch command and a snap read command requesting second data from one of the plurality of memory planes of the memory array. Responsive to receiving the cache read context switch command, the control logic suspends processing of the plurality of cache read commands and processes the snap read command to read the second data from the memory array and return the second data to the requestor.
Type:
Grant
Filed:
April 22, 2021
Date of Patent:
January 30, 2024
Assignee:
Micron Technology, Inc.
Inventors:
Giuseppe D'Eliseo, Anna Scalesse, Umberto Siciliani, Carminantonio Manganelli
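The suspend-and-resume behavior above can be sketched with a command queue: the snap read preempts the queued multi-plane cache reads, returns its data to the requestor, and the suspended cache reads remain queued for resumption. Plane contents and the command representation are invented:

```python
# Sketch: context switch suspends pending cache reads so a single-plane
# snap read can be served immediately.
from collections import deque

planes = {0: "plane0-data", 1: "plane1-data", 2: "plane2-data"}

cache_reads = deque([0, 1, 2])   # pending cache read commands (by plane)
served = []                      # data returned to the requestor

def snap_read(plane):
    """Suspend queued cache reads, serve the snap read, then resume."""
    suspended = list(cache_reads)
    cache_reads.clear()                  # context switch: suspend processing
    served.append(planes[plane])         # second data returned to requestor
    cache_reads.extend(suspended)        # suspended cache reads resume

snap_read(1)
```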
Abstract: Embodiments of the invention are directed to systems, methods and computer program products structured for dynamic management of stored cache data based on predictive usage information. The invention is structured for proactive alleviation of obsolete data, dynamic pre-population and fetching of cached data based on determining actions preceding initiation of activities. Specifically, the invention is configured to detect, via a proactive processor application, a first access event via a first network device associated with a first communication channel at a first time interval, such that the first access event is detected prior to initiation of a first technology activity event by the user. The invention is also structured to populate the first adapted hierarchical cache data object for use at a technology application associated with the first network device prior to the initiation of the first technology activity event by the user.
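The proactive pre-population described above can be sketched as an event-driven prefetch keyed on the access channel: detecting an access event on a channel triggers population of the cache before the user initiates the related activity. The channel names, prediction table, and payloads are hypothetical:

```python
# Sketch: an access event serves as a predictive signal to pre-populate
# a per-device cache ahead of the user's next activity.

predicted_activity = {"mobile-login": "account-summary"}   # invented mapping
cache = {}

def on_access_event(channel, device_id):
    """Pre-fetch the data the predicted next activity will need."""
    activity = predicted_activity.get(channel)
    if activity is not None:
        cache[device_id] = {"activity": activity, "prefetched": True}

# First access event detected, prior to any activity initiation:
on_access_event("mobile-login", "device-1")
ready = cache.get("device-1", {}).get("prefetched", False)
```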