Addressing Cache Memories Patents (Class 711/3)
  • Patent number: 11520705
    Abstract: A multicore processing environment (MCPE) is disclosed. In embodiments, the MCPE includes multiple processing cores hosting multiple user applications configured for simultaneous execution. The cores and user applications share system resources including main memory and input/output (I/O) domains, each I/O domain including multiple I/O devices capable of requesting inbound access to main memory through an I/O memory management unit (IOMMU). For example, the IOMMU cache associates a unique cache tag with each I/O device based on device identifiers or settings determined by the system registers, caching the configuration data for each I/O device under the appropriate cache tag. When an I/O device requests main memory access, the IOMMU cache refers to the appropriate configuration data under the corresponding unique cache tag. This prevents contention in the IOMMU cache caused by one device evicting the cache entry of another, minimizing interference channels by reducing the need for main memory access. A short code sketch of this tagging scheme follows this entry.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: December 6, 2022
    Assignee: Rockwell Collins, Inc.
    Inventors: Carl J. Henning, David J. Radack
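    The per-device tagging described above (patent 11520705) can be modeled as a configuration cache keyed by a device identifier, so that no device can evict another device's entry. A minimal sketch, assuming hypothetical device IDs and configuration records rather than real IOMMU state:

```python
# Toy model of an IOMMU configuration cache with one unique tag per I/O device.
# Device IDs and config records are hypothetical; a real IOMMU keys on identifiers
# such as bus/device/function numbers and caches translation/configuration tables.

class IommuConfigCache:
    def __init__(self):
        self._entries = {}  # unique cache tag (device id) -> cached configuration

    def lookup(self, device_id, load_config):
        """Return the cached config for device_id; go to main memory only on a miss."""
        if device_id not in self._entries:
            self._entries[device_id] = load_config(device_id)  # one-time fetch
        return self._entries[device_id]

# Each device only ever touches its own entry, so devices cannot evict one another.
cache = IommuConfigCache()
cfg = cache.lookup("dma_engine_0", lambda dev: {"domain": 3, "permissions": "rw"})
print(cfg)
```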
  • Patent number: 11516001
    Abstract: A method for conveying auditable information regarding provenance of a product that is cryptographically accurate while retaining complete anonymity of product and participant on a blockchain includes: receiving a product identifier; generating a digital token by applying a hashing algorithm to the product identifier; generating an entry value by applying the hashing algorithm to a combination of an event identifier and the digital token; generating a digital signature by digitally signing a data package using a private key of a cryptographic key pair, where the data package includes at least a blockchain address, the event identifier, and the digital token; and transmitting the blockchain address, the digital signature, and the entry value to a node in a blockchain network. A short code sketch of the token and entry-value derivation follows this entry.
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: November 29, 2022
    Assignee: MASTERCARD INTERNATIONAL INCORPORATED
    Inventors: Steven C. Davis, Rob Byrne, Robert Collins, Leandro Nunes Da Silva Carvalho, Deborah Eleanor Barta
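    The token and entry-value derivation in patent 11516001 is a pair of hash computations over the product and event identifiers. A minimal sketch, assuming SHA-256 as the (unspecified) hashing algorithm and leaving the private-key signing step as a placeholder:

```python
import hashlib

def hex_sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical inputs; the abstract does not fix identifier formats.
product_identifier = b"serial:ABC-123"
event_identifier = b"event:customs-cleared"
blockchain_address = b"addr:1ExampleAddress"

# Digital token = hash(product identifier): the product itself never appears on chain.
digital_token = hex_sha256(product_identifier)

# Entry value = hash(event identifier combined with the digital token).
entry_value = hex_sha256(event_identifier + digital_token.encode())

# Data package to be signed with the participant's private key. The signature
# itself is elided here; any asymmetric scheme (e.g. ECDSA) could be used in practice.
data_package = blockchain_address + event_identifier + digital_token.encode()
digital_signature = "<sign(data_package, private_key)>"  # placeholder, not real crypto

print(digital_token)
print(entry_value)
print(digital_signature)
```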
  • Patent number: 11513740
    Abstract: A hybrid memory system provides rapid, persistent byte-addressable and block-addressable memory access to a host computer system by providing direct access to both a volatile byte-addressable memory and a volatile block-addressable memory via the same parallel memory interface. The hybrid memory system also has at least a non-volatile block-addressable memory that allows the system to persist data even through a power-loss state. The hybrid memory system can copy and move data between any of the memories using local memory controllers to free up host system resources for other tasks.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: November 29, 2022
    Assignee: EXECUTIVE ADVISORY FIRM LLC
    Inventors: Mike Hossein Amidi, Fariborz Frankie Roohparvar
  • Patent number: 11467965
    Abstract: A PIM device includes a plurality of first storage regions, a second storage region, and a column control circuit. The second storage region is coupled to each of the plurality of first storage regions through a data transmission line. The column control circuit generates a memory read control signal for reading data stored in an initially selected storage region of the plurality of first storage regions and a buffer write control signal for writing the data read from the initially selected storage region to the second storage region. The column control circuit generates a global buffer read control signal for reading the data written to the second storage region and a memory write control signal for writing the data read from the second storage region to a subsequently selected storage region of the plurality of first storage regions.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: October 11, 2022
    Assignee: SK hynix Inc.
    Inventor: Choung Ki Song
  • Patent number: 11455110
    Abstract: Embodiments of the present invention provide concepts for handling a handover of ownership of data from a source to a referrer in a data deduplication environment. By performing a handover of the ownership of the data from the source to the referrer, the number of processes required to access the data may be reduced and so the performance of the system may be improved. The source on which to perform the handover may be identified by way of a volatile cache.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: September 27, 2022
    Assignee: International Business Machines Corporation
    Inventors: Ben Sasson, Paul Nicholas Cashman, Dominic Tomkins, Florent C. Rostagni
  • Patent number: 11442866
    Abstract: A device (e.g., an application-specific integrated circuit chip) includes a memory module processing unit and an interface. The memory module processing unit is configured to receive an instruction to obtain values stored in one or more memory components and process the obtained values to return a processed result. The memory module processing unit is also configured to store the obtained values in a cache based on one or more criteria. The memory module processing unit is configured to be included on a computer memory module configured to be installed in a computer system. The interface is configured to communicate with the one or more memory components included on the computer memory module.
    Type: Grant
    Filed: February 6, 2020
    Date of Patent: September 13, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Liu Ke, Xuan Zhang, Udit Gupta, Carole-Jean Wu, Mark David Hempstead, Brandon Reagen, Hsien-Hsin Sean Lee
  • Patent number: 11409659
    Abstract: A device includes a memory controller and a cache memory coupled to the memory controller. The cache memory has a first set of cache lines associated with a first memory block and comprising a first plurality of cache storage locations, as well as a second set of cache lines associated with a second memory block and comprising a second plurality of cache storage locations. A first location of the second plurality of cache storage locations comprises cache tag data for both the first set of cache lines and the second set of cache lines.
    Type: Grant
    Filed: April 2, 2021
    Date of Patent: August 9, 2022
    Assignee: Rambus Inc.
    Inventors: Michael Raymond Miller, Dennis Doidge, Collins Williams
  • Patent number: 11403231
    Abstract: Hash-based application programming interface (API) importing can be prevented by allocating a name page and a guard page in memory. The name page and the guard page are associated with (i) an address of names array, (ii) an address of name ordinal array, and (iii) an address of functions array that are all generated by an operating system upon initiation of an application. The name page can then be filled with valid non-zero characters. Thereafter, protections on the guard page can be changed to no access. An entry is inserted into the address of names array pointing to a relative virtual address corresponding to anywhere within the name page. Access to the guard page causes the requesting application to terminate. Related apparatus, systems, techniques and articles are also described.
    Type: Grant
    Filed: November 19, 2020
    Date of Patent: August 2, 2022
    Assignee: Cylance Inc.
    Inventor: Jeffrey Tang
  • Patent number: 11361281
    Abstract: Methods and systems for expense management, comprising: retrieving at least one electronic feed of charges for multiple expense receipt records directly from at least one lodging and/or transportation vendor, the at least one feed of charges including computer-readable electronic transaction data; detecting that at least one expense receipt record from the multiple expense receipt records from the at least one feed of charges is comprised of two or more line items; mapping the two or more line items to at least one transportation and/or lodging good and/or service that is chargeable to at least one account identifier, the mapping utilizing vendor expense codes and/or keyword searches; and pre-populating the at least one transportation and/or lodging good and/or service mapped to each of the two or more line items from the at least one expense receipt record in at least one expense report in at least one expense management system as two or more expense itemizations. A short code sketch of the mapping step follows this entry.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: June 14, 2022
    Assignee: SAP SE
    Inventors: Michael Fredericks, Joseph Dunnick, Valery Gorodnichev, Jeannine Armstrong
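    The mapping step in patent 11361281 resolves each receipt line item to a chargeable good or service, trying vendor expense codes first and keyword searches as a fallback. A minimal sketch with made-up codes and keywords (real feeds vary by vendor):

```python
# Hypothetical vendor expense codes and keyword rules.
CODE_MAP = {"RM": "Lodging - Room", "PKG": "Parking", "WIFI": "Internet"}
KEYWORD_MAP = {"breakfast": "Meals", "taxi": "Ground Transportation"}

def map_line_item(item):
    """Map one receipt line item to an expense itemization."""
    service = CODE_MAP.get(item.get("vendor_code"))
    if service is None:  # fall back to a keyword search over the description
        text = item.get("description", "").lower()
        service = next((v for k, v in KEYWORD_MAP.items() if k in text), "Unmapped")
    return {"service": service, "amount": item["amount"]}

# One receipt record detected as containing two line items.
receipt = [
    {"vendor_code": "RM", "description": "Room night", "amount": 149.00},
    {"vendor_code": None, "description": "Breakfast buffet", "amount": 18.50},
]
itemizations = [map_line_item(li) for li in receipt]
print(itemizations)  # pre-populated into the expense report as two itemizations
```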
  • Patent number: 11347650
    Abstract: A method includes, for each data value in a set of one or more data values, determining a boundary between a high order portion of the data value and a low order portion of the data value, storing the low order portion at a first memory location utilizing a low data fidelity storage scheme, and storing the high order portion at a second memory location utilizing a high data fidelity storage scheme for recording data at a higher data fidelity than the low data fidelity storage scheme. A short code sketch of the split follows this entry.
    Type: Grant
    Filed: February 7, 2018
    Date of Patent: May 31, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: David A. Roberts, Elliot H. Mednick
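    The split described in patent 11347650 is ordinary bit masking once a boundary is chosen: low-order bits go to the cheaper, lower-fidelity storage and high-order bits to the higher-fidelity storage. A minimal sketch for 32-bit values with a hypothetical fixed 16-bit boundary (the patent determines the boundary per value):

```python
BOUNDARY_BITS = 16  # hypothetical boundary between high- and low-order portions

def split_value(value: int):
    """Split a 32-bit value into high- and low-order portions at the boundary."""
    low = value & ((1 << BOUNDARY_BITS) - 1)   # stored with the low-fidelity scheme
    high = value >> BOUNDARY_BITS              # stored with the high-fidelity scheme
    return high, low

def recombine(high: int, low: int) -> int:
    return (high << BOUNDARY_BITS) | low

value = 0xDEADBEEF
high, low = split_value(value)
assert recombine(high, low) == value
print(hex(high), hex(low))  # 0xdead 0xbeef
```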
  • Patent number: 11327768
    Abstract: An arithmetic processing apparatus includes an arithmetic circuit configured to perform an arithmetic operation on data having a first data width and perform an instruction in parallel on each element of data having a second data width, and a cache memory configured to store data, wherein the cache memory includes a tag circuit storing tags for respective ways, a data circuit storing data for the respective ways, a determination circuit that determines a type of an instruction with respect to whether data accessed by the instruction has the first data width or the second data width, and a control circuit that performs either a first pipeline operation where the tag circuit and the data circuit are accessed in parallel or a second pipeline operation where the data circuit is accessed in accordance with a tag result after accessing the tag circuit, based on a result determined by the determination circuit.
    Type: Grant
    Filed: December 9, 2019
    Date of Patent: May 10, 2022
    Assignee: FUJITSU LIMITED
    Inventor: Noriko Takagi
  • Patent number: 11314654
    Abstract: A mechanism is described for facilitating optimization of cache associated with graphics processors at computing devices. A method of embodiments, as described herein, includes introducing coloring bits to contents of a cache associated with a processor including a graphics processor, wherein the coloring bits represent a signal identifying one or more caches available for use, while avoiding explicit invalidations and flushes.
    Type: Grant
    Filed: November 19, 2020
    Date of Patent: April 26, 2022
    Assignee: Intel Corporation
    Inventors: Altug Koker, Balaji Vembu, Joydeep Ray, Abhishek R. Appu
  • Patent number: 11314647
    Abstract: Methods and systems for managing synonyms in VIPT caches are disclosed. A method includes tracking lines of a copied cache using a directory, examining a specified bit of a virtual address that is associated with a load request and determining its status and making an entry in one of a plurality of parts of the directory based on the status of the specified bit of the virtual address that is examined. The method further includes updating one of, and invalidating the other of, a cache line that is associated with the virtual address that is stored in a first index of the copied cache, and a cache line that is associated with a synonym of the virtual address that is stored at a second index of the copied cache, upon receiving a request to update a physical address associated with the virtual address.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: April 26, 2022
    Assignee: INTEL CORPORATION
    Inventor: Karthikeyan Avudaiyappan
  • Patent number: 11314595
    Abstract: A RAID controller periodically collects an indication of a current compression ratio achieved by each of a plurality of storage devices within the RAID. The RAID controller determines a placement of data and the parity information within at least one of the plurality of storage devices according to at least one of a plurality of factors associated with the current compression ratio. The RAID controller writes the data and the parity information to the at least one of the plurality of storage devices according to the determined placement. A short code sketch of one possible placement policy follows this entry.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: April 26, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Roman Alexander Pletka, Sasa Tomic, Timothy Fisher, Nikolaos Papandreou, Nikolas Ioannou, Aaron Fry
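    One way to read the placement decision in patent 11314595: periodically poll each device's achieved compression ratio and steer the next stripe's data and parity toward the devices with the most effective headroom. A minimal sketch under that assumption; the specific policy (parity on the best-compressing device) is illustrative, not taken from the patent:

```python
# Hypothetical per-device compression ratios (logical bytes / physical bytes),
# as periodically reported by self-compressing storage devices.
ratios = {"dev0": 2.1, "dev1": 1.3, "dev2": 3.0, "dev3": 1.8}

def place_stripe(ratios, data_chunks):
    """Pick a parity device and order data chunks by current compression ratio."""
    parity_dev = max(ratios, key=ratios.get)            # best-compressing device
    data_devs = sorted((d for d in ratios if d != parity_dev),
                       key=ratios.get, reverse=True)
    placement = dict(zip(data_devs, data_chunks))
    placement[parity_dev] = "parity"
    return placement

print(place_stripe(ratios, ["chunk_a", "chunk_b", "chunk_c"]))
```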
  • Patent number: 11303550
    Abstract: Described embodiments provide systems and methods for monitoring server utilization and reallocating resources using upper bound values. A device can determine a value indicative of an upper bound of a processing load of a server using data points detected for the processing load over a first range of time. The upper bound can correspond to a percentage of the processing load during the first range of time. The device can monitor, using the value, the processing load of the server over a second range of time. A determination can be made whether the value of the processing load is greater than a threshold during the second range of time. The device can generate an alert responsive to a comparison of the value of the processing load to the threshold. A short code sketch follows this entry.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: April 12, 2022
    Assignee: Citrix Systems, Inc.
    Inventors: Andreas Varnavas, Satyendra Tiwari, Manikam Muthiah, Nikolaos Georgakopoulos
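    The upper bound in patent 11303550 can be read as a percentile of the load samples collected over the first time range, which is then compared against a threshold while monitoring the second range. A minimal sketch, assuming a 95th-percentile bound and a hypothetical threshold (the abstract fixes neither):

```python
import statistics

def upper_bound(samples, percentile=95):
    """Value below which `percentile` percent of the observed load samples fall."""
    return statistics.quantiles(samples, n=100)[percentile - 1]

first_window = [42, 55, 48, 61, 70, 52, 66, 58, 73, 49]  # % CPU, hypothetical samples
bound = upper_bound(first_window)

# While monitoring the second time range, compare the bound against a threshold
# and raise an alert so resources can be reallocated.
threshold = 65.0  # hypothetical
if bound > threshold:
    print(f"ALERT: 95th-percentile load {bound:.1f}% exceeds {threshold}%")
```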
  • Patent number: 11287993
    Abstract: Techniques involve: determining corresponding valid metadata rates of a plurality of metadata blocks stored in a metadata storage area of a storage system, the valid metadata rate of each metadata block indicating a ratio of valid metadata in the metadata block to all metadata in the metadata block; selecting a predetermined number of metadata blocks having a valid metadata rate lower than a first valid metadata rate threshold from the plurality of metadata blocks; storing valid metadata in the predetermined number of metadata blocks into at least one metadata block following the plurality of metadata blocks in the metadata storage area; and making the valid metadata in the predetermined number of metadata blocks invalid. Accordingly, such techniques can improve the efficiency of the storage system. A short code sketch of the compaction pass follows this entry.
    Type: Grant
    Filed: January 21, 2020
    Date of Patent: March 29, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Shaoqin Gong, Jibing Dong, Hongpo Gao, Jianbin Kang, Baote Zhuo
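    The selection step in patent 11287993 is essentially a compaction pass: pick metadata blocks whose valid ratio falls below a threshold, copy their still-valid entries into a fresh block appended after the existing ones, then invalidate the originals. A minimal sketch with a hypothetical threshold and block contents:

```python
# Each metadata block is modeled as a list of entries with a validity flag.
blocks = [
    [{"key": "a", "valid": True}, {"key": "b", "valid": False}],
    [{"key": "c", "valid": False}, {"key": "d", "valid": False}],
    [{"key": "e", "valid": True}, {"key": "f", "valid": True}],
]

VALID_RATE_THRESHOLD = 0.6   # hypothetical first valid-metadata-rate threshold
MAX_BLOCKS_TO_COMPACT = 2    # the "predetermined number" of blocks to select

def valid_rate(block):
    return sum(e["valid"] for e in block) / len(block)

# Select up to N blocks whose valid metadata rate is below the threshold.
victims = [i for i, b in enumerate(blocks)
           if valid_rate(b) < VALID_RATE_THRESHOLD][:MAX_BLOCKS_TO_COMPACT]

# Store their valid metadata into a new block following the existing ones,
# then mark the metadata in the selected blocks invalid.
new_block = [e for i in victims for e in blocks[i] if e["valid"]]
blocks.append(new_block)
for i in victims:
    for e in blocks[i]:
        e["valid"] = False

print(victims, new_block)
```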
  • Patent number: 11275716
    Abstract: An approach is provided for tracking a single process instance flow. A request is received in a first system of a multi-system environment. Log files are pulled from systems with which the request interacts. Log entries are captured for the request. The log files are combined and flattened into a chronological log. A predictive model is built from an order of entries in the chronological log. Correlation keys in the entries of the chronological log are identified. Logs specifying processing of multiple ongoing requests are aggregated. A process instance of interest to a user is received. Instance specific log files are generated by deflattening the aggregated logs and by using a pattern detection algorithm that uses the predictive model and an alternate identifier algorithm that uses the correlation keys. One of the generated instance specific log files specifies a flow of the process instance of interest.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: March 15, 2022
    Assignee: International Business Machines Corporation
    Inventors: Zachary A. Silverstein, Kelly Camus, Tiberiu Suto, Andrew R. Jones
  • Patent number: 11243773
    Abstract: A computer system includes a store queue that holds store entries and a load queue that holds load entries sleeping on a store entry. A processor detects a store drain merge operation call and generates a pair of store tags comprising a first store tag corresponding to a first store entry to be drained and a second store tag corresponding to a second store entry to be drained. The processor determines whether the pair of store tags is an even-type store tag pair or an odd-type store tag pair. The processor disables the odd store tag included in the even-type store tag pair when detecting the even-type store tag pair, and wakes up a first load entry dependent on the even store tag and a second load entry dependent on the odd store tag based on the even store tag included in the even-type store tag pair while the odd store tag is disabled.
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: February 8, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Bryan Lloyd, David Campbell, Brian Chen, Robert A. Cordes
  • Patent number: 11237758
    Abstract: A method and apparatus of wear leveling control for storage class memory are disclosed. According to the present invention, whether current data to be written to a nonvolatile memory corresponds to an address cache hit is determined. If the current data to be written corresponds to an address cache hit, the current data are written to a designated location in the nonvolatile memory different from a destined location in the nonvolatile memory. If the current data to be written corresponds to an address cache miss, the current data are written to the destined location in the nonvolatile memory. In another embodiment, the wear leveling control technique also includes an address rotation process to achieve long-term wear leveling as well. A short code sketch of the write path follows this entry.
    Type: Grant
    Filed: November 14, 2018
    Date of Patent: February 1, 2022
    Assignee: Wolley Inc.
    Inventor: Chuen-Shen Bernard Shung
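    The write path in patent 11237758 can be read as: on an address cache hit (a recently written, hot address) the data is redirected to a designated location instead of its destined location, spreading wear; on a miss it is written in place. A minimal sketch, with the address cache modeled as a small recently-written list and a hypothetical spare region supplying designated locations:

```python
class WearLevelWriter:
    """Toy model: writes to hot addresses (address cache hits) are redirected."""

    def __init__(self, capacity=4):
        self.address_cache = []          # recently written destined addresses
        self.capacity = capacity
        self.redirect_table = {}         # destined address -> designated address
        self.next_designated = 0x10000   # hypothetical start of a spare region

    def write(self, destined_addr, data, memory):
        if destined_addr in self.address_cache:               # address cache hit
            designated = self.redirect_table.setdefault(
                destined_addr, self._alloc_designated())
            memory[designated] = data                         # write elsewhere
        else:                                                 # address cache miss
            memory[destined_addr] = data                      # write in place
            self.address_cache.append(destined_addr)
            if len(self.address_cache) > self.capacity:
                self.address_cache.pop(0)

    def _alloc_designated(self):
        addr = self.next_designated
        self.next_designated += 1
        return addr

mem = {}
writer = WearLevelWriter()
writer.write(0x200, "v1", mem)   # miss: written to the destined location
writer.write(0x200, "v2", mem)   # hit: redirected to a designated location
print(mem)
```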
  • Patent number: 11232039
    Abstract: Systems, apparatuses, and methods for efficiently performing memory accesses in a computing system are disclosed. A computing system includes one or more clients, a communication fabric and a last-level cache implemented with low latency, high bandwidth memory. The cache controller for the last-level cache determines a range of addresses corresponding to a first region of system memory with a copy of data stored in a second region of the last-level cache. The cache controller sends a selected memory access request to system memory when the cache controller determines a request address of the memory access request is not within the range of addresses. The cache controller services the selected memory request by accessing data from the last-level cache when the cache controller determines the request address is within the range of addresses. A short code sketch of the range check follows this entry.
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: January 25, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Gabriel H. Loh
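    The routing decision in patent 11232039 reduces to a range check against the region of system memory whose data is mirrored in the last-level cache. A minimal sketch with a hypothetical mirrored range:

```python
# Hypothetical first region of system memory with a copy in the last-level cache.
LLC_RANGE_START = 0x4000_0000
LLC_RANGE_END = 0x4800_0000   # exclusive

def route_request(addr):
    """Serve from the last-level cache if addr falls in the mirrored range,
    otherwise send the request on to system memory."""
    if LLC_RANGE_START <= addr < LLC_RANGE_END:
        return "last-level cache"
    return "system memory"

print(route_request(0x4100_0000))  # last-level cache
print(route_request(0x9000_0000))  # system memory
```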
  • Patent number: 11188475
    Abstract: A technique is provided for managing caches in a cache hierarchy. An apparatus has processing circuitry for performing operations and a plurality of caches for storing data for reference by the processing circuitry when performing the operations. The plurality of caches form a cache hierarchy including a given cache at a given hierarchical level and a further cache at a higher hierarchical level. The given cache is a set associative cache having a plurality of cache ways, and the given cache and the further cache are arranged such that the further cache stores a subset of the data in the given cache. In response to an allocation event causing data for a given memory address to be stored in the further cache, the given cache issues a way indication to the further cache identifying which cache way in the given cache the data for the given memory address is stored in.
    Type: Grant
    Filed: October 2, 2020
    Date of Patent: November 30, 2021
    Assignee: Arm Limited
    Inventors: Joseph Michael Pusdesris, Balaji Vijayan
  • Patent number: 11176043
    Abstract: A method for using a distributed memory device in a memory augmented neural network system includes receiving, by a controller, an input query to access data stored in the distributed memory device, the distributed memory device comprising a plurality of memory banks. The method further includes determining, by the controller, a memory bank selector that identifies a memory bank from the distributed memory device for memory access, wherein the memory bank selector is determined based on a type of workload associated with the input query. The method further includes computing, by the controller and by using content based access, a memory address in the identified memory bank. The method further includes generating, by the controller, an output in response to the input query by accessing the memory address.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: November 16, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ahmet Serkan Ozcan, Tomasz Kornuta, Carl Radens, Nicolas Antoine
  • Patent number: 11163691
    Abstract: Examples of the present disclosure relate to an apparatus comprising processing circuitry to perform data processing operations, storage circuitry to store data for access by the processing circuitry, address translation circuitry to maintain address translation data for translating virtual memory addresses into corresponding physical memory addresses, and prefetch circuitry. The prefetch circuitry is arranged to prefetch first data into the storage circuitry in anticipation of the first data being required for performing the data processing operations. The prefetching comprises, based on a prediction scheme, predicting a first virtual memory address associated with the first data, accessing the address translation circuitry to determine a first physical memory address corresponding to the first virtual memory address, and retrieving the first data based on the first physical memory address corresponding to the first virtual memory address.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: November 2, 2021
    Assignee: ARM LIMITED
    Inventors: Stefano Ghiggini, Natalya Bondarenko, Damien Guillaume Pierre Payet, Lucas Garcia
  • Patent number: 11113006
    Abstract: A memory sub-system configured to dynamically generate a media layout to avoid media access collisions in concurrent streams. The memory sub-system can identify a plurality of media units that are available to write data concurrently, select commands from the plurality of streams for concurrent execution in the available media units, generate and store a portion of a media layout dynamically in response to the commands being selected for concurrent execution in the plurality of media units, and execute the selected commands concurrently by storing data into the memory units according to physical addresses to which logical addresses used in the selected commands are mapped in the dynamically generated portion of the media layout.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: September 7, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Sanjay Subbarao
  • Patent number: 11086782
    Abstract: A method of logging process data in PLC-controlled equipment is disclosed. Sections of a PLC application code comprise tasks configured to execute program functions at specific execution rates. Each of the tasks comprise program functions having dedicated memory areas assigned as tags, and each data entry of the process data comprises a tag value and an associated process value. The method comprises receiving process data from the PLC application code, assigning process values to threads of a thread pool, and receiving, for each of the threads, in a respective data table associated with each of the threads, the tag- and process values of the received process data, and determining a hash code for each of the tags according to a hash function of the respective data table for arranging the tag values and the associated process values in the respective data table according to said hash code.
    Type: Grant
    Filed: June 18, 2018
    Date of Patent: August 10, 2021
    Assignee: TETRA LAVAL HOLDINGS & FINANCE S.A.
    Inventors: Lukas Karlsson, Ashraf Zarur
  • Patent number: 11074183
    Abstract: A method and apparatus for read wearing control for storage class memory (SCM) are disclosed. The read data control apparatus, located between a host and the SCM subsystem, comprises a read data cache, an address cache and an SCM controller. The address cache stores pointers pointing to data stored in logging area(s) located in the SCM. For a read request, the read wearing control determines whether the read request is a read data cache hit, an address cache hit or neither (i.e., read data cache miss and address cache miss). For the read data cache hit, the requested data is returned from the read data cache. For the address cache hit, the requested data is returned from the logging area(s) and the read data becomes a candidate to be placed in the read data cache. For read data cache and address cache misses, the requested data is returned from SCM. A short code sketch of the read path follows this entry.
    Type: Grant
    Filed: December 28, 2019
    Date of Patent: July 27, 2021
    Assignee: Wolley Inc.
    Inventors: Yu-Ming Chang, Tai-Chun Kuo, Chuen-Shen Bernard Shung
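    The read path in patent 11074183 has three outcomes: return from the read data cache on a data cache hit; on an address cache hit, follow the stored pointer into the logging area and make the data a candidate for the read data cache; otherwise read the SCM directly. A minimal sketch with hypothetical contents (here the candidate is simply inserted into the read data cache):

```python
def read(addr, read_data_cache, address_cache, logging_area, scm):
    """Resolve a read: data cache hit, address cache hit, or plain SCM read."""
    if addr in read_data_cache:                   # read data cache hit
        return read_data_cache[addr]
    if addr in address_cache:                     # address cache hit
        data = logging_area[address_cache[addr]]  # follow pointer into logging area
        read_data_cache[addr] = data              # candidate for the read data cache
        return data
    return scm[addr]                              # both miss: read from the SCM

scm = {0x10: "cold", 0x20: "warm"}                # hypothetical contents
logging_area = {7: "warm-logged"}
read_data_cache, address_cache = {}, {0x20: 7}

print(read(0x20, read_data_cache, address_cache, logging_area, scm))  # via logging area
print(read(0x20, read_data_cache, address_cache, logging_area, scm))  # now a cache hit
print(read(0x10, read_data_cache, address_cache, logging_area, scm))  # straight from SCM
```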
  • Patent number: 11064170
    Abstract: A laser projection device includes at least one laser diode for generating at least one laser beam. The laser projection device includes at least one laser driver for operating the at least one laser diode; at least one image processing circuit to supply control data for the at least one laser driver; at least one central processor and/or driver; and at least one control and/or regulating unit, which is configured to block the at least one laser driver in response to an occurrence of at least one fault. The at least one control and/or regulating unit includes at least one nonvolatile memory, in which at least a minimum set of monitoring functions is stored.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: July 13, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Alexander Ehlert, Christoph Puttmann, Christoph Delfs
  • Patent number: 11048637
    Abstract: A high-frequency and low-power L1 cache and associated access technique. The method may include inspecting a virtual address of an L1 data cache load instruction, and indexing into a row and a column of a way predictor table using metadata and a virtual address associated with the load instruction. The method may include matching information stored at the row and the column of the way predictor table to a location of a cache line. The method may include predicting the location of the cache line within the L1 data cache based on the information match. A hierarchy of way predictor tables may be used, with higher level way predictor tables refreshing smaller lower level way predictor tables. The way predictor tables may be trained to make better predictions over time. Only selected circuit macros need to be enabled based on the predictions, thereby saving power. A short code sketch of the table lookup follows this entry.
    Type: Grant
    Filed: August 21, 2019
    Date of Patent: June 29, 2021
    Inventor: Karthik Sundaram
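    The predictor in patent 11048637 is a small table indexed by a row and a column derived from load metadata and the virtual address; the stored entry names the cache way expected to hold the line, so only that way's circuit macros need to be enabled. A minimal sketch of the lookup and training steps, with arbitrary table dimensions and index functions (both are assumptions, not taken from the patent):

```python
ROWS, COLS = 16, 8   # hypothetical way-predictor table dimensions
table = [[None] * COLS for _ in range(ROWS)]

def index(metadata, vaddr):
    """Derive (row, column) from load metadata and the virtual address."""
    return metadata % ROWS, (vaddr >> 6) % COLS

def predict_way(metadata, vaddr):
    row, col = index(metadata, vaddr)
    return table[row][col]            # predicted way, or None (enable all ways)

def train(metadata, vaddr, actual_way):
    """Refresh the predictor with the way where the line was actually found."""
    row, col = index(metadata, vaddr)
    table[row][col] = actual_way

train(metadata=3, vaddr=0x7FFF_1040, actual_way=2)
print(predict_way(3, 0x7FFF_1040))    # 2: only that way's macros need powering up
```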
  • Patent number: 11037630
    Abstract: Devices and techniques for NAND temperature data management are disclosed herein. A command to write data to a NAND component in the NAND device is received at a NAND controller of the NAND device. A temperature corresponding to the NAND component is obtained in response to receiving the command. The command is then executed to write data to the NAND component and to write a representation of the temperature. The data is written to a user portion and the representation of the temperature is written to a management portion that is accessible only to the controller and segregated from the user portion.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: June 15, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Kishore Kumar Muchherla, Sampath Ratnam, Preston Allen Thomson, Harish Reddy Singidi, Jung Sheng Hoei, Peter Sean Feeley, Jianmin Huang
  • Patent number: 10977187
    Abstract: A method for performing access management in a memory device, the associated memory device and the controller thereof, and the associated electronic device are provided. The method may include: receiving a host command and a logical address from a host device; performing at least one checking operation to obtain at least one checking result, for determining whether to load a logical-to-physical (L2P) table from the NV memory to a random access memory (RAM) of the memory device, wherein the L2P table includes address mapping information for accessing the target data, and performing the at least one checking operation to obtain at least one checking result includes checking whether a first L2P-table index pointing toward the L2P table and a second L2P-table index sent from the host device are equivalent to each other; and reading the target data from the NV memory, and sending the target data to the host device.
    Type: Grant
    Filed: January 2, 2018
    Date of Patent: April 13, 2021
    Assignee: Silicon Motion, Inc.
    Inventors: Jie-Hao Lee, Cheng-Yu Yu
  • Patent number: 10910079
    Abstract: A programming device (110) arranged to obtain and store a random bit string in a memory device (100), the memory device (100) comprising multiple one-time programmable memory cells (122), a memory cell having a programmed state and a not-programmed state, the memory cell being one-time programmable by changing the state from the not-programmed state to the programmed state through application of an electric programming energy to the memory cell.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: February 2, 2021
    Assignee: INTRINSIC ID B.V.
    Inventors: Pim Theo Tuyls, Geert Jan Schrijen, Vincent Van Der Leest
  • Patent number: 10902129
    Abstract: A method, an apparatus, and a storage medium for detecting vulnerabilities in software to protect a computer system from security and compliance breaches are provided. The method includes providing a ruleset code declaring programming interfaces of a target framework and including rules that define an admissible execution context when invoking the programming interfaces, providing a source code to be scanned for vulnerabilities; compiling the source code into a first execution code having additional instructions inserted to facilitate tracking of an actual execution context of the source code, compiling the ruleset code into a second execution code that can be executed together with the first execution code, executing the first execution code within a virtual machine and passing calls of the programming interfaces to the second execution code, and detecting a software vulnerability when the actual execution context disagrees with the admissible execution context.
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: January 26, 2021
    Assignee: Virtual Forge GmbH
    Inventors: Hans-Christian Esperer, Yun Ding, Thomas Kastner, Markus Schumacher
  • Patent number: 10846232
    Abstract: A mechanism is described for facilitating optimization of cache associated with graphics processors at computing devices. A method of embodiments, as described herein, includes introducing coloring bits to contents of a cache associated with a processor including a graphics processor, wherein the coloring bits represent a signal identifying one or more caches available for use, while avoiding explicit invalidations and flushes.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: November 24, 2020
    Assignee: INTEL CORPORATION
    Inventors: Altug Koker, Balaji Vembu, Joydeep Ray, Abhishek R. Appu
  • Patent number: 10797766
    Abstract: Systems, methods, computer program products, and devices reduce computational processing performed by at least one computer processor that computes an eigensystem from a first data set; computes updated eigenvalues that approximate an eigensystem of at least a second data set based on the eigensystem of the first data set; and evaluates a plurality of features in each of the first and at least second data sets using a cost function; wherein reducing the computational processing of the at least one computer processor is achieved by at least one of selecting the cost function to comprise fewer than the total number of eigenvalues and employing a coarse approximation of the eigenvalues to de-select at least one of the data sets. This is especially useful for learning and/or online processing in an artificial neural network.
    Type: Grant
    Filed: February 2, 2020
    Date of Patent: October 6, 2020
    Assignee: Genghiscomm Holdings, LLC
    Inventor: Steve Shattil
  • Patent number: 10769013
    Abstract: Various embodiments provide for caching of error checking data for memory having inline storage configurations for primary data and error checking data for the primary data. In particular, various embodiments described herein provide for error checking data caching and cancellation of error checking data read commands for memory having inline storage configurations for primary data and associated error checking data. Additionally, various embodiments described herein provide for combining/canceling of error checking data write commands for memory having inline storage configurations for primary data and associated error checking data.
    Type: Grant
    Filed: June 11, 2018
    Date of Patent: September 8, 2020
    Assignee: Cadence Design Systems, Inc.
    Inventors: John M. MacLaren, Landon Laws, Carl Nels Olson, Thomas J. Shepherd
  • Patent number: 10768858
    Abstract: According to one embodiment, a memory system receives, from a host, a write request including a first identifier associated with one write destination block and storage location information indicating a location in a write buffer on a memory of the host in which first data to be written is stored. When the first data is to be written to a nonvolatile memory, the memory system obtains the first data from the write buffer by transmitting a transfer request including the storage location information to the host, transfers the first data to the nonvolatile memory, and writes the first data to the one write destination block.
    Type: Grant
    Filed: June 11, 2018
    Date of Patent: September 8, 2020
    Assignee: Toshiba Memory Corporation
    Inventors: Shinichi Kanno, Hideki Yoshida
  • Patent number: 10754783
    Abstract: Examples include techniques to manage cache resource allocations associated with one or more cache class of service (CLOS) assignments for a processor cache. Examples include flushing portions of an allocated cache resource responsive to reassignments of CLOS.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: August 25, 2020
    Assignee: Intel Corporation
    Inventors: Tomasz Kantecki, John Browne, Chris Macnamara, Timothy Verrall, Marcel Cornu, Eoin Walsh, Andrew J. Herdrich
  • Patent number: 10642685
    Abstract: A cache memory has cache memory circuitry comprising a nonvolatile memory cell to store at least a portion of data which is stored or is to be stored in a lower-level memory than the cache memory circuitry, a first redundancy code storage comprising a nonvolatile memory cell capable of storing a redundancy code of the data stored in the cache memory circuitry, and a second redundancy code storage comprising a volatile memory cell capable of storing the redundancy code.
    Type: Grant
    Filed: September 12, 2016
    Date of Patent: May 5, 2020
    Assignee: Kioxia Corporation
    Inventors: Kazutaka Ikegami, Shinobu Fujita, Hiroki Noguchi
  • Patent number: 10628323
    Abstract: An operating method for a data storage device includes providing a nonvolatile memory device including a plurality of pages; segmenting an address map which maps a logical address provided from a host device and a physical address of the nonvolatile memory device, by a plurality of address map segments according to a segment size that is set depending on a quality of service time allowed to process a request of the host device and an unprocessed workload; and flushing at least one of the address map segments in the nonvolatile memory device after processing the unprocessed workload.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: April 21, 2020
    Assignee: SK hynix Inc.
    Inventors: Min Hwan Moon, Duck Hoi Koo, Soong Sun Shin, Ji Hoon Lee
  • Patent number: 10620958
    Abstract: Systems, apparatuses, and methods for efficiently reducing power consumption in a crossbar of a computing system are disclosed. A data transfer crossbar uses a first interface for receiving data fetched from a data storage device that is partitioned into multiple banks. The crossbar uses a second interface for sending data fetched from the multiple banks to multiple compute units. Logic in the crossbar selects data from a most recent fetch operation for a given compute unit when the logic determines the given compute unit is an inactive compute unit for which no data is being fetched. The logic sends via the second interface the selected data for the given compute unit. Therefore, when the given compute unit is inactive, the data lines for the fetched data do not transition for each inactive clock cycle after the most recent active clock cycle.
    Type: Grant
    Filed: December 3, 2018
    Date of Patent: April 14, 2020
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Xianwen Cheng
  • Patent number: 10613796
    Abstract: According to one embodiment, a memory system receives from a host a first write request including a first block identifier designating a first write destination block to which first write data is to be written. The memory system acquires the first write data from a write buffer temporarily holding write data corresponding to each of the write requests, and writes the first write data to a write destination page in the first write destination block. The memory system releases a region in the write buffer, storing data which is made readable from the first write destination block by writing the first write data to the write destination page. The data made readable is data of a page in the first write destination block preceding the write destination page.
    Type: Grant
    Filed: September 10, 2018
    Date of Patent: April 7, 2020
    Assignee: Toshiba Memory Corporation
    Inventors: Shinichi Kanno, Hideki Yoshida, Naoki Esaka
  • Patent number: 10599333
    Abstract: A storage device includes a nonvolatile semiconductor memory device, and a controller configured to access the nonvolatile semiconductor memory device. When the controller receives a write command including a logical address, the controller determines a physical location of the memory device in which data are written and stores a mapping from the logical address to the physical location. When the controller receives a write command without a logical address, the controller determines a physical location of the memory device in which data are written and returns the physical location.
    Type: Grant
    Filed: August 31, 2016
    Date of Patent: March 24, 2020
    Assignee: TOSHIBA MEMORY CORPORATION
    Inventor: Daisuke Hashimoto
  • Patent number: 10514855
    Abstract: A memory access request including an address is received from a memory controller of an application server. One of a plurality of paths to the NVRAM is selected based on the address from the memory access request.
    Type: Grant
    Filed: December 19, 2012
    Date of Patent: December 24, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Douglas L Voigt
  • Patent number: 10503502
    Abstract: A processor includes a decode unit to decode an instruction indicating a source packed data operand having source data elements and indicating a destination storage location. Each of the source data elements has a source data element value and a source data element position. An execution unit, in response to the instruction, stores a result packed data operand having result data elements each having a result data element value and a result data element position. Each result data element value is one of: (1) equal to a source data element position of a source data element, closest to one end of the source operand, having a source data element value equal to the result data element position of the result data element; and (2) a replacement value, when no source data element has a source data element value equal to the result data element position of the result data element. A short code sketch of these semantics follows this entry.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: December 10, 2019
    Assignee: INTEL CORPORATION
    Inventors: Christopher J. Hughes, Jong Soo Park
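    The result semantics in patent 10503502 amount to an index-of search: for each result position, report the position of the source element (taking the match closest to one end) whose value equals that position, or a replacement value if there is none; applied to a permutation, this yields its inverse. A minimal scalar sketch, assuming the low end for the tie-break and -1 as the replacement value:

```python
def inverse_index(source, replacement=-1, from_low_end=True):
    """For each result position p, return the position of the source element whose
    value equals p (match closest to the chosen end), else the replacement value."""
    n = len(source)
    order = range(n) if from_low_end else range(n - 1, -1, -1)
    return [next((i for i in order if source[i] == p), replacement) for p in range(n)]

src = [2, 0, 3, 1]
print(inverse_index(src))            # [1, 3, 0, 2]: the inverse permutation of src
print(inverse_index([2, 2, 5, 1]))   # [-1, 3, 0, -1]: no element equals 0 or 3
```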
  • Patent number: 10452397
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to determine a first number of threads to be scheduled for each context of a plurality of contexts in a multi-context processing system, allocate a second number of streaming multiprocessors (SMs) to the respective plurality of contexts, and dispatch threads from the plurality of contexts only to the streaming multiprocessor(s) allocated to the respective plurality of contexts. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: April 1, 2017
    Date of Patent: October 22, 2019
    Assignee: INTEL CORPORATION
    Inventors: Joydeep Ray, Altug Koker, Balaji Vembu, Abhishek R. Appu, Kamal Sinha, Prasoonkumar Surti, Kiran C. Veernapu
  • Patent number: 10423539
    Abstract: What is provided is an enhanced dynamic address translation facility. In one embodiment, a virtual address to be translated and an initial origin address of a translation table of the hierarchy of translation tables are obtained. Based on the origin address, a segment table entry is obtained which contains a format control field and an access validity field. If the format control and access validity are enabled, the segment table entry further contains access control and fetch protection fields, and a segment-frame absolute address. Store operations to the block of data are permitted only if the access control field matches a program access key provided by either a Program Status Word or an operand of a program instruction being executed. Fetch operations from the desired block of data are permitted only if the program access key associated with the virtual address is equal to the segment access control field.
    Type: Grant
    Filed: December 4, 2017
    Date of Patent: September 24, 2019
    Assignee: International Business Machines Corporation
    Inventors: Dan F. Greiner, Charles W. Gainey, Jr., Lisa C. Heller, Damian L. Osisek, Erwin Pfeffer, Timothy J. Slegel, Charles F. Webb
  • Patent number: 10354732
    Abstract: Devices and techniques for NAND temperature data management are disclosed herein. A command to write data to a NAND component in the NAND device is received at a NAND controller of the NAND device. A temperature corresponding to the NAND component is obtained in response to receiving the command. The command is then executed to write data to the NAND component and to write a representation of the temperature. The data is written to a user portion and the representation of the temperature is written to a management portion that is accessible only to the controller and segregated from the user portion.
    Type: Grant
    Filed: August 30, 2017
    Date of Patent: July 16, 2019
    Assignee: Micron Technology, Inc.
    Inventors: Kishore Kumar Muchherla, Sampath Ratnam, Preston Thomson, Harish Singidi, Jung Sheng Hoei, Peter Sean Feeley, Jianmin Huang
  • Patent number: 10346162
    Abstract: An approach for replacement of instructions in an assembly language program includes computers receiving an assembly language program and user selections of one or more classes of instructions. The approach includes computers reading a statement in the program and selecting a class of instructions from the user selections. The approach includes computers selecting a first group of instructions in the selected class and determining that the statement is an instruction in the first group of instructions. The approach includes computers reading a number of statements that match a number of instructions in the first group of instructions including the statement and replacing the first group of instructions with a group of replacement instructions when the read number of statements match the number of instructions in the first group of instructions. Furthermore, the approach includes computers sending the group of replacement instructions to output to update the assembly language program.
    Type: Grant
    Filed: November 17, 2015
    Date of Patent: July 9, 2019
    Assignee: International Business Machines Corporation
    Inventors: John R. Dravnieks, John R. Ehrman, Dan F. Greiner
  • Patent number: 10338928
    Abstract: A processor, method, and medium for implementing a call return stack within a pipelined processor. A stack head register is used to store a copy of the top entry of the call return stack, and the stack head register is accessed by the instruction fetch unit on each fetch cycle. If a fetched instruction is decoded as a return instruction, the speculatively read address from the stack head register is utilized as a target address to fetch subsequent instructions and the address at the second entry from the top of the call return stack is written to the stack head register. A short code sketch of this behavior follows this entry.
    Type: Grant
    Filed: May 20, 2011
    Date of Patent: July 2, 2019
    Assignee: Oracle International Corporation
    Inventors: Manish K. Shah, Zeid H. Samoail
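    Patent 10338928 keeps a copy of the top of the call return stack in a register the fetch unit can read every cycle; on a return, that copy supplies the fetch target and is then refilled from the entry now at the top of the stack. A minimal behavioral sketch (addresses are hypothetical):

```python
class CallReturnStack:
    """Toy model of a call return stack mirrored by a stack head register."""

    def __init__(self):
        self.stack = []          # call return stack entries (return addresses)
        self.stack_head = None   # copy of the top entry, readable every fetch cycle

    def on_call(self, return_address):
        self.stack.append(return_address)
        self.stack_head = return_address

    def on_return(self):
        """Use the speculatively read head as the fetch target, then refill the
        head register from the entry that is now second from the top."""
        target = self.stack_head
        self.stack.pop()
        self.stack_head = self.stack[-1] if self.stack else None
        return target

crs = CallReturnStack()
crs.on_call(0x4000)           # outer call pushes its return address
crs.on_call(0x5000)           # nested call
print(hex(crs.on_return()))   # 0x5000, taken directly from the stack head register
print(hex(crs.on_return()))   # 0x4000, the head was refilled from the next entry
```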
  • Patent number: 10282292
    Abstract: Cluster manager functional blocks perform operations for migrating pages in portions in corresponding migration clusters. During operation, each cluster manager keeps an access record that includes information indicating accesses of pages in the portions in the corresponding migration cluster. Based on the access record and one or more migration policies, each cluster manager migrates pages between the portions in the corresponding migration cluster.
    Type: Grant
    Filed: October 17, 2016
    Date of Patent: May 7, 2019
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Andreas Prodromou, Mitesh R. Meswani, Arkaprava Basu, Nuwan S. Jayasena, Gabriel H. Loh