Addressing Cache Memories Patents (Class 711/3)
-
Patent number: 11409659
Abstract: A device includes a memory controller and a cache memory coupled to the memory controller. The cache memory has a first set of cache lines associated with a first memory block and comprising a first plurality of cache storage locations, as well as a second set of cache lines associated with a second memory block and comprising a second plurality of cache storage locations. A first location of the second plurality of cache storage locations comprises cache tag data for both the first set of cache lines and the second set of cache lines.
Type: Grant
Filed: April 2, 2021
Date of Patent: August 9, 2022
Assignee: Rambus Inc.
Inventors: Michael Raymond Miller, Dennis Doidge, Collins Williams
-
Patent number: 11403231
Abstract: Hash-based application programming interface (API) importing can be prevented by allocating a name page and a guard page in memory. The name page and the guard page are associated with (i) an address of names array, (ii) an address of name ordinal array, and (iii) an address of functions array that are all generated by an operating system upon initiation of an application. The name page can then be filled with valid non-zero characters. Thereafter, protections on the guard page can be changed to no access. An entry is inserted into the address of names array pointing to a relative virtual address corresponding to anywhere within the name page. Access to the guard page causes the requesting application to terminate. Related apparatus, systems, techniques and articles are also described.
Type: Grant
Filed: November 19, 2020
Date of Patent: August 2, 2022
Assignee: Cylance Inc.
Inventor: Jeffrey Tang
-
Patent number: 11361281
Abstract: Methods and systems for expense management, comprising: retrieving at least one electronic feed of charges for multiple expense receipt records directly from at least one lodging and/or transportation vendor, the at least one feed of charges including computer-readable electronic transaction data; detecting that at least one expense receipt record from the multiple expense receipt records from the at least one feed of charges is comprised of two or more line items; mapping the two or more line items to at least one transportation and/or lodging good and/or service that is chargeable to at least one account identifier, the mapping utilizing vendor expense codes and/or keyword searches; and pre-populating the at least one transportation and/or lodging good and/or service mapped to each of the two or more line items from the at least one expense receipt record in at least one expense report in at least one expense management system as two or more expense itemizations.
Type: Grant
Filed: December 27, 2019
Date of Patent: June 14, 2022
Assignee: SAP SE
Inventors: Michael Fredericks, Joseph Dunnick, Valery Gorodnichev, Jeannine Armstrong
-
Patent number: 11347650
Abstract: A method includes, for each data value in a set of one or more data values, determining a boundary between a high order portion of the data value and a low order portion of the data value, storing the low order portion at a first memory location utilizing a low data fidelity storage scheme, and storing the high order portion at a second memory location utilizing a high data fidelity storage scheme for recording data at a higher data fidelity than the low data fidelity storage scheme.
Type: Grant
Filed: February 7, 2018
Date of Patent: May 31, 2022
Assignee: Advanced Micro Devices, Inc.
Inventors: David A. Roberts, Elliot H. Mednick
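The split described in this abstract can be illustrated with a minimal Python sketch. The function names and the fixed bit-width boundary are illustrative assumptions, not taken from the patent, which determines the boundary per data value:

```python
def split_at_boundary(value, low_bits):
    """Split an integer at a bit boundary into a high-order portion
    (to be stored with a high-fidelity scheme) and a low-order portion
    (to be stored with a lower-fidelity scheme)."""
    mask = (1 << low_bits) - 1
    low = value & mask        # low-order portion: tolerant of errors
    high = value >> low_bits  # high-order portion: kept at high fidelity
    return high, low

def recombine(high, low, low_bits):
    """Reassemble the original value from its two stored portions."""
    return (high << low_bits) | low
```

The point of the scheme is that corruption in the low-order portion perturbs the reconstructed value only slightly, so cheaper or denser storage can hold it.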
-
Patent number: 11327768
Abstract: An arithmetic processing apparatus includes an arithmetic circuit configured to perform an arithmetic operation on data having a first data width and perform an instruction in parallel on each element of data having a second data width, and a cache memory configured to store data, wherein the cache memory includes a tag circuit storing tags for respective ways, a data circuit storing data for the respective ways, a determination circuit that determines a type of an instruction with respect to whether data accessed by the instruction has the first data width or the second data width, and a control circuit that performs either a first pipeline operation where the tag circuit and the data circuit are accessed in parallel or a second pipeline operation where the data circuit is accessed in accordance with a tag result after accessing the tag circuit, based on a result determined by the determination circuit.
Type: Grant
Filed: December 9, 2019
Date of Patent: May 10, 2022
Assignee: FUJITSU LIMITED
Inventor: Noriko Takagi
-
Patent number: 11314595
Abstract: A RAID controller periodically collects an indication of a current compression ratio achieved by each of a plurality of storage devices within the RAID. The RAID controller determines a placement of data and the parity information within at least one of the plurality of storage devices according to at least one of a plurality of factors associated with the current compression ratio. The RAID controller writes the data and the parity information to the at least one of the plurality of storage devices according to the determined placement.
Type: Grant
Filed: December 21, 2020
Date of Patent: April 26, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Roman Alexander Pletka, Sasa Tomic, Timothy Fisher, Nikolaos Papandreou, Nikolas Ioannou, Aaron Fry
-
Patent number: 11314654
Abstract: A mechanism is described for facilitating optimization of cache associated with graphics processors at computing devices. A method of embodiments, as described herein, includes introducing coloring bits to contents of a cache associated with a processor including a graphics processor, wherein the coloring bits represent a signal identifying one or more caches available for use, while avoiding explicit invalidations and flushes.
Type: Grant
Filed: November 19, 2020
Date of Patent: April 26, 2022
Assignee: Intel Corporation
Inventors: Altug Koker, Balaji Vembu, Joydeep Ray, Abhishek R. Appu
-
Patent number: 11314647
Abstract: Methods and systems for managing synonyms in VIPT caches are disclosed. A method includes tracking lines of a copied cache using a directory, examining a specified bit of a virtual address that is associated with a load request and determining its status, and making an entry in one of a plurality of parts of the directory based on the status of the specified bit of the virtual address that is examined. The method further includes updating one of, and invalidating the other of, a cache line that is associated with the virtual address that is stored in a first index of the copied cache, and a cache line that is associated with a synonym of the virtual address that is stored at a second index of the copied cache, upon receiving a request to update a physical address associated with the virtual address.
Type: Grant
Filed: December 23, 2019
Date of Patent: April 26, 2022
Assignee: INTEL CORPORATION
Inventor: Karthikeyan Avudaiyappan
-
Patent number: 11303550
Abstract: Described embodiments provide systems and methods for monitoring server utilization and reallocating resources using upper bound values. A device can determine a value indicative of an upper bound of a processing load of a server using data points detected for the processing load over a first range of time. The upper bound can correspond to a percentage of the processing load during the first range of time. The device can monitor, using the value, the processing load of the server over a second range of time. A determination can be made whether the value of the processing load is greater than a threshold during the second range of time. The device can generate an alert for the device responsive to a comparison of the value of the processing load to the threshold.
Type: Grant
Filed: August 25, 2020
Date of Patent: April 12, 2022
Assignee: Citrix Systems, Inc.
Inventors: Andreas Varnavas, Satyendra Tiwari, Manikam Muthiah, Nikolaos Georgakopoulos
-
Patent number: 11287993
Abstract: Techniques involve: determining corresponding valid metadata rates of a plurality of metadata blocks stored in a metadata storage area of a storage system, the valid metadata rate of each metadata block indicating a ratio of valid metadata in the metadata block to all metadata in the metadata block; selecting a predetermined number of metadata blocks having a valid metadata rate lower than a first valid metadata rate threshold from the plurality of metadata blocks; storing valid metadata in the predetermined number of metadata blocks into at least one metadata block following the plurality of metadata blocks in the metadata storage area; and making the valid metadata in the predetermined number of metadata blocks invalid. Accordingly, such techniques can improve the efficiency of the storage system.
Type: Grant
Filed: January 21, 2020
Date of Patent: March 29, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Shaoqin Gong, Jibing Dong, Hongpo Gao, Jianbin Kang, Baote Zhuo
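The compaction steps in this abstract can be sketched in a few lines of Python. The data model (lists of entries with `None` marking invalid metadata) and all names are illustrative assumptions, not the patent's actual structures:

```python
def compact(blocks, threshold, batch):
    """Select up to `batch` blocks whose valid-metadata rate is below
    `threshold`, copy their valid entries into one new block appended
    after the existing blocks, and invalidate the originals."""
    rates = [(i, sum(e is not None for e in b) / len(b))
             for i, b in enumerate(blocks)]
    victims = [i for i, rate in rates if rate < threshold][:batch]
    moved = [e for i in victims for e in blocks[i] if e is not None]
    blocks.append(moved)                    # the "following" metadata block
    for i in victims:
        blocks[i] = [None] * len(blocks[i])  # make old copies invalid
    return victims
```

Compacting mostly-invalid blocks this way frees whole blocks for reuse while touching only a small fraction of the valid metadata.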
-
Patent number: 11275716
Abstract: An approach is provided for tracking a single process instance flow. A request is received in a first system of a multi-system environment. Log files are pulled from systems with which the request interacts. Log entries are captured for the request. The log files are combined and flattened into a chronological log. A predictive model is built from an order of entries in the chronological log. Correlation keys in the entries of the chronological log are identified. Logs specifying processing of multiple ongoing requests are aggregated. A process instance of interest to a user is received. Instance specific log files are generated by deflattening the aggregated logs and by using a pattern detection algorithm that uses the predictive model and an alternate identifier algorithm that uses the correlation keys. One of the generated instance specific log files specifies a flow of the process instance of interest.
Type: Grant
Filed: May 26, 2020
Date of Patent: March 15, 2022
Assignee: International Business Machines Corporation
Inventors: Zachary A. Silverstein, Kelly Camus, Tiberiu Suto, Andrew R. Jones
-
Patent number: 11243773
Abstract: A computer system includes a store queue that holds store entries and a load queue that holds load entries sleeping on a store entry. A processor detects a store drain merge operation call and generates a pair of store tags comprising a first store tag corresponding to a first store entry to be drained and a second store tag corresponding to a second store entry to be drained. The processor determines whether the pair of store tags is an even-type or an odd-type store tag pair. The processor disables the odd store tag included in the even-type store tag pair when detecting the even-type store tag pair, and wakes up a first load entry dependent on the even store tag and a second load entry dependent on the odd store tag based on the even store tag included in the even-type store tag pair while the odd store tag is disabled.
Type: Grant
Filed: December 14, 2020
Date of Patent: February 8, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Bryan Lloyd, David Campbell, Brian Chen, Robert A. Cordes
-
Patent number: 11237758
Abstract: A method and apparatus of wear leveling control for storage class memory are disclosed. According to the present invention, whether current data to be written to a nonvolatile memory corresponds to an address cache hit is determined. If the current data to be written corresponds to an address cache hit, the current data are written to a designated location in the nonvolatile memory different from a destined location in the nonvolatile memory. If the current data to be written corresponds to an address cache miss, the current data are written to the destined location in the nonvolatile memory. In another embodiment, the wear leveling control technique also includes an address rotation process to achieve long-term wear leveling as well.
Type: Grant
Filed: November 14, 2018
Date of Patent: February 1, 2022
Assignee: Wolley Inc.
Inventor: Chuen-Shen Bernard Shung
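The hit/miss write redirection described here can be sketched as a minimal Python model. The set-based address cache, the dictionary-backed memory, and the sequential log allocator are all simplifying assumptions, not the patent's implementation:

```python
class WearLevelWriter:
    """Sketch of the described control: an address-cache hit (a recently
    written, hence hot, address) redirects the write to a designated
    location instead of its destined location, spreading out wear."""
    def __init__(self, log_base):
        self.memory = {}         # models the nonvolatile memory
        self.addr_cache = set()  # recently written (hot) addresses
        self.log_next = log_base # next free designated location
        self.redirect = {}       # destined address -> designated location

    def write(self, addr, data):
        if addr in self.addr_cache:          # address cache hit
            loc = self.redirect.get(addr)
            if loc is None:
                loc = self.log_next
                self.log_next += 1
                self.redirect[addr] = loc
            self.memory[loc] = data          # write to designated location
        else:                                # address cache miss
            self.memory[addr] = data         # write to destined location
            self.addr_cache.add(addr)

    def read(self, addr):
        return self.memory[self.redirect.get(addr, addr)]
```

Repeated writes to a hot address thus land on fresh cells in the designated area rather than hammering the same destined cell.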
-
Patent number: 11232039
Abstract: Systems, apparatuses, and methods for efficiently performing memory accesses in a computing system are disclosed. A computing system includes one or more clients, a communication fabric and a last-level cache implemented with low latency, high bandwidth memory. The cache controller for the last-level cache determines a range of addresses corresponding to a first region of system memory with a copy of data stored in a second region of the last-level cache. The cache controller sends a selected memory access request to system memory when the cache controller determines a request address of the memory access request is not within the range of addresses. The cache controller services the selected memory request by accessing data from the last-level cache when the cache controller determines the request address is within the range of addresses.
Type: Grant
Filed: December 10, 2018
Date of Patent: January 25, 2022
Assignee: Advanced Micro Devices, Inc.
Inventor: Gabriel H. Loh
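The routing decision this controller makes reduces to a range check, which can be sketched in Python. The function name and half-open range convention are illustrative assumptions:

```python
def route_request(addr, cached_range):
    """Route a memory access request: if the request address falls inside
    the address range whose data is mirrored in the last-level cache,
    service it from the cache; otherwise send it to system memory."""
    lo, hi = cached_range
    return "last_level_cache" if lo <= addr < hi else "system_memory"
```

A single range comparison per request is far cheaper than a full tag lookup, which is what makes the scheme attractive for a large cache built from dense memory.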
-
Patent number: 11188475
Abstract: A technique is provided for managing caches in a cache hierarchy. An apparatus has processing circuitry for performing operations and a plurality of caches for storing data for reference by the processing circuitry when performing the operations. The plurality of caches form a cache hierarchy including a given cache at a given hierarchical level and a further cache at a higher hierarchical level. The given cache is a set associative cache having a plurality of cache ways, and the given cache and the further cache are arranged such that the further cache stores a subset of the data in the given cache. In response to an allocation event causing data for a given memory address to be stored in the further cache, the given cache issues a way indication to the further cache identifying which cache way in the given cache the data for the given memory address is stored in.
Type: Grant
Filed: October 2, 2020
Date of Patent: November 30, 2021
Assignee: Arm Limited
Inventors: Joseph Michael Pusdesris, Balaji Vijayan
-
Patent number: 11176043
Abstract: A method for using a distributed memory device in a memory augmented neural network system includes receiving, by a controller, an input query to access data stored in the distributed memory device, the distributed memory device comprising a plurality of memory banks. The method further includes determining, by the controller, a memory bank selector that identifies a memory bank from the distributed memory device for memory access, wherein the memory bank selector is determined based on a type of workload associated with the input query. The method further includes computing, by the controller and by using content based access, a memory address in the identified memory bank. The method further includes generating, by the controller, an output in response to the input query by accessing the memory address.
Type: Grant
Filed: April 2, 2020
Date of Patent: November 16, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Ahmet Serkan Ozcan, Tomasz Kornuta, Carl Radens, Nicolas Antoine
-
Patent number: 11163691
Abstract: Examples of the present disclosure relate to an apparatus comprising processing circuitry to perform data processing operations, storage circuitry to store data for access by the processing circuitry, address translation circuitry to maintain address translation data for translating virtual memory addresses into corresponding physical memory addresses, and prefetch circuitry. The prefetch circuitry is arranged to prefetch first data into the storage circuitry in anticipation of the first data being required for performing the data processing operations. The prefetching comprises, based on a prediction scheme, predicting a first virtual memory address associated with the first data, accessing the address translation circuitry to determine a first physical memory address corresponding to the first virtual memory address, and retrieving the first data based on the first physical memory address corresponding to the first virtual memory address.
Type: Grant
Filed: June 25, 2019
Date of Patent: November 2, 2021
Assignee: ARM LIMITED
Inventors: Stefano Ghiggini, Natalya Bondarenko, Damien Guillaume Pierre Payet, Lucas Garcia
-
Patent number: 11113006
Abstract: A memory sub-system configured to dynamically generate a media layout to avoid media access collisions in concurrent streams. The memory sub-system can identify a plurality of media units that are available to write data concurrently, select commands from the plurality of streams for concurrent execution in the available media units, generate and store a portion of a media layout dynamically in response to the commands being selected for concurrent execution in the plurality of media units, and execute the selected commands concurrently by storing data into the media units according to physical addresses to which logical addresses used in the selected commands are mapped in the dynamically generated portion of the media layout.
Type: Grant
Filed: May 1, 2020
Date of Patent: September 7, 2021
Assignee: Micron Technology, Inc.
Inventor: Sanjay Subbarao
-
Patent number: 11086782
Abstract: A method of logging process data in a PLC controlled equipment is disclosed. Sections of a PLC application code comprise tasks configured to execute program functions at specific execution rates. Each of the tasks comprise program functions having dedicated memory areas assigned as tags, and each data entry of the process data comprises a tag value and an associated process value. The method comprises receiving process data from the PLC application code, assigning process values to threads of a thread pool, and receiving, for each of the threads, in a respective data table associated with each of the threads, the tag- and process values of the received process data, and determining a hash code for each of the tags according to a hash function of the respective data table for arranging the tag values and the associated process values in the respective data table according to said hash code.
Type: Grant
Filed: June 18, 2018
Date of Patent: August 10, 2021
Assignee: TETRA LAVAL HOLDINGS & FINANCE S.A.
Inventors: Lukas Karlsson, Ashraf Zarur
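The hash-arranged tag table at the core of this method can be sketched in Python. Python's built-in `hash()` stands in for the data table's hash function, and the bucket count is an arbitrary assumption:

```python
def build_tag_table(entries, buckets=8):
    """Arrange (tag, process_value) pairs into a table of `buckets`
    rows, with each pair placed in the row given by a hash code
    computed from its tag."""
    table = [[] for _ in range(buckets)]
    for tag, value in entries:
        code = hash(tag) % buckets       # hash code for the tag
        table[code].append((tag, value))
    return table

def lookup(table, tag):
    """Find a tag's process value by recomputing its hash code."""
    for t, v in table[hash(tag) % len(table)]:
        if t == tag:
            return v
    return None
```

Arranging entries by hash code keeps lookups near constant time regardless of how many tags a logging thread accumulates.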
-
Patent number: 11074183
Abstract: A method and apparatus for read wearing control for storage class memory (SCM) are disclosed. The read data control apparatus, located between a host and the SCM subsystem, comprises a read data cache, an address cache and an SCM controller. The address cache stores pointers pointing to data stored in logging area(s) located in the SCM. For a read request, the read wearing control determines whether the read request is a read data cache hit, an address cache hit or neither (i.e., read data cache miss and address cache miss). For the read data cache hit, the requested data is returned from the read data cache. For the address cache hit, the requested data is returned from the logging area(s) and the read data becomes a candidate to be placed in the read data cache. For read data cache and address cache misses, the requested data is returned from SCM.
Type: Grant
Filed: December 28, 2019
Date of Patent: July 27, 2021
Assignee: Wolley Inc.
Inventors: Yu-Ming Chang, Tai-Chun Kuo, Chuen-Shen Bernard Shung
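The three-way read path in this abstract maps naturally onto a small Python sketch. The dictionary-based caches and the promotion-on-hit policy are simplifying assumptions, not the patent's hardware design:

```python
def scm_read(addr, data_cache, addr_cache, logging_area, scm):
    """Read path from the abstract: data-cache hit returns cached data;
    address-cache hit follows the stored pointer into the SCM logging
    area (and the data becomes a candidate for the data cache);
    otherwise the data comes straight from SCM."""
    if addr in data_cache:                    # read data cache hit
        return data_cache[addr], "data_cache"
    if addr in addr_cache:                    # address cache hit
        data = logging_area[addr_cache[addr]]
        data_cache[addr] = data               # candidate for the data cache
        return data, "logging_area"
    return scm[addr], "scm"                   # both caches missed
```

Frequently read addresses thus migrate toward the read data cache, shielding the SCM cells from repeated read disturbance.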
-
Patent number: 11064170
Abstract: A laser projection device, including at least one laser diode for generating at least one laser beam. The laser projection device includes at least one laser driver for operating the at least one laser diode; at least one image processing circuit to supply control data for the at least one laser driver; at least one central processor and/or driver; and at least one control and/or regulating unit, which is configured to block the at least one laser driver in response to an occurrence of at least one fault. The at least one control and/or regulating unit includes at least one nonvolatile memory, in which at least a minimum set of monitoring functions is stored.
Type: Grant
Filed: September 27, 2018
Date of Patent: July 13, 2021
Assignee: Robert Bosch GmbH
Inventors: Alexander Ehlert, Christoph Puttmann, Christoph Delfs
-
Patent number: 11048637
Abstract: A high-frequency and low-power L1 cache and associated access technique. The method may include inspecting a virtual address of an L1 data cache load instruction, and indexing into a row and a column of a way predictor table using metadata and a virtual address associated with the load instruction. The method may include matching information stored at the row and the column of the way predictor table to a location of a cache line. The method may include predicting the location of the cache line within the L1 data cache based on the information match. A hierarchy of way predictor tables may be used, with higher level way predictor tables refreshing smaller lower level way predictor tables. The way predictor tables may be trained to make better predictions over time. Only selected circuit macros need to be enabled based on the predictions, thereby saving power.
Type: Grant
Filed: August 21, 2019
Date of Patent: June 29, 2021
Inventor: Karthik Sundaram
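The way predictor table indexed by virtual-address bits and metadata can be sketched in Python. The table dimensions, the modulo indexing, and the train-on-miss policy are illustrative assumptions, not the patent's design:

```python
class WayPredictor:
    """Sketch of a way-predictor table: the row comes from bits of the
    virtual address, the column from metadata. A correct prediction lets
    the cache enable only one way's circuit macros; an unknown entry
    falls back to probing every way and then training the table."""
    def __init__(self, rows=16, cols=4):
        self.rows, self.cols = rows, cols
        self.table = [[None] * cols for _ in range(rows)]

    def _index(self, vaddr, metadata):
        return vaddr % self.rows, metadata % self.cols

    def predict(self, vaddr, metadata):
        r, c = self._index(vaddr, metadata)
        return self.table[r][c]          # predicted way, or None

    def train(self, vaddr, metadata, way):
        r, c = self._index(vaddr, metadata)
        self.table[r][c] = way           # remember the way actually hit
```

The power saving comes from the enable decision: when `predict` returns a way, only that way's data array needs to be clocked for the access.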
-
Patent number: 11037630
Abstract: Devices and techniques for NAND temperature data management are disclosed herein. A command to write data to a NAND component in the NAND device is received at a NAND controller of the NAND device. A temperature corresponding to the NAND component is obtained in response to receiving the command. The command is then executed to write data to the NAND component and to write a representation of the temperature. The data is written to a user portion and the representation of the temperature is written to a management portion that is accessible only to the controller and segregated from the user portion.
Type: Grant
Filed: April 23, 2020
Date of Patent: June 15, 2021
Assignee: Micron Technology, Inc.
Inventors: Kishore Kumar Muchherla, Sampath Ratnam, Preston Allen Thomson, Harish Reddy Singidi, Jung Sheng Hoei, Peter Sean Feeley, Jianmin Huang
-
Patent number: 10977187
Abstract: A method for performing access management in a memory device, the associated memory device and the controller thereof, and the associated electronic device are provided. The method may include: receiving a host command and a logical address from a host device; performing at least one checking operation to obtain at least one checking result, for determining whether to load a logical-to-physical (L2P) table from the NV memory to a random access memory (RAM) of the memory device, wherein the L2P table includes address mapping information for accessing the target data, and performing the at least one checking operation to obtain at least one checking result includes checking whether a first L2P-table index pointing toward the L2P table and a second L2P-table index sent from the host device are equivalent to each other; and reading the target data from the NV memory, and sending the target data to the host device.
Type: Grant
Filed: January 2, 2018
Date of Patent: April 13, 2021
Assignee: Silicon Motion, Inc.
Inventors: Jie-Hao Lee, Cheng-Yu Yu
-
Patent number: 10910079
Abstract: A programming device (110) arranged to obtain and store a random bit string in a memory device (100), the memory device (100) comprising multiple one-time programmable memory cells (122), a memory cell having a programmed state and a not-programmed state, the memory cell being one-time programmable by changing the state from the not-programmed state to the programmed state through application of an electric programming energy to the memory cell.
Type: Grant
Filed: April 28, 2017
Date of Patent: February 2, 2021
Assignee: INTRINSIC ID B.V.
Inventors: Pim Theo Tuyls, Geert Jan Schrijen, Vincent Van Der Leest
-
Patent number: 10902129
Abstract: A method, an apparatus, and a storage medium for detecting vulnerabilities in software to protect a computer system from security and compliance breaches are provided. The method includes providing a ruleset code declaring programming interfaces of a target framework and including rules that define an admissible execution context when invoking the programming interfaces, providing a source code to be scanned for vulnerabilities; compiling the source code into a first execution code having additional instructions inserted to facilitate tracking of an actual execution context of the source code, compiling the ruleset code into a second execution code that can be executed together with the first execution code, executing the first execution code within a virtual machine and passing calls of the programming interfaces to the second execution code, and detecting a software vulnerability when the actual execution context disagrees with the admissible execution context.
Type: Grant
Filed: December 7, 2017
Date of Patent: January 26, 2021
Assignee: Virtual Forge GmbH
Inventors: Hans-Christian Esperer, Yun Ding, Thomas Kastner, Markus Schumacher
-
Patent number: 10846232
Abstract: A mechanism is described for facilitating optimization of cache associated with graphics processors at computing devices. A method of embodiments, as described herein, includes introducing coloring bits to contents of a cache associated with a processor including a graphics processor, wherein the coloring bits represent a signal identifying one or more caches available for use, while avoiding explicit invalidations and flushes.
Type: Grant
Filed: November 12, 2019
Date of Patent: November 24, 2020
Assignee: INTEL CORPORATION
Inventors: Altug Koker, Balaji Vembu, Joydeep Ray, Abhishek R. Appu
-
Patent number: 10797766
Abstract: Systems, methods, computer program products, and devices reduce computational processing performed by at least one computer processor that computes an eigensystem from a first data set; computes updated eigenvalues that approximate an eigensystem of at least a second data set based on the eigensystem of the first data set; and evaluates a plurality of features in each of the first and at least second data sets using a cost function; wherein reducing the computational processing of the at least one computer processor is achieved by at least one of selecting the cost function to comprise fewer than the total number of eigenvalues and employing a coarse approximation of the eigenvalues to de-select at least one of the data sets. This is especially useful for learning and/or online processing in an artificial neural network.
Type: Grant
Filed: February 2, 2020
Date of Patent: October 6, 2020
Assignee: Genghiscomm Holdings, LLC
Inventor: Steve Shattil
-
Patent number: 10768858
Abstract: According to one embodiment, a memory system receives, from a host, a write request including a first identifier associated with one write destination block and storage location information indicating a location in a write buffer on a memory of the host in which first data to be written is stored. When the first data is to be written to a nonvolatile memory, the memory system obtains the first data from the write buffer by transmitting a transfer request including the storage location information to the host, transfers the first data to the nonvolatile memory, and writes the first data to the one write destination block.
Type: Grant
Filed: June 11, 2018
Date of Patent: September 8, 2020
Assignee: Toshiba Memory Corporation
Inventors: Shinichi Kanno, Hideki Yoshida
-
Patent number: 10769013
Abstract: Various embodiments provide for caching of error checking data for memory having inline storage configurations for primary data and error checking data for the primary data. In particular, various embodiments described herein provide for error checking data caching and cancellation of error checking data read commands for memory having inline storage configurations for primary data and associated error checking data. Additionally, various embodiments described herein provide for combining/canceling of error checking data write commands for memory having inline storage configurations for primary data and associated error checking data.
Type: Grant
Filed: June 11, 2018
Date of Patent: September 8, 2020
Assignee: Cadence Design Systems, Inc.
Inventors: John M. MacLaren, Landon Laws, Carl Nels Olson, Thomas J. Shepherd
-
Patent number: 10754783
Abstract: Examples include techniques to manage cache resource allocations associated with one or more cache class of service (CLOS) assignments for a processor cache. Examples include flushing portions of an allocated cache resource responsive to reassignments of CLOS.
Type: Grant
Filed: June 29, 2018
Date of Patent: August 25, 2020
Assignee: Intel Corporation
Inventors: Tomasz Kantecki, John Browne, Chris Macnamara, Timothy Verrall, Marcel Cornu, Eoin Walsh, Andrew J. Herdrich
-
Patent number: 10642685
Abstract: A cache memory has cache memory circuitry comprising a nonvolatile memory cell to store at least a portion of a data which is stored or is to be stored in a lower-level memory than the cache memory circuitry, a first redundancy code storage comprising a nonvolatile memory cell capable of storing a redundancy code of the data stored in the cache memory circuitry, and a second redundancy code storage comprising a volatile memory cell capable of storing the redundancy code.
Type: Grant
Filed: September 12, 2016
Date of Patent: May 5, 2020
Assignee: Kioxia Corporation
Inventors: Kazutaka Ikegami, Shinobu Fujita, Hiroki Noguchi
-
Patent number: 10628323
Abstract: An operating method for a data storage device includes providing a nonvolatile memory device including a plurality of pages; segmenting an address map which maps a logical address provided from a host device and a physical address of the nonvolatile memory device, by a plurality of address map segments according to a segment size that is set depending on a quality of service time allowed to process a request of the host device and an unprocessed workload; and flushing at least one of the address map segments in the nonvolatile memory device after processing the unprocessed workload.
Type: Grant
Filed: December 1, 2017
Date of Patent: April 21, 2020
Assignee: SK hynix Inc.
Inventors: Min Hwan Moon, Duck Hoi Koo, Soong Sun Shin, Ji Hoon Lee
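The map-segmentation step can be sketched in Python. The cost model below (segment size chosen so that flushing one segment plus the pending backlog fits the QoS budget) is an illustrative assumption; the patent does not publish its actual formula:

```python
def segment_address_map(address_map, qos_time, per_entry_cost, backlog):
    """Split a logical-to-physical address map into segments small
    enough that flushing one segment (entries * per_entry_cost), on
    top of the unprocessed backlog, stays within the quality-of-service
    time allowed for a host request."""
    budget = max(qos_time - backlog, per_entry_cost)
    seg_size = max(int(budget // per_entry_cost), 1)
    items = sorted(address_map.items())
    return [dict(items[i:i + seg_size])
            for i in range(0, len(items), seg_size)]
```

Smaller segments mean each flush blocks host requests for a bounded time, at the cost of more flush operations overall.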
-
Patent number: 10620958
Abstract: Systems, apparatuses, and methods for efficiently reducing power consumption in a crossbar of a computing system are disclosed. A data transfer crossbar uses a first interface for receiving data fetched from a data storage device that is partitioned into multiple banks. The crossbar uses a second interface for sending data fetched from the multiple banks to multiple compute units. Logic in the crossbar selects data from a most recent fetch operation for a given compute unit when the logic determines the given compute unit is an inactive compute unit for which no data is being fetched. The logic sends via the second interface the selected data for the given compute unit. Therefore, when the given compute unit is inactive, the data lines for the fetched data do not transition for each inactive clock cycle after the most recent active clock cycle.
Type: Grant
Filed: December 3, 2018
Date of Patent: April 14, 2020
Assignee: Advanced Micro Devices, Inc.
Inventor: Xianwen Cheng
-
Patent number: 10613796
Abstract: According to one embodiment, a memory system receives from a host a first write request including a first block identifier designating a first write destination block to which first write data is to be written. The memory system acquires the first write data from a write buffer temporarily holding write data corresponding to each of the write requests, and writes the first write data to a write destination page in the first write destination block. The memory system releases a region in the write buffer storing data which is made readable from the first write destination block by writing the first write data to the write destination page. The data made readable is the data of a page in the first write destination block preceding the write destination page.
Type: Grant
Filed: September 10, 2018
Date of Patent: April 7, 2020
Assignee: Toshiba Memory Corporation
Inventors: Shinichi Kanno, Hideki Yoshida, Naoki Esaka
-
Patent number: 10599333Abstract: A storage device includes a nonvolatile semiconductor memory device, and a controller configured to access the nonvolatile semiconductor memory device. When the controller receives a write command including a logical address, the controller determines a physical location of the memory device in which data are written and stores a mapping from the logical address to the physical location. When the controller receives a write command without a logical address, the controller determines a physical location of the memory device in which data are written and returns the physical location.Type: GrantFiled: August 31, 2016Date of Patent: March 24, 2020Assignee: TOSHIBA MEMORY CORPORATIONInventor: Daisuke Hashimoto
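The two write paths can be sketched as follows; the sequential physical-location allocation policy is an assumption made purely for illustration:

```python
class Controller:
    """Sketch of the dual write interface: with a logical address the
    controller records the mapping itself; without one, it reports the
    chosen physical location back to the host."""

    def __init__(self):
        self.mapping = {}    # logical address -> physical location
        self.next_phys = 0   # assumed sequential allocator

    def write(self, data, lba=None):
        phys = self.next_phys
        self.next_phys += 1
        if lba is not None:
            self.mapping[lba] = phys  # conventional, controller-managed write
            return None
        return phys                   # host-managed write: return the location
```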
-
Patent number: 10514855Abstract: A memory access request including an address is received from a memory controller of an application server. One of a plurality of paths to the NVRAM is selected based on the address from the memory access request.Type: GrantFiled: December 19, 2012Date of Patent: December 24, 2019Assignee: Hewlett Packard Enterprise Development LPInventor: Douglas L Voigt
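One plausible reading of address-based path selection is to derive the path from address bits, so accesses to the same region consistently take the same path. The 4 KiB interleave granularity below is purely an assumption:

```python
def select_path(address, num_paths):
    """Sketch: pick one of num_paths to the NVRAM from the address,
    interleaving on assumed 4 KiB regions (bits above bit 12)."""
    return (address >> 12) % num_paths
```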
-
Patent number: 10503502Abstract: A processor includes a decode unit to decode an instruction indicating a source packed data operand having source data elements and indicating a destination storage location. Each of the source data elements has a source data element value and a source data element position. An execution unit, in response to the instruction, stores a result packed data operand having result data elements each having a result data element value and a result data element position. Each result data element value is one of: (1) equal to a source data element position of a source data element, closest to one end of the source operand, having a source data element value equal to the result data element position of the result data element; and (2) a replacement value, when no source data element has a source data element value equal to the result data element position of the result data element.Type: GrantFiled: September 25, 2015Date of Patent: December 10, 2019Assignee: INTEL CORPORATIONInventors: Christopher J. Hughes, Jong Soo Park
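The described result rule amounts to an inverse index lookup with a replacement value: each result position receives the *position* of the source element whose *value* equals it. A scalar sketch, assuming "closest to one end" means closest to element 0:

```python
def inverse_index(src, replacement):
    """For each result position p, return the lowest source position i with
    src[i] == p, or `replacement` if no source element has value p."""
    result = []
    for p in range(len(src)):
        for i, v in enumerate(src):   # scan from the low end
            if v == p:
                result.append(i)
                break
        else:                         # no matching source element found
            result.append(replacement)
    return result
```

For example, with source `[2, 0, 1, 0]` the value 0 first appears at position 1, value 1 at position 2, value 2 at position 0, and no element holds the value 3.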
-
Patent number: 10452397Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to determine a first number of threads to be scheduled for each context of a plurality of contexts in a multi-context processing system, allocate a second number of streaming multiprocessors (SMs) to the respective plurality of contexts, and dispatch threads from the plurality of contexts only to the streaming multiprocessor(s) allocated to the respective plurality of contexts. Other embodiments are also disclosed and claimed.Type: GrantFiled: April 1, 2017Date of Patent: October 22, 2019Assignee: INTEL CORPORATIONInventors: Joydeep Ray, Altug Koker, Balaji Vembu, Abhishek R. Appu, Kamal Sinha, Prasoonkumar Surti, Kiran C. Veernapu
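A sketch of the allocation step, assuming SMs are divided proportionally to each context's scheduled thread count (the actual allocation policy is not specified in the abstract); once allocated, threads from a context are dispatched only to its own SMs, keeping the other contexts' caches warm:

```python
def allocate_sms(contexts, num_sms):
    """contexts: dict {context: thread_count}. Returns {context: [SM ids]},
    splitting SMs roughly in proportion to thread counts (assumed policy)."""
    total = sum(contexts.values())
    alloc, next_sm = {}, 0
    items = list(contexts.items())
    for i, (ctx, threads) in enumerate(items):
        if i == len(items) - 1:
            count = num_sms - next_sm              # last context takes the rest
        else:
            count = max(1, num_sms * threads // total)
        alloc[ctx] = list(range(next_sm, next_sm + count))
        next_sm += count
    return alloc
```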
-
Patent number: 10423539Abstract: What is provided is an enhanced dynamic address translation facility. In one embodiment, a virtual address to be translated and an initial origin address of a translation table of the hierarchy of translation tables are obtained. Based on the origin address, a segment table entry is obtained which contains a format control field and an access validity field. If the format control and access validity are enabled, the segment table entry further contains an access control and fetch protection fields, and a segment-frame absolute address. Store operations to the block of data are permitted only if the access control field matches a program access key provided by either a Program Status Word or an operand of a program instruction being executed. Fetch operations from the desired block of data are permitted only if the program access key associated with the virtual address is equal to the segment access control field.Type: GrantFiled: December 4, 2017Date of Patent: September 24, 2019Assignee: International Business Machines CorporationInventors: Dan F. Greiner, Charles W. Gainey, Jr., Lisa C. Heller, Damian L. Osisek, Erwin Pfeffer, Timothy J. Slegel, Charles F. Webb
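The permission rule can be sketched as a predicate over a segment-table entry; the field names and the exact role of the fetch-protection bit below are illustrative assumptions, not the patent's wording:

```python
def check_access(entry, program_key, is_store):
    """Sketch of the key check: stores require the segment access-control
    field to match the program key; fetches require it only when the
    (assumed) fetch-protection bit is set."""
    if not (entry["format_control"] and entry["access_valid"]):
        return False  # control fields are only present when both are enabled
    if is_store:
        return entry["access_control"] == program_key
    if entry["fetch_protect"]:
        return entry["access_control"] == program_key
    return True  # unprotected fetch
```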
-
Patent number: 10354732Abstract: Devices and techniques for NAND temperature data management are disclosed herein. A command to write data to a NAND component in the NAND device is received at a NAND controller of the NAND device. A temperature corresponding to the NAND component is obtained in response to receiving the command. The command is then executed to write data to the NAND component and to write a representation of the temperature. The data is written to a user portion and the representation of the temperature is written to a management portion that is accessible only to the controller and segregated from the user portion.Type: GrantFiled: August 30, 2017Date of Patent: July 16, 2019Assignee: Micron Technology, Inc.Inventors: Kishore Kumar Muchherla, Sampath Ratnam, Preston Thomson, Harish Singidi, Jung Sheng Hoei, Peter Sean Feeley, Jianmin Huang
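A minimal model of the write path, with `read_temperature()` standing in for the device's sensor and the management portion modeled as a controller-private map segregated from the user portion; both names are illustrative:

```python
class Nand:
    """Sketch: each write records the die temperature alongside the data,
    keeping the temperature in a controller-only management area."""

    def __init__(self, read_temperature):
        self.user = {}                         # host-visible user portion
        self._management = {}                  # controller-private portion
        self.read_temperature = read_temperature

    def write(self, page, data):
        """Execute the write command: store data, then store the
        temperature sampled when the command was received."""
        temp = self.read_temperature()
        self.user[page] = data
        self._management[page] = temp
```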
-
Patent number: 10346162Abstract: An approach for replacement of instructions in an assembly language program includes computers receiving an assembly language program and user selections of one or more classes of instructions. The approach includes computers reading a statement in the program and selecting a class of instructions from the user selections. The approach includes computers selecting a first group of instructions in the selected class and determining that the statement is an instruction in the first group of instructions. The approach includes computers reading a number of statements that match a number of instructions in the first group of instructions including the statement and replacing the first group of instructions with a group of replacement instructions when the read number of statements match the number of instructions in the first group of instructions. Furthermore, the approach includes computers sending the group of replacement instructions to output to update the assembly language program.Type: GrantFiled: November 17, 2015Date of Patent: July 9, 2019Assignee: International Business Machines CorporationInventors: John R. Dravnieks, John R. Ehrman, Dan F. Greiner
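The matching-and-replacement pass can be sketched as a scan over the statement list; treating statements and instructions as plain strings, and replacing only complete group matches, is a simplification of the class/group selection the abstract describes:

```python
def replace_groups(statements, group, replacement):
    """Scan the program; wherever a run of statements matches the whole
    instruction group, substitute the replacement group. Partial matches
    are left untouched."""
    out, i = [], 0
    while i < len(statements):
        if statements[i:i + len(group)] == group:
            out.extend(replacement)   # all instructions in the group matched
            i += len(group)
        else:
            out.append(statements[i])
            i += 1
    return out
```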
-
Patent number: 10338928Abstract: A processor, method, and medium for implementing a call return stack within a pipelined processor. A stack head register is used to store a copy of the top entry of the call return stack, and the stack head register is accessed by the instruction fetch unit on each fetch cycle. If a fetched instruction is decoded as a return instruction, the speculatively read address from the stack head register is utilized as a target address to fetch subsequent instructions and the address at the second entry from the top of the call return stack is written to the stack head register.Type: GrantFiled: May 20, 2011Date of Patent: July 2, 2019Assignee: Oracle International CorporationInventors: Manish K. Shah, Zeid H. Samoail
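A behavioral sketch of the head-register scheme: the register shadows the top of the return stack so the fetch unit can read the return target every cycle, and on a return the next entry is promoted into the register. Everything beyond that promotion rule is illustrative:

```python
class CallReturnStack:
    """Sketch: `head` mirrors the top entry; `stack` holds the entries
    beneath it."""

    def __init__(self):
        self.stack = []
        self.head = None

    def call(self, return_addr):
        """Push: the old head sinks into the stack, the new return address
        becomes the head-register copy."""
        if self.head is not None:
            self.stack.append(self.head)
        self.head = return_addr

    def ret(self):
        """Return: use the speculatively read head as the fetch target,
        then promote the second entry into the head register."""
        target = self.head
        self.head = self.stack.pop() if self.stack else None
        return target
```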
-
Patent number: 10282292Abstract: Cluster manager functional blocks perform operations for migrating pages in portions in corresponding migration clusters. During operation, each cluster manager keeps an access record that includes information indicating accesses of pages in the portions in the corresponding migration cluster. Based on the access record and one or more migration policies, each cluster manager migrates pages between the portions in the corresponding migration cluster.Type: GrantFiled: October 17, 2016Date of Patent: May 7, 2019Assignee: ADVANCED MICRO DEVICES, INC.Inventors: Andreas Prodromou, Mitesh R. Meswani, Arkaprava Basu, Nuwan S. Jayasena, Gabriel H. Loh
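One cluster manager's record-and-migrate loop might look like the following; the hotness threshold and the two-portion ("fast"/"slow") layout are assumptions standing in for the abstract's unspecified migration policies:

```python
class ClusterManager:
    """Sketch: keep an access record per page, then on a migration decision
    move pages that crossed the (assumed) hotness threshold into the fast
    portion of the cluster."""

    def __init__(self, threshold):
        self.counts = {}     # access record: page -> accesses this epoch
        self.location = {}   # page -> "fast" | "slow"
        self.threshold = threshold

    def access(self, page):
        self.counts[page] = self.counts.get(page, 0) + 1
        self.location.setdefault(page, "slow")

    def migrate(self):
        """Apply the policy to the access record, then start a new epoch."""
        for page, count in self.counts.items():
            if count >= self.threshold and self.location[page] == "slow":
                self.location[page] = "fast"
        self.counts = {page: 0 for page in self.counts}
```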
-
Patent number: 10228918Abstract: Processor hardware detects when memory aliasing occurs, and assures proper operation of the code even in the presence of memory aliasing. Because the hardware can detect and correct for memory aliasing, this allows a compiler to make optimizations such as register promotion even in regions of the code where memory aliasing can occur. The result is code that is more optimized and therefore runs faster.Type: GrantFiled: November 29, 2017Date of Patent: March 12, 2019Assignee: International Business Machines CorporationInventors: Srinivasan Ramani, Rohit Taneja
-
Patent number: 10209926Abstract: When the storage system according to the present invention receives a request for writing new data to a first logical volume after having received a first pair creating request, the storage system stores the new data in a cache memory. Then when the storage system subsequently receives a second pair creating request, even if the cache memory still has stored therein the data identical to the data that was stored on the first logical volume at the point in time when the storage system received the first pair creating request, and even if this identical data has not yet been copied to the second logical volume, the storage system omits copying this identical data to the second logical volume.Type: GrantFiled: November 18, 2014Date of Patent: February 19, 2019Assignee: Hitachi, Ltd.Inventors: Masamitsu Takahashi, Yutaka Takata
-
Patent number: 10176118Abstract: A method includes storing a first block of main memory in a cache line of a direct-mapped cache, storing a first tag in a current tag field of the cache line, wherein the first tag identifies a first memory address for the first block of main memory, and storing a second tag in a previous miss tag field of the cache line in response to receiving a memory reference having a tag that does not match the tag stored in the current tag field. The second tag identifies a second memory address for a second block of main memory, and the first and second blocks are both mapped to the cache line. The method may further include storing a binary value in a last reference bit field to indicate whether the most recently received memory reference was directed to the current tag field or previous miss tag field.Type: GrantFiled: March 31, 2016Date of Patent: January 8, 2019Assignee: Lenovo Enterprise Solutions (Singapore) Pte. Ltd.Inventor: Daniel J. Colglazier
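This maps naturally to a small model of a cache line carrying the extra fields. The promote-on-repeated-miss policy below is an assumed *use* of the previous-miss tag, not stated in the abstract; the fields themselves follow the description:

```python
class Line:
    """One direct-mapped line with a current tag, a previous-miss tag, and
    a last-reference bit (True: last reference matched the current tag)."""
    def __init__(self):
        self.current_tag = None
        self.prev_miss_tag = None
        self.last_ref_current = True

class Cache:
    def __init__(self, num_lines):
        self.lines = [Line() for _ in range(num_lines)]

    def reference(self, address):
        tag, index = divmod(address, len(self.lines))
        line = self.lines[index]
        if tag == line.current_tag:
            line.last_ref_current = True
            return True                        # hit on the current tag
        if tag == line.prev_miss_tag:
            # assumed policy: a repeated miss on the same tag promotes it
            line.current_tag, line.prev_miss_tag = tag, line.current_tag
            line.last_ref_current = True
            return False
        line.prev_miss_tag = tag               # remember the missing tag
        line.last_ref_current = False
        return False
```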
-
Patent number: 10152451Abstract: Methods and apparatus are disclosed using an index array and finite state machine for scatter/gather operations. Embodiments of the apparatus may comprise: decode logic to decode scatter/gather instructions and generate micro-operations. An index array holds a set of indices and a corresponding set of mask elements. A finite state machine facilitates the scatter operation. Address generation logic generates an address from an index of the set of indices for at least each of the corresponding mask elements having a first value. Storage is allocated in a buffer for each of the set of addresses being generated. Data elements corresponding to the set of addresses being generated are copied to the buffer. Addresses from the set are accessed to store data elements if a corresponding mask element has said first value and the mask element is changed to a second value responsive to completion of their respective stores.Type: GrantFiled: April 18, 2017Date of Patent: December 11, 2018Assignee: Intel CorporationInventors: Zeev Sperber, Robert Valentine, Shlomo Raikin, Stanislav Shwartsman, Gal Ofir, Igor Yanover, Guy Patkin, Ofer Levy
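The mask-driven completion tracking can be sketched in scalar form: each completed store clears its mask element, so a scatter interrupted by a fault can resume without repeating finished stores. Memory is modeled as a dict for illustration:

```python
def scatter(memory, base, indices, mask, data):
    """Masked scatter sketch: store data[i] at base + indices[i] for each
    set mask element, clearing the element when its store completes."""
    for i in range(len(indices)):
        if mask[i]:
            memory[base + indices[i]] = data[i]
            mask[i] = 0   # completion flips the mask element to its second value
    return memory, mask
```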
-
Patent number: 10114651Abstract: According to a first aspect, efficient data transfer operations can be achieved by: decoding, by a processor device, a single instruction specifying a transfer operation for a plurality of data elements between a first storage location and a second storage location; issuing the single instruction for execution by an execution unit in the processor; detecting an occurrence of an exception during execution of the single instruction; and in response to the exception, delivering pending traps or interrupts to an exception handler prior to delivering the exception.Type: GrantFiled: January 4, 2018Date of Patent: October 30, 2018Assignee: Intel CorporationInventors: Christopher J. Hughes, Yen-Kuang (Y. K.) Chen, Mayank Bomb, Jason W. Brandt, Mark J. Buxton, Mark J. Charney, Srinivas Chennupaty, Jesus Corbal, Martin G. Dixon, Milind B. Girkar, Jonathan C. Hall, Hideki (Saito) Ido, Peter Lachner, Gilbert Neiger, Chris J. Newburn, Rajesh S. Parthasarathy, Bret L. Toll, Robert Valentine, Jeffrey G. Wiedemeier
-
Patent number: 10091282Abstract: The disclosure generally describes computer-implemented methods, computer program products, and systems for providing metadata-driven dynamic load balancing in multi-tenant systems. A computer-implemented method includes: identifying a request related to a model-based application executing in a multi-tenant system associated with a plurality of application servers and identifying at least one object in the model-based application associated with the request. At least one application server is identified as associated with a locally-cached version of a runtime version of the identified object, and a determination of a particular one of the identified application servers to send the identified request for processing is based on a combination of the availability of a locally-cached version of the runtime version at the particular application server and the server's processing load. The request is then sent to the determined application server for processing.Type: GrantFiled: June 12, 2013Date of Patent: October 2, 2018Assignee: SAP SEInventors: Bare Said, Frank Jentsch, Frank Brunswig
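The dispatch decision can be sketched as a two-step filter: prefer servers that already hold a cached runtime version of the requested object, then pick the least loaded among them, falling back to the globally least loaded server. Server records are reduced to illustrative dicts:

```python
def choose_server(servers, object_id):
    """Pick the application server for a request, combining cache locality
    for the object's runtime version with current processing load."""
    cached = [s for s in servers if object_id in s["cache"]]
    candidates = cached or servers          # fall back when nobody caches it
    return min(candidates, key=lambda s: s["load"])
```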