Multiport Cache Patents (Class 711/131)
  • Patent number: 11960403
    Abstract: System and techniques for variable execution time atomic operations are described herein. When an atomic operation for a memory device is received, the run length of the operation is measured. If the run length is beyond a threshold, a cache line for the operation is locked while the operation runs. A result of the operation is queued until it can be written to the cache line. At that point, the cache line is unlocked.
    Type: Grant
    Filed: August 30, 2022
    Date of Patent: April 16, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Dean E. Walker, Tony M. Brewer
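    The locking scheme in the abstract above can be sketched as follows. The threshold value, class names, and one-slot result queue are illustrative assumptions, not taken from the patent:

    ```python
    # Hypothetical sketch: if a measured run length exceeds a threshold, the
    # cache line is locked while the atomic operation runs, and the result is
    # queued until it can be written back, at which point the line is unlocked.

    RUN_LENGTH_THRESHOLD = 4  # illustrative cycle count, not from the patent

    class CacheLine:
        def __init__(self, value=0):
            self.value = value
            self.locked = False

    def execute_atomic(line, operation, run_length):
        """Run an atomic operation; lock the line only for long-running ops."""
        long_running = run_length > RUN_LENGTH_THRESHOLD
        if long_running:
            line.locked = True          # block other accesses while the op runs
        result = operation(line.value)  # the operation runs to completion
        result_queue = [result]         # queued until it can be written back
        line.value = result_queue.pop(0)
        if long_running:
            line.locked = False         # unlock once the result is written
        return line.value
    ```

    For example, `execute_atomic(CacheLine(10), lambda v: v + 1, run_length=8)` models a long-running atomic increment that locks the line for its duration.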
  • Patent number: 11960638
    Abstract: In one example, a mobile device comprises: a physical link; a plurality of image sensors, each image sensor being configured to transmit image data via the physical link; and a controller coupled to the physical link, whereby the physical link, the plurality of image sensors, and the controller form a multi-drop network. The controller is configured to: transmit a control signal to configure image sensing operations at the plurality of image sensors; receive, via the physical link, image data from at least a subset of the plurality of image sensors; combine the image data from the at least a subset of the plurality of image sensors to obtain an extended field of view (FOV); determine information of a surrounding environment of the mobile device captured within the extended FOV; and provide the information to an application to generate content based on the information.
    Type: Grant
    Filed: November 22, 2022
    Date of Patent: April 16, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Andrew Samuel Berkovich, Xinqiao Liu, Hans Reyserhove
  • Patent number: 11860719
    Abstract: A method for implementing storage service continuity in a storage system includes a front-end interface card detecting a status of a first storage controller. The storage system includes the front-end interface card and a plurality of storage controllers. The front-end interface card communicates with the storage controllers, and the front-end interface card communicates with a host. When the first storage controller is in an abnormal state, the front-end interface card selects a second storage controller from the storage controllers to process an access request of the host.
    Type: Grant
    Filed: January 21, 2022
    Date of Patent: January 2, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Qiming Xu, Can Chen, Song Yang, Linan Zhou, Dahong Yan, Juntao Yang
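    The failover step in the abstract above reduces to a simple selection rule, sketched here with an invented health model (the state names and dictionary layout are assumptions, not from the patent):

    ```python
    # Illustrative sketch of front-end interface card failover: when the
    # current controller is abnormal, pick a healthy controller so host I/O
    # continues to be serviced.

    def select_controller(controllers, current):
        """Return the current controller if healthy, else a healthy standby."""
        if controllers[current] == "normal":
            return current
        for cid, state in controllers.items():
            if state == "normal":
                return cid
        raise RuntimeError("no healthy storage controller available")
    ```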
  • Patent number: 11829806
    Abstract: An arithmetic processor performs arithmetic processing, and a synchronization processor, including first registers, performs synchronization processing that includes a plurality of processing stages to be processed stepwise. The arithmetic processor sends, to the synchronization processor, setting information to be used in a predetermined processing stage of the synchronization processing, and instructs the synchronization processor to execute the predetermined processing stage for the arithmetic processing. Each of the first registers includes a setting information management area to manage the setting information received from the arithmetic processor, and a destination status area to store a usage state of each of destination registers which are used in a next processing stage following the predetermined processing stage.
    Type: Grant
    Filed: April 16, 2020
    Date of Patent: November 28, 2023
    Assignee: FUJITSU LIMITED
    Inventors: Kazuya Yoshimoto, Yuji Kondo
  • Patent number: 11709631
    Abstract: A system includes a processing device, operatively coupled with a memory device, to perform operations including: receiving a media access operation command designating a first memory location; determining whether a first media access operation command designating the first memory location and a second media access operation command designating a second memory location are synchronized; after determining that the first and second media access operation commands are not synchronized, determining that the media access operation command is an error flow recovery (ERF) read command; in response to determining that the media access operation command is an ERF read command, determining whether a head command of the first queue is blocked from execution; and in response to determining that the head command is unblocked from execution, servicing the ERF read command from a media buffer maintaining previously written ERF data.
    Type: Grant
    Filed: August 30, 2021
    Date of Patent: July 25, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Fangfang Zhu, Jiangli Zhu, Juane Li
  • Patent number: 11687246
    Abstract: The present disclosure generally relates to efficient management of an elastic buffer. Efficient management can be achieved by using an asymmetric asynchronous First In, First Out (FIFO) approach based on normalization of write and read pointers. The normalization is done in accordance with the FIFO depth while keeping a single-bit-change approach. To achieve an asymmetric dynamic ability for parts-per-million (PPM) compensation, a plurality of sub-FIFOs are used for opposite-side pointer synchronization. Combining these features allows for creating an asynchronous asymmetric FIFO with pipeline characteristics.
    Type: Grant
    Filed: February 22, 2021
    Date of Patent: June 27, 2023
    Assignee: Western Digital Technologies, Inc.
    Inventors: Yevgeny Lazarev, Elkana Richter, Shay Benisty
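    The single-bit-change pointer idea behind asynchronous FIFOs can be illustrated with the classic building blocks: Gray-coded pointers (so only one bit flips per increment when crossing clock domains) plus an extra wrap bit to tell full from empty. This is only background for the abstract above; the patent's normalization and sub-FIFO scheme is more involved:

    ```python
    # Sketch of asynchronous-FIFO pointer handling with Gray coding and an
    # extra wrap bit; depths are powers of two so wrap arithmetic is exact.

    def to_gray(n):
        return n ^ (n >> 1)

    class AsyncFifoPointers:
        def __init__(self, depth):
            assert depth & (depth - 1) == 0, "depth must be a power of two"
            self.depth = depth
            self.wr = 0  # binary write pointer with one extra wrap bit
            self.rd = 0  # binary read pointer with one extra wrap bit

        def push(self):
            assert not self.full()
            self.wr = (self.wr + 1) % (2 * self.depth)

        def pop(self):
            assert not self.empty()
            self.rd = (self.rd + 1) % (2 * self.depth)

        def empty(self):
            # equal Gray pointers means equal binary pointers: FIFO is empty
            return to_gray(self.wr) == to_gray(self.rd)

        def full(self):
            # full when the pointers differ only in the wrap bit
            return (self.wr ^ self.rd) == self.depth
    ```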
  • Patent number: 11645155
    Abstract: A data processing system includes a system interconnect, a first master, and a bridge circuit. The bridge circuit is coupled between the first master and the system interconnect. The bridge circuit is configured to, in response to occurrence of an error in the first master, isolate the first master from the system interconnect, wherein the isolating by the bridge circuit is performed while the first master has one or more outstanding issued write commands to the system interconnect which have not been completed. The bridge circuit is further configured to, after isolating the first master from the system interconnect, complete the one or more outstanding issued write commands while the first master remains isolated from the system interconnect.
    Type: Grant
    Filed: February 22, 2021
    Date of Patent: May 9, 2023
    Assignee: NXP B.V.
    Inventors: Arjun Pal Chowdhury, Nancy Hing-Che Amedeo, Jehoda Refaeli
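    The isolation-then-drain behavior described above can be modeled in a few lines; the class, method names, and memory model are illustrative, not the patent's interface:

    ```python
    # Minimal sketch: on a master error the bridge isolates the master from
    # the interconnect, but still completes write commands that were already
    # issued before the isolation.

    class BusBridge:
        def __init__(self):
            self.isolated = False
            self.outstanding_writes = []  # (address, data) already issued
            self.memory = {}

        def issue_write(self, addr, data):
            if self.isolated:
                raise PermissionError("master is isolated from the interconnect")
            self.outstanding_writes.append((addr, data))

        def on_master_error(self):
            self.isolated = True                 # cut the master off first...
            while self.outstanding_writes:       # ...then drain issued writes
                addr, data = self.outstanding_writes.pop(0)
                self.memory[addr] = data
    ```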
  • Patent number: 11625331
    Abstract: A cache control apparatus includes a data unit configured to store data on an index-specific basis, a tag unit configured to store, on the index-specific basis, a tag and a flag indicating whether the data has an uncorrectable error, and a control unit configured to refer to the flag, upon detecting a tag hit by performing a read access to the tag unit, to determine whether an uncorrectable error exists in the data corresponding to the tag hit, wherein the control unit performs process scheduling such that the read access to the tag unit and another access to the tag unit are performed simultaneously.
    Type: Grant
    Filed: April 14, 2021
    Date of Patent: April 11, 2023
    Assignee: FUJITSU LIMITED
    Inventors: Ryotaro Tokumaru, Masakazu Tanomoto, Taisuke Saiki
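    The tag-side flag check in the abstract above can be sketched as a lookup that classifies a hit before the data array is ever touched; the entry layout and return values are invented for illustration:

    ```python
    # Sketch: each tag entry carries a flag marking data that has an
    # uncorrectable error (UE), so a tag hit can report the error condition
    # without reading the data itself.

    class TagUnit:
        def __init__(self):
            self.entries = {}  # index -> (tag, ue_flag)

        def fill(self, index, tag, ue=False):
            self.entries[index] = (tag, ue)

        def lookup(self, index, tag):
            """Return 'hit', 'hit-ue', or 'miss' for the given index/tag."""
            entry = self.entries.get(index)
            if entry is None or entry[0] != tag:
                return "miss"
            return "hit-ue" if entry[1] else "hit"
    ```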
  • Patent number: 11620510
    Abstract: Computing resources may be optimally allocated for a multipath neural network using a multipath neural network analyzer that includes an interface and a processing device. The interface receives a multipath neural network. The processing device generates the multipath neural network to include one or more layers of a critical path through the multipath neural network that are allocated a first allocation of computing resources that are available to execute the multipath neural network. The critical path limits throughput of the multipath neural network. The first allocation of computing resources reduces an execution time of the multipath neural network to be less than a baseline execution time of a second allocation of computing resources for the multipath neural network. The first allocation of computing resources for a first layer of the critical path is different than the second allocation of computing resources for the first layer of the critical path.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: April 4, 2023
    Inventors: Behnam Pourghassemi Najafabadi, Joo Hwan Lee, Yang Seok Ki
  • Patent number: 11599269
    Abstract: Reducing file write latency includes receiving incoming data, from a data source, for storage in a file and a target storage location for the incoming data, and determining whether the target storage location corresponds to a cache entry. Based on at least the target storage location not corresponding to a cache entry, the incoming data is written to a block pre-allocated for cache misses and the writing of the incoming data to the pre-allocated block is journaled. The writing of the incoming data is acknowledged to the data source. A process executing in parallel with the above commits the incoming data in the pre-allocated block with the file. Using this parallel process to commit the incoming data in the file removes high-latency operations (e.g., reading pointer blocks from the storage media) from a critical input/output path and results in more rapid write acknowledgement.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: March 7, 2023
    Assignee: VMware, Inc.
    Inventors: Prasanth Jose, Gurudutt Kumar Vyudayagiri Jagannath
  • Patent number: 11586779
    Abstract: An embedded system and method, comprising a processor adapted to execute an instruction of an application program, where the instruction includes an access instruction for a hardware device; a memory adapted to store the instruction of the application program; and a physical memory protection apparatus coupled to the processor and the memory, where the access instruction accesses the hardware device through the physical memory protection apparatus.
    Type: Grant
    Filed: October 23, 2020
    Date of Patent: February 21, 2023
    Assignee: Alibaba Group Holding Limited
    Inventor: Xiaoxia Cui
  • Patent number: 11467736
    Abstract: Implementations for dropped write detection and correction are described. An example method includes receiving a write command comprising data and associated metadata; increasing a value of a monotonic counter; generating updated metadata by adding the counter value to the metadata; atomically writing (a) the data and a first instance of the updated metadata to a first storage device, and (b) a second instance of the updated metadata to a second storage device; receiving a read request for the data; reading the first instance of the updated metadata from the first storage device; reading the second instance of the updated metadata from the second storage device; comparing the instances of metadata and the counter values within each instance of metadata; determining whether the first counter value matches the second counter value; and determining whether a dropped write has occurred based on whether the first counter value matches the second counter value.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: October 11, 2022
    Assignee: Pavillon Data Systems, Inc.
    Inventors: Vaibhav Nipunage, Unmesh Rathi, Sundar Kanthadai, Sandeep Dhavale
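    The counter-comparison step above is the heart of the detection scheme and is easy to model; the two-device layout and the `drop` flag used to simulate a failure are illustrative assumptions:

    ```python
    # Illustrative model of dropped-write detection: each write stamps the
    # same monotonic counter value into metadata on two devices; on read, a
    # counter mismatch reveals that one device silently dropped the write.

    class DroppedWriteDetector:
        def __init__(self):
            self.counter = 0
            self.primary = {}   # addr -> (data, counter)
            self.mirror = {}    # addr -> counter (metadata only)

        def write(self, addr, data, drop=False):
            self.counter += 1
            if not drop:                    # a dropped write never lands here
                self.primary[addr] = (data, self.counter)
            self.mirror[addr] = self.counter

        def read(self, addr):
            data, c1 = self.primary.get(addr, (None, None))
            c2 = self.mirror.get(addr)
            if c1 != c2:
                raise IOError(f"dropped write detected at {addr}")
            return data
    ```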
  • Patent number: 11431330
    Abstract: In an embodiment, a system includes a slave circuit configured to receive an external clock signal from a master circuit, the slave circuit comprising first and second peripherals configured to receive respective clock signals obtained from the external clock signal, wherein the master circuit is configured to send to the slave circuit the external clock signal according to two different timing modes, wherein the slave circuit comprises a logic circuit configured to provide a locking signal to the first peripheral circuit when the logic circuit detects a given operating mode of the slave circuit, wherein the master circuit is configured to send the external clock signal according to a first timing mode before receipt of the locking signal, and wherein the master circuit is configured, following upon receipt of the locking signal, to send the external clock signal according to a second timing mode different from the first timing mode.
    Type: Grant
    Filed: August 4, 2021
    Date of Patent: August 30, 2022
    Assignee: STMicroelectronics S.r.l.
    Inventors: Liliana Arcidiacono, Santi Carlo Adamo
  • Patent number: 11115013
    Abstract: In an embodiment, a system includes a slave circuit configured to receive an external clock signal from a master circuit, the slave circuit comprising first and second peripherals configured to receive respective clock signals obtained from the external clock signal, wherein the master circuit is configured to send to the slave circuit the external clock signal according to two different timing modes, wherein the slave circuit comprises a logic circuit configured to provide a locking signal to the first peripheral circuit when the logic circuit detects a given operating mode of the slave circuit, wherein the master circuit is configured to send the external clock signal according to a first timing mode before receipt of the locking signal, and wherein the master circuit is configured, following upon receipt of the locking signal, to send the external clock signal according to a second timing mode different from the first timing mode.
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: September 7, 2021
    Assignee: STMicroelectronics S.r.l.
    Inventors: Liliana Arcidiacono, Santi Carlo Adamo
  • Patent number: 11094372
    Abstract: A semiconductor memory and a partial writing method are provided. The semiconductor memory includes a memory bank, a write amplifier circuit, a plurality of input/output pins and a plurality of address pins. The write amplifier circuit is coupled to the memory bank through a plurality of internal input/output lines. The plurality of input/output pins are coupled to the write amplifier circuit through a plurality of input lines. Some of the address pins receive a column address instruction, and at least one of the remaining address pins receives an operation code. The semiconductor memory determines a part of the internal input/output lines for transmitting input data according to the operation code, and operates the write amplifier circuit to perform a partial writing mode according to the operation code so as to write the input data into the memory bank according to the column address instruction.
    Type: Grant
    Filed: May 7, 2020
    Date of Patent: August 17, 2021
    Assignee: Powerchip Semiconductor Manufacturing Corporation
    Inventors: Yasuhiro Konishi, Yasuji Koshikawa
  • Patent number: 11061817
    Abstract: Data memory node (400) for ESM (Emulated Share Memory) architectures (100, 200), comprising a data memory module (402) containing data memory for storing input data therein and retrieving stored data therefrom responsive to predetermined control signals, a multi-port cache (404) for the data memory, said cache being provided with at least one read port (404A, 404B) and at least one write port (404C, 404D, 404E), said cache (404) being configured to hold recently and/or frequently used data stored in the data memory (402), and an active memory unit (406) at least functionally connected to a plurality of processors via an interconnection network (108), said active memory unit (406) being configured to operate the cache (404) upon receiving a multioperation reference (410) incorporating a memory reference to the data memory of the data memory module from a number of processors of said plurality, wherein responsive to the receipt of the multioperation reference the active memory unit (406) is configured to process…
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: July 13, 2021
    Assignee: Teknologian tutkimuskeskus VTT Oy
    Inventor: Martti Forsell
  • Patent number: 10949292
    Abstract: A requester issues a request specifying a target address indicating an addressed location in a memory system. A completer responds to the request. Tag error checking circuitry performs a tag error checking operation when the request issued by the requester is a tag-error-checking request specifying an address tag. The tag error checking operation comprises determining whether the address tag matches an allocation tag stored in the memory system associated with a block of one or more addresses comprising the target address specified by the tag-error-checking request. The requester and the completer communicate via a memory interface having at least one data signal path to exchange read data or write data between the requester and the completer; and at least one tag signal path, provided in parallel with the at least one data signal path, to exchange address tags or allocation tags between the requester and the completer.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: March 16, 2021
    Assignee: Arm Limited
    Inventors: Bruce James Mathewson, Phanindra Kumar Mannava, Michael Andrew Campbell, Alexander Alfred Hornung, Alex James Waugh, Klas Magnus Bruce, Richard Roy Grisenthwaite
  • Patent number: 10877897
    Abstract: In one embodiment, a method includes: in response to a sub-cacheline memory access request, receiving a data-line from a memory coupled to a processor; receiving tag information included in metadata associated with the data-line from the memory; determining, in a memory controller, whether a first tag identifier of the tag information matches a tag portion of an address of the memory line associated with the sub-cacheline memory access request, and in response to determining a match, storing a first portion of the data-line associated with the first tag identifier in a cache line of a cache of the processor, the first portion a sub-cacheline width. This method allows data lines stored in memory associated with multiple different tag metadata to be divided into multiple cachelines comprising the sub-cacheline data associated with a particular metadata address tag. Other embodiments are described and claimed.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: December 29, 2020
    Assignee: Intel Corporation
    Inventors: David M. Durham, Ron Gabor, Rajat Agarwal
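    The tag-match selection described above can be sketched as follows; the bit layout (tag in the top address bits) and function signature are invented for illustration, not Intel's actual encoding:

    ```python
    # Sketch: a memory data-line holds several sub-cacheline portions, each
    # associated with a tag identifier; only the portion whose tag matches
    # the address's tag bits is placed in the cache.

    TAG_SHIFT = 56  # assumption: the address tag lives in the top bits

    def select_subline(data_line, tag_ids, address):
        """Return the sub-line whose tag id matches the address tag, else None."""
        addr_tag = address >> TAG_SHIFT
        for portion, tag_id in zip(data_line, tag_ids):
            if tag_id == addr_tag:
                return portion
        return None
    ```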
  • Patent number: 10628320
    Abstract: Embodiments of the present disclosure support implementation of a Level-1 (L1) cache in a microprocessor based on independently accessed data and tag arrays. The presented implementations of the L1 cache do not require any pipeline-stall mechanism for stalling execution of instructions, leading to improved microprocessor performance. A data array in the cache is interfaced with one or more data index queues that hold, upon occurrence of a conflict between at least one instruction requesting access to the data array and at least one other instruction that accessed the data array, at least one data index for accessing the data array associated with the at least one instruction. A tag array in the cache is interfaced with a tag queue that stores one or more tag entries associated with one or more data outputs read from the data array based on accessing the data array.
    Type: Grant
    Filed: June 3, 2016
    Date of Patent: April 21, 2020
    Assignee: Synopsys, Inc.
    Inventor: Thang Tran
  • Patent number: 10481867
    Abstract: A data input/output unit is provided. The data input/output unit, which is connected to a processor and receives and outputs data in sequence based on a first schedule, includes a first-in, first-out (FIFO) memory connected to an external unit and the processor; and a reordering buffer connected to one side of the FIFO memory and configured to store data output from, or input to, the FIFO memory in a plurality of buffer regions in sequence, and to output data stored in one of the plurality of buffer regions based on a control signal provided from the processor.
    Type: Grant
    Filed: October 6, 2017
    Date of Patent: November 19, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jae-un Park, Jong-hun Lee, Ki-seok Kwon, Dong-kwan Suh, Kang-jin Yoon, Jung-uk Cho
  • Patent number: 10360031
    Abstract: Fast unaligned memory access. In accordance with a first embodiment of the present invention, a computing device includes a load queue memory structure configured to queue load operations and a store queue memory structure configured to queue store operations. The computing device also includes at least one bit configured to indicate the presence of an unaligned address component for an entry of said load queue memory structure, and at least one bit configured to indicate the presence of an unaligned address component for an entry of said store queue memory structure. The load queue memory may also include memory configured to indicate data forwarding of an unaligned address component from said store queue memory structure to said load queue memory structure.
    Type: Grant
    Filed: October 21, 2011
    Date of Patent: July 23, 2019
    Assignee: Intel Corporation
    Inventors: Mandeep Singh, Mohammad Abdallah
  • Patent number: 10261705
    Abstract: Data verification includes obtaining a logical block address (LBA), which is associated with a data block of a file, to be verified. Data verification further includes reading, from a solid state drive (SSD) comprising one or more flash storage elements, data content that corresponds to the LBA. Data verification further includes determining whether an access latency associated with the reading of the data content exceeds a threshold. Data verification further includes, in the event that the access latency does not exceed the threshold, evaluating the data content to determine whether it is consistently stored in a physical memory included in the SSD. Data verification further includes, in the event that the data content is determined not to be consistently stored in the physical memory included in the SSD, recording an indication indicating that the LBA is not successfully verified.
    Type: Grant
    Filed: December 15, 2016
    Date of Patent: April 16, 2019
    Assignee: Alibaba Group Holding Limited
    Inventor: Shu Li
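    The latency-gated verification flow above can be condensed into one decision function; the threshold value, the `read_fn` callback shape, and the result labels are assumptions made for the sketch:

    ```python
    # Toy version of the verification flow: read the data for an LBA, treat
    # an unusually slow read as inconclusive (it likely hit internal error
    # handling), and otherwise compare the content against a reference.

    LATENCY_THRESHOLD_US = 500  # illustrative bound, not from the patent

    def verify_lba(read_fn, lba, expected):
        """read_fn(lba) -> (data, latency_us); returns 'skipped', 'ok', or 'bad'."""
        data, latency_us = read_fn(lba)
        if latency_us > LATENCY_THRESHOLD_US:
            return "skipped"   # too slow to judge consistency reliably
        return "ok" if data == expected else "bad"
    ```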
  • Patent number: 10209991
    Abstract: A system and method for reducing latencies of main memory data accesses are described. A non-blocking load (NBLD) instruction identifies an address of requested data and a subroutine. The subroutine includes instructions dependent on the requested data. A processing unit verifies that address translations are available for both the address and the subroutine. The processing unit continues processing instructions with no stalls caused by younger-in-program-order instructions waiting for the requested data. The non-blocking load unit performs a cache coherent data read request on behalf of the NBLD instruction and requests that the processing unit perform an asynchronous jump to the subroutine upon return of the requested data from lower-level memory.
    Type: Grant
    Filed: November 16, 2016
    Date of Patent: February 19, 2019
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Meenakshi Sundaram Bhaskaran, Elliot H. Mednick, David A. Roberts, Anthony Asaro, Amin Farmahini-Farahani
  • Patent number: 10067713
    Abstract: In a data processing system implementing a weak memory model, a lower level cache receives, from a processor core, a plurality of copy-type requests and a plurality of paste-type requests that together indicate a memory move to be performed. The lower level cache also receives, from the processor core, a barrier request that requests enforcement of ordering of memory access requests prior to the barrier request with respect to memory access requests after the barrier request. In response to the barrier request, the lower level cache enforces a barrier indicated by the barrier request with respect to a final paste-type request ending the memory move but not with respect to other copy-type requests and paste-type requests in the memory move.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: September 4, 2018
    Assignee: International Business Machines Corporation
    Inventors: Bradly G. Frey, Guy L. Guthrie, Cathy May, William J. Starke, Derek E. Williams
  • Patent number: 10013270
    Abstract: Embodiments relate to application-level initiation of processor parameter adjustment. An aspect includes receiving, by a hypervisor in a computer system from an application running on the computer system, a request to adjust an operating parameter of a processor of the computer system. Another aspect includes determining an adjusted value for the operating parameter during execution of the application by the hypervisor. Another aspect includes setting the operating parameter in a parameter register of the processor to the adjusted value by the hypervisor. Yet another aspect includes executing the application according to the parameter register of the processor.
    Type: Grant
    Filed: December 3, 2015
    Date of Patent: July 3, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Giles R. Frazier, Michael Karl Gschwind
  • Patent number: 9910857
    Abstract: Methods and systems for data management are disclosed. With embodiments of the present disclosure, data files originating from the same source data can be de-duplicated. One such method comprises calculating one or more of a first characteristic value for first data in a first format, and one or more second characteristic values for one or more data in one or more second formats into which the first data can be converted, said characteristic value uniquely representing an arrangement characteristic of at least part of bits of data in a particular format. The method also includes storing one of the first data and the second data in response to one of the calculated characteristic values being the same as a stored characteristic value corresponding to a second data.
    Type: Grant
    Filed: April 28, 2014
    Date of Patent: March 6, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Peng Hui Jiang, Pi Jun Jiang, Xi Ning Wang, Liang Xue, Wen Yin
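    The characteristic-value comparison above can be sketched with a hash standing in for the characteristic value; using SHA-256 and a set of known values are assumptions for the sketch, not the patent's method:

    ```python
    # Sketch of format-aware de-duplication: compute a characteristic value
    # for the incoming data and for each format it can be converted into; if
    # any value matches one already stored, skip storing a duplicate.

    import hashlib

    def characteristic(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def maybe_store(store, data, conversions):
        """store: set of known characteristic values; conversions: callables
        converting data into other formats. Returns True if data was stored."""
        values = [characteristic(data)]
        values += [characteristic(conv(data)) for conv in conversions]
        if any(v in store for v in values):
            return False          # duplicate of data already stored
        store.add(values[0])
        return True
    ```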
  • Patent number: 9842047
    Abstract: A storage device controller addresses consecutively-addressed portions of incoming data to consecutive data tracks on a storage medium and writes the consecutively-addressed portions to the consecutive data tracks in a non-consecutive track order. In one implementation, the storage device controller reads the data back from the consecutive data tracks in a consecutive address order in a single sequential read operation.
    Type: Grant
    Filed: April 16, 2015
    Date of Patent: December 12, 2017
    Assignee: SEAGATE TECHNOLOGY LLC
    Inventors: Kaizhong Gao, Wenzhong Zhu, Tim Rausch, Edward Gage
  • Patent number: 9753858
    Abstract: A system and method for efficient cache data access in a large row-based memory of a computing system. A computing system includes a processing unit and an integrated three-dimensional (3D) dynamic random access memory (DRAM). The processing unit uses the 3D DRAM as a cache. Each row of the multiple rows in the memory array banks of the 3D DRAM stores at least multiple cache tags and multiple corresponding cache lines indicated by the multiple cache tags. In response to receiving a memory request from the processing unit, the 3D DRAM performs a memory access according to the received memory request on a given cache line indicated by a cache tag within the received memory request. Rather than utilizing multiple DRAM transactions, a single, complex DRAM transaction may be used to reduce latency and power consumption.
    Type: Grant
    Filed: November 30, 2011
    Date of Patent: September 5, 2017
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Gabriel H. Loh, Mark D. Hill
  • Patent number: 9612833
    Abstract: Technologies are presented that optimize data processing cost and efficiency. A computing system may comprise at least one processing element; a memory communicatively coupled to the at least one processing element; at least one compressor-decompressor communicatively coupled to the at least one processing element, and communicatively coupled to the memory through a memory interface; and a cache fabric comprising a plurality of distributed cache banks communicatively coupled to each other, to the at least one processing element, and to the at least one compressor-decompressor via a plurality of nodes. In this system, the at least one compressor-decompressor and the cache fabric are configured to manage and track uncompressed data of variable length for data requests by the processing element(s), allowing usage of compressed data in the memory.
    Type: Grant
    Filed: February 28, 2014
    Date of Patent: April 4, 2017
    Assignee: Intel Corporation
    Inventors: Altug Koker, Hong Jiang, James M. Holland
  • Patent number: 9600183
    Abstract: Techniques and mechanisms for determining comparison information at a memory device. In an embodiment, the memory device receives from a memory controller signals that include or otherwise indicate an address corresponding to a memory location of the memory device. Where it is determined that the signals indicate a compare operation, the memory device retrieves data stored at the memory location, and performs a comparison of the data to a reference data value that is included in or otherwise indicated by the received signals. The memory device sends to the memory controller information representing a result of the comparison. In another embodiment, a memory controller provides signals to control a compare operation by such a memory device.
    Type: Grant
    Filed: September 22, 2014
    Date of Patent: March 21, 2017
    Assignee: Intel Corporation
    Inventors: Shigeki Tomishima, Shih-Lien L. Lu
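    The device-side compare operation above can be modeled minimally: the device reads the addressed location and returns a comparison result instead of the raw data. The class and three-way result encoding are invented for illustration:

    ```python
    # Minimal model of an in-memory compare operation: given an address and a
    # reference value, the device returns the comparison outcome rather than
    # the stored data itself.

    class MemoryDevice:
        def __init__(self, size):
            self.cells = [0] * size

        def write(self, addr, value):
            self.cells[addr] = value

        def compare(self, addr, reference):
            """Return -1, 0, or 1 as the stored value is <, ==, or > reference."""
            stored = self.cells[addr]
            return (stored > reference) - (stored < reference)
    ```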
  • Patent number: 9454482
    Abstract: An apparatus for processing cache requests in a computing system is disclosed. The apparatus may include a single-port memory, a dual-port memory, and a control circuit. The single-port memory may store tag information associated with a cache memory, and the dual-port memory may be configured to store state information associated with the cache memory. The control circuit may be configured to receive a request which includes a tag address, and to access the tag and state information stored in the single-port memory and the dual-port memory, respectively, dependent upon the received tag address. A determination of whether the data associated with the received tag address is contained in the cache memory may be made by the control circuit, and the control circuit may update and store state information in the dual-port memory responsive to the determination.
    Type: Grant
    Filed: June 27, 2013
    Date of Patent: September 27, 2016
    Assignee: Apple Inc.
    Inventors: Harshavardhan Kaushikkar, Muditha Kanchana, Odutola O. Ewedemi
  • Patent number: 9454492
    Abstract: One method includes streaming a data segment to a write buffer corresponding to a virtual page including at least two physical pages. Each physical page is defined within a respective solid-state storage element. The method also includes programming contents of the write buffer to the virtual page, such that a first portion of the data segment is programmed to a first one of the physical pages, and a second portion of the data segment is programmed to a second one of the physical pages.
    Type: Grant
    Filed: December 28, 2012
    Date of Patent: September 27, 2016
    Assignee: LONGITUDE ENTERPRISE FLASH S.A.R.L.
    Inventors: David Flynn, Bert Lagerstedt, John Strasser, Jonathan Thatcher, John Walker, Michael Zappe
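The virtual-page write path above can be sketched as staging a data segment in a write buffer sized to a virtual page that spans two physical pages, then splitting the buffer so each portion is programmed to its own page. Page sizes and the byte layout here are illustrative assumptions, not values from the patent.

```python
PHYS_PAGE = 4              # bytes per physical page (toy size)
VIRT_PAGE = 2 * PHYS_PAGE  # virtual page spanning two physical pages

# Stream a data segment into the write buffer for the virtual page.
write_buffer = bytearray(b"ABCDEFGH")
assert len(write_buffer) == VIRT_PAGE

# "Program" the buffer contents: the first portion goes to one physical
# page, the second portion to the other, each in its own storage element.
page0 = bytes(write_buffer[:PHYS_PAGE])
page1 = bytes(write_buffer[PHYS_PAGE:])
assert page0 == b"ABCD" and page1 == b"EFGH"
```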
  • Patent number: 9372798
    Abstract: A data processing apparatus (2) comprises a first protocol domain A configured to operate under a write progress protocol and a second protocol domain B configured to operate under a snoop progress protocol. A deadlock condition is detected if a write target address for a pending write request issued from the first domain A to the second domain B is the same as a snoop target address of a pending snoop request issued from the second domain B to the first domain A. When the deadlock condition is detected, a bridge (4) between the domains may issue an early response to a selected one of the deadlocked write and snoop requests without waiting for the selected request to be serviced. The early response indicates to the domain that issued the selected request that the selected request has been serviced, enabling the other request to be serviced by the issuing domain.
    Type: Grant
    Filed: March 2, 2012
    Date of Patent: June 21, 2016
    Assignee: ARM Limited
    Inventors: William Henry Flanders, Ramamoorthy Guru Prasadh, Ashok Kumar Tummala, Jamshed Jalal, Phanindra Kumar Mannava
  • Patent number: 9335947
    Abstract: Embodiments relate to an inter-processor memory. An aspect includes a plurality of memory banks, each of the plurality of memory banks comprising a respective plurality of parallel memory modules, wherein a number of the plurality of memory banks is equal to a number of read ports of the inter-processor memory, and a number of parallel memory modules within a memory bank is equal to a number of write ports of the inter-processor memory. Another aspect includes each memory bank corresponding to a single respective read port of the inter-processor memory, and wherein, within each memory bank, each memory module of the plurality of parallel memory modules is writable in parallel by a single respective write port of the inter-processor memory.
    Type: Grant
    Filed: June 30, 2014
    Date of Patent: May 10, 2016
    Assignee: RAYTHEON COMPANY
    Inventors: Pen C. Chien, Frank N. Cheung, Kuan Y. Huang
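The bank/module arithmetic in the abstract above (one bank per read port, one module per write port within each bank) can be modeled directly. This is a hedged sketch under one common way to complete such a design: a "live value table" recording which write port last wrote each address, which the abstract does not name and is assumed here for illustration.

```python
READ_PORTS = 2
WRITE_PORTS = 2
DEPTH = 16  # words per module, an illustrative assumption

# banks[r][w] is the module owned by write port w inside read port r's bank.
banks = [[[0] * DEPTH for _ in range(WRITE_PORTS)] for _ in range(READ_PORTS)]
lvt = [0] * DEPTH  # live value table: newest writer per address (assumed)

def write(port, addr, value):
    # A write port owns one module in every bank, so all banks take the
    # write in parallel without conflicting with other write ports.
    for bank in banks:
        bank[port][addr] = value
    lvt[addr] = port

def read(port, addr):
    # A read port owns a whole bank, so reads never conflict either.
    return banks[port][lvt[addr]][addr]

write(0, 3, 111)  # write port 0
write(1, 3, 222)  # write port 1 overwrites the same address
assert read(0, 3) == 222 and read(1, 3) == 222
```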
  • Patent number: 9299429
    Abstract: A nonvolatile memory device includes a buffer memory, a read circuit configured to read first data stored in the buffer memory in a first read operation, and a write circuit configured to write second data in the buffer memory in a first write operation, wherein the first write operation is performed when a first internal write command is generated during the first read operation.
    Type: Grant
    Filed: June 27, 2014
    Date of Patent: March 29, 2016
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yong-Jun Lee, Hoi-Ju Chung, Yong-Jin Kwon, Hyo-Jin Kwon, Eun-Hye Park
  • Patent number: 9202552
    Abstract: Dual port static random access memory (SRAM) bitcell structures with improved symmetry in the physical placement of access transistors are provided. The bitcell structures may include, for example, two pairs of parallel pull-down transistors. The bitcell structures may also include pass-gate transistors PGLA and PGRA forming a first port, and pass-gate transistors PGLB and PGRB forming a second port. The pass-gate transistors PGLA and PGLB may be adjacent one another at a first side of the bitcell structure, and pass-gate transistors PGRA and PGRB may be adjacent one another at a second side of the bitcell structure. Each of the pass-gate transistors PGLA and PGLB may be connected with one of the pull-down transistors of one of the pairs of parallel pull-down transistors. Similarly, each of the pass-gate transistors PGRA and PGRB may be connected with one of the pull-down transistors of the other pair of parallel pull-down transistors.
    Type: Grant
    Filed: December 13, 2013
    Date of Patent: December 1, 2015
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Bipul C. Paul, Randy W. Mann, Sangmoon J. Kim
  • Patent number: 9128850
    Abstract: A multi-ported memory that supports multiple read and write accesses is described. The multi-ported memory may include a number of read/write ports that is greater than the number of read/write ports of each memory bank of the multi-ported memory. The multi-ported memory allows for read operation(s) and write operation(s) to be received during the same clock cycle. In the event that an incoming write operation is blocked by read operation(s), data for that write operation may be stored in one of a plurality of cache banks included in the multi-ported memory. The cache banks are accessible to both write and read operations. In the event that the write operation is not blocked by read operation(s), a determination is made as to whether data for that incoming write operation is stored in the memory bank targeted by that incoming write operation or in one of the cache banks.
    Type: Grant
    Filed: December 17, 2012
    Date of Patent: September 8, 2015
    Assignee: Broadcom Corporation
    Inventors: Weihuang Wang, Chien-Hsien Wu
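The write-redirect idea above can be sketched in a few lines: when a read occupies the single-ported main bank in a cycle, the incoming write is parked in a cache bank that remains visible to subsequent reads. This is a minimal illustrative model, not the patented circuit; structure and function names are assumptions.

```python
main_bank = {}
cache_bank = {}  # holds writes that were blocked by a concurrent read

def access(read_addr=None, write=None):
    """Model one cycle carrying an optional read and an optional write."""
    result = None
    if read_addr is not None:
        # Cache banks are visible to reads, so diverted data is never lost.
        result = cache_bank.get(read_addr, main_bank.get(read_addr))
    if write is not None:
        addr, value = write
        if read_addr is not None:
            cache_bank[addr] = value    # bank busy with the read: divert
        else:
            main_bank[addr] = value     # bank free: write directly
            cache_bank.pop(addr, None)  # newer data supersedes parked copy
    return result

access(write=(5, "old"))                   # unblocked write lands in the bank
v = access(read_addr=5, write=(5, "new"))  # read and write in the same cycle
assert v == "old"                          # the read saw the pre-write value
assert access(read_addr=5) == "new"        # later read sees diverted write
```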
  • Patent number: 9063794
    Abstract: A computer system includes: a main storage unit; a processing executing unit sequentially executing processing to be executed on virtual processors; a level-1 cache memory shared among the virtual processors; a level-2 cache memory including storage areas partitioned based on the number of the virtual processors, the storage areas each (i) corresponding to one of the virtual processors and (ii) holding the data to be used by the corresponding one of the virtual processors; a context memory holding a context item corresponding to the virtual processor; a virtual processor control unit saving and restoring a context item of one of the virtual processors; a level-1 cache control unit; and a level-2 cache control unit.
    Type: Grant
    Filed: October 4, 2012
    Date of Patent: June 23, 2015
    Assignee: SOCIONEXT INC.
    Inventors: Teruyuki Morita, Yoshihiro Koga, Kouji Nakajima
  • Patent number: 9043489
    Abstract: A method begins by a router receiving data for storage and interpreting the data to determine whether the data is to be forwarded or error encoded. The method continues with the router obtaining a routing table when the data is to be error encoded. Next, the method continues with the router selecting a routing option from the plurality of routing options and determining error coding dispersal storage function parameters based on the routing option. Next, the method continues with the router encoding the data based on the error coding dispersal storage function parameters to produce a plurality of sets of encoded data slices. Next, the method continues with the router outputting at least some of the encoded data slices of a set of the plurality of sets of encoded data slices to an entry point of the routing option.
    Type: Grant
    Filed: August 4, 2010
    Date of Patent: May 26, 2015
    Assignee: Cleversafe, Inc.
    Inventors: Gary W. Grube, Timothy W. Markison
  • Patent number: 8977800
    Abstract: Provided is a multi-port cache memory apparatus and a method of the multi-port cache memory apparatus. The multi-port cache memory apparatus may divide an address space into address regions and allocate the divided address regions to cache banks, thereby preventing the concentration of access on a particular cache bank.
    Type: Grant
    Filed: January 31, 2012
    Date of Patent: March 10, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Moo-Kyoung Chung, Soo-Jung Ryu, Ho-Young Kim, Woong Seo, Young-Chul Cho
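The region-to-bank allocation described above reduces to a simple mapping: split the address space into equal regions and route each region to its own cache bank, so ports touching different regions never contend. The sizes below are illustrative assumptions, not values from the patent.

```python
NUM_BANKS = 4
ADDRESS_SPACE = 1 << 16               # toy 16-bit address space
REGION_SIZE = ADDRESS_SPACE // NUM_BANKS

def bank_for(addr):
    # The region (the high-order address bits) selects the cache bank.
    return addr // REGION_SIZE

assert bank_for(0x0000) == 0
assert bank_for(0x4000) == 1
assert bank_for(0xFFFF) == 3
# Two ports touching different regions hit different banks: no contention.
assert bank_for(0x1234) != bank_for(0x9234)
```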
  • Patent number: 8966180
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Grant
    Filed: March 1, 2013
    Date of Patent: February 24, 2015
    Assignee: Intel Corporation
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
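The scatter/gather access pattern named in the abstract above can be shown in a stand-alone sketch: gather fetches only the useful elements of a large array at fine granularity, and scatter writes results back to the same indexed locations. This illustrates the general pattern only, not the patented hardware mechanism.

```python
def gather(memory, indices):
    # Fetch only the needed elements rather than whole contiguous lines.
    return [memory[i] for i in indices]

def scatter(memory, indices, values):
    # Write each result back to its fine-grained source location.
    for i, v in zip(indices, values):
        memory[i] = v

mem = list(range(100))
idx = [3, 17, 42, 99]          # sparse, unstructured access pattern
vals = gather(mem, idx)
assert vals == [3, 17, 42, 99]
scatter(mem, idx, [v * 2 for v in vals])
assert mem[42] == 84 and mem[0] == 0   # untouched data is unchanged
```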
  • Patent number: 8954674
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Grant
    Filed: October 8, 2013
    Date of Patent: February 10, 2015
    Assignee: Intel Corporation
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
  • Patent number: 8914649
    Abstract: A computing device (101, 400, 500) has a processor (401) and at least one peripheral device port (106, 107, 108, 109, 410-1 to 410-5). The processor (401) is configured to selectively power the at least one peripheral device port (106, 107, 108, 109, 410-1 to 410-5) when the processor (401) is in a sleep state (302, 303, 304, 305, 306) according to at least one setting stored by firmware (405) of the processor (401).
    Type: Grant
    Filed: February 9, 2009
    Date of Patent: December 16, 2014
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Chi W. So, Binh T. Truong, Luke Mulcahy
  • Patent number: 8732400
    Abstract: Interconnect circuitry for a data processing apparatus is disclosed. The interconnect circuitry is configured to provide data routes via which at least one initiator device may access at least one recipient device.
    Type: Grant
    Filed: October 5, 2010
    Date of Patent: May 20, 2014
    Assignee: ARM Limited
    Inventors: Peter Andrew Riocreux, Bruce James Mathewson, Christopher William Laycock, Richard Roy Grisenthwaite
  • Patent number: 8732384
    Abstract: A device and methods are provided for accessing memory. In one embodiment, a method includes receiving a request for data stored in a device, checking a local memory for data based on the request to determine if one or more blocks of data associated with the request are stored in the local memory, and generating a memory access request for one or more blocks of data stored in a memory of the device when one or more blocks of data are not stored in the local memory. In one embodiment, data stored in memory of the device may be arranged in a configuration to include a plurality of memory access units each having adjacent lines of pixel data to define a single line of memory within the memory access units. Memory access units may be configured based on memory type and may reduce the number of undesired pixels read.
    Type: Grant
    Filed: July 21, 2010
    Date of Patent: May 20, 2014
    Assignee: CSR Technology Inc.
    Inventors: Eran Scharam, Costia Parfenyev, Liron Ain-Kedem, Ophir Turbovich, Tuval Berler
  • Patent number: 8677070
    Abstract: According to an aspect of the embodiment, an FP includes a plurality of entries which hold requests to be processed, and each of the plurality of entries includes a requested flag indicating that data transfer has already been requested. An FP-TOQ holds information indicating the entry holding the oldest request. A data transfer request prevention determination circuit checks the requested flag of a request to be processed and the FP-TOQ, and when a transfer request for the data targeted by the request to be processed has already been issued and the entry holding the request to be processed is not the entry indicated by the FP-TOQ, transmits a signal which prevents the transfer request of the data to a data transfer request control circuit. Even when a cache miss occurs in a primary cache RAM, the data transfer request control circuit does not issue a data transfer request when the signal which prevents the transfer request is received.
    Type: Grant
    Filed: December 16, 2009
    Date of Patent: March 18, 2014
    Assignee: Fujitsu Limited
    Inventor: Naohiro Kiyota
  • Patent number: 8671232
    Abstract: A system and method for dynamically migrating stash transactions include first and second processing cores, an input/output memory management unit (IOMMU), an IOMMU mapping table, an input/output (I/O) device, a stash transaction migration management unit (STMMU), and an operating system (OS) scheduler. The first core executes a first thread associated with a frame manager. The OS scheduler migrates the first thread from the first core to the second core and generates pre-empt notifiers to indicate scheduling-out and scheduling-in of the first thread from the first core and to the second core. The STMMU uses the pre-empt notifiers to enable dynamic stash transaction migration.
    Type: Grant
    Filed: March 7, 2013
    Date of Patent: March 11, 2014
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Vakul Garg, Varun Sethi
  • Patent number: 8661200
    Abstract: Disclosed herein is a channel controller for a multi-channel cache memory, and a method that includes receiving a memory address associated with a memory access request to a main memory of a data processing system; translating the memory address to form a first access portion identifying at least one partition of a multi-channel cache memory, and at least one further access portion, where the at least one partition includes at least one channel; and applying the at least one further access portion to the at least one channel of the multi-channel cache memory.
    Type: Grant
    Filed: February 5, 2010
    Date of Patent: February 25, 2014
    Assignee: Nokia Corporation
    Inventors: Jari Nikara, Eero Aho, Kimmo Kuusilinna
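The address translation described above can be sketched as bit slicing: part of the main-memory address identifies a partition (and a channel within it) of the multi-channel cache, and the remaining bits form the further access portion applied to that channel. The field widths here are illustrative assumptions, not taken from the patent.

```python
CHANNEL_BITS = 2  # 4 channels in this toy configuration

def translate(addr):
    # First access portion: identifies the channel/partition.
    channel = addr & ((1 << CHANNEL_BITS) - 1)
    # Further access portion: applied to the selected channel.
    channel_addr = addr >> CHANNEL_BITS
    return channel, channel_addr

assert translate(0b1011001) == (0b01, 0b10110)
```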
  • Publication number: 20140040542
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Application
    Filed: October 8, 2013
    Publication date: February 6, 2014
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
  • Patent number: 8639884
    Abstract: Systems and methods are disclosed for multi-threading computer systems. In a computer system executing multiple program threads in a processing unit, a first load/store execution unit is configured to handle instructions from a first program thread and a second load/store execution unit is configured to handle instructions from a second program thread. When the computer system is executing a single program thread, the first and second load/store execution units are reconfigured to handle instructions from the single program thread, and a Level 1 (L1) data cache is reconfigured with a first port to communicate with the first load/store execution unit and a second port to communicate with the second load/store execution unit.
    Type: Grant
    Filed: February 28, 2011
    Date of Patent: January 28, 2014
    Assignee: Freescale Semiconductor, Inc.
    Inventor: Thang M. Tran