Input/output Data Buffering Patents (Class 710/52)
-
Patent number: 8892716Abstract: In one embodiment, a latency value is determined for an input/output (IO) request in a host computer of a plurality of host computers based on an amount of time the IO request spent in the host computer's issue queue. The issue queue of the host computer is used to transmit IO requests to a storage system shared by the plurality of host computers. The method determines a host-specific value assigned to the host computer in proportion to the number of shares assigned to the host in a quality of service policy for IO requests. The size of the host computer's issue queue is determined based on the latency value and the host-specific value to control the number of IO requests that are added to the host computer's issue queue, where other hosts in the plurality of hosts independently determine respective sizes for their respective issue queues.Type: GrantFiled: March 27, 2014Date of Patent: November 18, 2014Assignee: VMware, Inc.Inventors: Ajay Gulati, Irfan Ahmad
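The abstract above describes each host independently sizing its issue queue from observed latency and its QoS share. A minimal sketch of that idea follows; the function name, the exact formula, and the target-latency and depth parameters are all illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of per-host issue-queue sizing: each host scales its
# queue depth by its QoS share and shrinks it as observed IO latency grows.
# Formula and defaults are illustrative, not from the patent.

def issue_queue_size(latency_ms, host_shares, total_shares,
                     target_latency_ms=5.0, max_depth=64, min_depth=1):
    """Return an issue-queue depth for one host, computed independently.

    latency_ms   -- average time recent IO requests spent in this host's queue
    host_shares  -- this host's share count from the QoS policy
    total_shares -- sum of shares across all hosts using the storage system
    """
    share_fraction = host_shares / total_shares          # host-specific value
    latency_factor = target_latency_ms / max(latency_ms, target_latency_ms)
    depth = int(max_depth * share_fraction * latency_factor)
    return max(min_depth, min(max_depth, depth))
```

Because each host needs only its own latency measurements and its own share count, no coordination between hosts is required, matching the "independently determine respective sizes" behavior the abstract describes.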
-
Patent number: 8886741Abstract: A method according to one embodiment includes the operations of configuring a primary receive queue to designate a first plurality of buffers; configuring a secondary receive queue to designate a second plurality of buffers, wherein said primary receive queue is sized to accommodate a first network traffic data rate and said secondary receive queue is sized to provide additional accommodation for burst network traffic data rates; selecting a buffer from said primary receive queue, if said primary receive queue has buffers available, otherwise selecting a buffer from said secondary receive queue; transferring data from a network controller to said selected buffer; indicating that said transferring to said selected buffer is complete; reading said data from said selected buffer; and returning said selected buffer, after said reading is complete, to said primary receive queue if said primary receive queue has space available for the selected buffer, otherwise returning said selected buffer to said secondary receive queue.Type: GrantFiled: June 21, 2011Date of Patent: November 11, 2014Assignee: Intel CorporationInventors: Yadong Li, Linden Cornett
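The select/return policy in this abstract (prefer the primary queue, spill to the secondary during bursts, refill the primary first) can be modeled in a few lines. The class name, buffer labels, and sizes below are assumptions made for the sketch.

```python
from collections import deque

# Illustrative model of the two-tier receive-buffer pool described above:
# buffers come from the primary queue when available, overflowing to the
# secondary queue during bursts; returned buffers refill the primary first.

class TwoTierBufferPool:
    def __init__(self, primary_size, secondary_size):
        self.primary_size = primary_size
        self.primary = deque(f"p{i}" for i in range(primary_size))
        self.secondary = deque(f"s{i}" for i in range(secondary_size))

    def select(self):
        """Take a buffer, preferring the primary queue."""
        if self.primary:
            return self.primary.popleft()
        if self.secondary:
            return self.secondary.popleft()
        raise RuntimeError("no receive buffers available")

    def give_back(self, buf):
        """Return a buffer to primary if it has room, else to secondary."""
        if len(self.primary) < self.primary_size:
            self.primary.append(buf)
        else:
            self.secondary.append(buf)
```

The point of the split is that the primary queue can stay small enough for steady-state traffic while the secondary absorbs bursts without growing the common-case working set.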
-
Patent number: 8886871Abstract: An apparatus and a method of page program operation are provided. When performing a page program operation with a selected memory device, a memory controller loads the data into the page buffer of one selected memory device and also into the page buffer of another selected memory device in order to store a back-up copy of the data. In the event that the data is not successfully programmed into the memory cells of the one selected memory device, the memory controller recovers the data from the page buffer of the other memory device. Since a copy of the data is stored in the page buffer of the other memory device, the memory controller does not need to locally store the data in its data storage elements.Type: GrantFiled: September 30, 2011Date of Patent: November 11, 2014Assignee: Conversant Intellectual Property Management IncorporatedInventors: Hong Beom Pyeon, Jin-Ki Kim, HakJune Oh
-
Patent number: 8886891Abstract: Accessing a shared buffer can include receiving an identifier associated with a buffer from a sending process, requesting one or more attributes corresponding to the buffer based on the received identifier, mapping at least a first page of the buffer in accordance with the one or more requested attributes, and accessing an item of data stored in the buffer by the sending process. The identifier also can comprise a unique identifier. Further, the identifier can be passed to one or more other processes. Additionally, the one or more requested attributes can include at least one of a pointer to a memory location and a property describing the buffer.Type: GrantFiled: March 26, 2008Date of Patent: November 11, 2014Assignee: Apple Inc.Inventors: Kenneth Christian Dyke, Jeremy Todd Sandmel, Geoff Stahl, John Kenneth Stauffer
-
Patent number: 8886845Abstract: A method, computer program product, and computing system for associating a first I/O scheduling queue with a first process accessing a storage network. The first I/O scheduling queue is configured to receive a plurality of first process I/O requests. A second I/O scheduling queue is associated with a second process accessing the storage network. The second I/O scheduling queue is configured to receive a plurality of second process I/O requests.Type: GrantFiled: April 2, 2014Date of Patent: November 11, 2014Assignee: EMC CorporationInventors: Roy E. Clark, Michel F. Fisher, Humberto Rodriguez
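This abstract associates one I/O scheduling queue with each process accessing the storage network. A minimal sketch of that association follows; the round-robin dispatch policy and all names are assumptions for illustration and are not claimed by the patent.

```python
from collections import defaultdict, deque

# Sketch of per-process I/O scheduling queues: each process accessing the
# storage network gets its own queue, so one process's backlog cannot
# starve another's requests. The dispatch policy here is an assumption.

class PerProcessScheduler:
    def __init__(self):
        self.queues = defaultdict(deque)   # process id -> its own queue

    def submit(self, pid, io_request):
        self.queues[pid].append(io_request)

    def dispatch(self):
        """Pop one request per process, round-robin across all queues."""
        batch = []
        for pid in list(self.queues):
            if self.queues[pid]:
                batch.append((pid, self.queues[pid].popleft()))
        return batch
```

Keeping queues separate per process is what enables fairness policies at dispatch time, since the scheduler can see and weigh each process's backlog individually.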
-
Patent number: 8885472Abstract: The systems and methods described herein allow for the scaling of output-buffered switches by decoupling the data path from the control path. Some embodiments of the invention include a switch with a memory management unit (MMU), in which the MMU enqueues data packets to an egress queue at a rate that is less than the maximum ingress rate of the switch. Other embodiments include switches that employ pre-enqueue work queues, with an arbiter that selects a data packet for forwarding from one of the pre-enqueue work queues to an egress queue.Type: GrantFiled: June 15, 2012Date of Patent: November 11, 2014Assignee: Broadcom CorporationInventors: Bruce Kwan, Brad Matthews, Puneet Agarwal
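The pre-enqueue arrangement above (ingress packets land in work queues, and an arbiter moves them to the egress queue at a bounded rate) can be sketched as follows. The round-robin arbiter and the per-cycle rate model are assumptions chosen for the sketch, not details from the patent.

```python
from collections import deque

# Sketch of an output-buffered switch with pre-enqueue work queues: an
# arbiter enqueues at most `enqueue_per_cycle` packets per cycle to the
# egress queue, i.e. below the aggregate ingress rate.

class OutputBufferedSwitch:
    def __init__(self, num_ports, enqueue_per_cycle):
        self.work_queues = [deque() for _ in range(num_ports)]
        self.egress = deque()
        self.enqueue_per_cycle = enqueue_per_cycle
        self._rr = 0  # round-robin pointer

    def ingress(self, port, packet):
        self.work_queues[port].append(packet)

    def cycle(self):
        """Arbiter: move at most enqueue_per_cycle packets to egress."""
        moved = 0
        checked = 0
        n = len(self.work_queues)
        while moved < self.enqueue_per_cycle and checked < n:
            q = self.work_queues[self._rr]
            self._rr = (self._rr + 1) % n
            checked += 1
            if q:
                self.egress.append(q.popleft())
                moved += 1
                checked = 0  # found work; reset the empty-scan counter
        return moved
```

Decoupling the data path (ingress into work queues) from the control path (the arbiter's enqueue decisions) is what lets the enqueue logic run slower than worst-case ingress without dropping packets.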
-
Patent number: 8885202Abstract: An image forming apparatus has a read unit to read a document image and generate image data of the document image, a memory management unit to manage a storage unit which is segmented into storage regions, and an engine to write the image data generated by the read unit to the storage unit. The engine acquires setting information related to writing of the image data from the memory management unit, and writes the image data to the storage unit based on the setting information that is acquired.Type: GrantFiled: June 26, 2009Date of Patent: November 11, 2014Assignee: Ricoh Company, Ltd.Inventor: Hidenori Shindoh
-
Patent number: 8880761Abstract: An efficient low latency buffer, and method of operation, is described. The efficient low latency buffer may be used as a bi-directional memory buffer in an audio playback device to buffer both output and input data. An application processor coupled to the bi-directional memory buffer is responsive to an indication to write data to the bi-directional memory buffer reads a defined size of input data from the bi-directional memory buffer. The input data read from the bi-directional memory buffer is replaced with output data of the defined size. In response to a mode-change signal, the defined size of data is changed that is read and written from and to the bi-directional memory buffer. The buffer may allow the application processor to enter a low-powered sleep mode more frequently.Type: GrantFiled: February 22, 2013Date of Patent: November 4, 2014Assignee: BlackBerry LimitedInventors: Scott Edward Bulgin, Cyril Martin, Bengt Stefan Gustavsson
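The exchange pattern this abstract describes (read a defined-size block of input, overwrite the same region with output of the same size, and change that size on a mode-change signal) can be modeled compactly. The class, the default block size, and the byte-level layout are assumptions for illustration.

```python
# Sketch of a bi-directional memory buffer: on each wakeup the application
# processor reads one input block and replaces it in-place with an output
# block of the same defined size, so one buffer serves both directions.

class BiDirectionalBuffer:
    def __init__(self, size):
        self.data = bytearray(size)
        self.block = 64           # current defined transfer size (assumed)

    def set_mode(self, block_size):
        """Mode-change signal: alter the defined size read and written."""
        self.block = block_size

    def exchange(self, output_block):
        """Read one input block, replace it in-place with output data."""
        assert len(output_block) == self.block
        inp = bytes(self.data[:self.block])
        self.data[:self.block] = output_block
        return inp
```

Pairing each read with a same-size write halves the number of processor wakeups relative to handling the two directions separately, which is what allows the processor to sleep more often.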
-
Patent number: 8880780Abstract: An apparatus and method are provided for using a page buffer of a memory device as a temporary cache for data. A memory controller writes data to the page buffer and later reads out the data without programming the data into the memory cells of the memory device. This allows the memory controller to use the page buffer as a temporary cache so that the data does not have to occupy space within the memory controller's local data storage elements. Therefore, the memory controller can use the space in its own storage elements for other operations.Type: GrantFiled: August 23, 2011Date of Patent: November 4, 2014Assignee: Conversant Intellectual Property Management IncorporatedInventors: Hong Beom Pyeon, Jin-Ki Kim, HakJune Oh
-
Patent number: 8880760Abstract: In one aspect a memory module storing a plurality of packets is provided. A self organizing heap contains elements associated with each of the packets. The self organizing heap reorders the packets based on packet passing rules. In another aspect, a plurality of elements associated with packets is provided. Each element includes a state machine. The state machine operates in accordance with packet passing rules. The state machine reorders the packets by selective swapping of adjacent elements.Type: GrantFiled: April 27, 2012Date of Patent: November 4, 2014Assignee: Hewlett-Packard Development Company, L.P.Inventors: Derek Alan Sherlock, Matthew B Lovell
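The "selective swapping of adjacent elements" in this abstract is essentially a bubble-style settling pass driven by packet-passing rules. A sketch follows; the tuple representation and the "higher priority may pass lower priority" rule are assumptions for illustration, since the patent leaves the rules abstract.

```python
# Sketch of reordering by selective swapping of adjacent elements: each pass
# compares neighbours and swaps when the packet-passing rule allows the
# later packet to overtake the earlier one, repeating until stable.

def reorder_packets(packets, may_pass):
    """Settle a packet list under a packet-passing rule.

    packets  -- list of (priority, payload) tuples, earliest first
    may_pass -- rule: may_pass(behind, ahead) -> True if behind overtakes
    """
    pkts = list(packets)
    swapped = True
    while swapped:                      # repeat until no element moves
        swapped = False
        for i in range(len(pkts) - 1):
            if may_pass(pkts[i + 1], pkts[i]):
                pkts[i], pkts[i + 1] = pkts[i + 1], pkts[i]
                swapped = True
    return pkts
```

Because swaps only happen between adjacent elements, this maps naturally onto the per-element state machines the abstract mentions: each element needs to compare itself only with its immediate neighbours.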
-
Publication number: 20140325125Abstract: A method of transmitting atomic write data from a host to a data storage device in a data system includes: communicating a header identifying a plurality of data chunks associated with an atomic write operation from the host to the data storage device and storing the header in a buffering area designated in the data storage device, then successively communicating the plurality of data chunks from the host to the data storage device and successively storing each one of the plurality of data chunks in the buffering area, and then storing write data including at least the plurality of data chunks in a first area of storage media in the data storage device.Type: ApplicationFiled: April 28, 2014Publication date: October 30, 2014Applicant: Samsung Electronics Co., Ltd.Inventors: MOON SANG KWON, JONG HYUN YOON, HYUNG JIN IM, SANG HOON CHOI
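The header-then-chunks protocol above can be modeled as a small state machine: the device stages chunks in a buffering area and commits the whole set to storage media only once every chunk named in the header has arrived. The dict-based "media", the status strings, and the method names are assumptions for this sketch.

```python
# Illustrative model of the atomic-write handshake: a header names the
# chunks, chunks are staged in a buffering area, and the set is committed
# to storage only when complete, so a partial write is never visible.

class AtomicWriteDevice:
    def __init__(self):
        self.buffering_area = {}
        self.expected = None
        self.media = {}            # committed storage, chunk id -> data

    def receive_header(self, chunk_ids):
        self.expected = set(chunk_ids)
        self.buffering_area = {}

    def receive_chunk(self, chunk_id, data):
        assert self.expected is not None and chunk_id in self.expected
        self.buffering_area[chunk_id] = data
        if set(self.buffering_area) == self.expected:
            self.media.update(self.buffering_area)   # commit atomically
            self.buffering_area = {}
            self.expected = None
            return "committed"
        return "buffered"
```

Staging in the buffering area is what gives the operation its atomicity: if power fails mid-transfer, the storage media still holds only the old data.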
-
Patent number: 8874810Abstract: Efficient and convenient storage systems and methods are presented. In one embodiment a storage system includes a plurality of storage nodes and a master controller. The storage nodes store information. Each storage node includes an upstream communication buffer which is locally controlled at the storage node to facilitate resolution of conflicts in upstream communications. The master controller controls the flow of traffic to the node based upon constraints of the upstream communication buffer. In one embodiment, communication between the master controller and the node has a determined maximum latency. The storage node can be coupled to the master controller in accordance with a chain memory configuration.Type: GrantFiled: November 21, 2008Date of Patent: October 28, 2014Assignee: Spansion LLCInventors: Roger Dwain Isaac, Seiji Miura
-
Patent number: 8874809Abstract: An assembly in which a number of receivers receive packets for storing in queues in a storage, together with a means for de-queuing data from the storage. A controller determines addresses for the storage, the addresses being determined on the basis of at least a fill level of the queue(s), where information relating to de-queue addresses is only read out when the fill level(s) exceed a limit, so as not to spend bandwidth on this information before it is required.Type: GrantFiled: December 6, 2010Date of Patent: October 28, 2014Assignee: Napatech A/SInventor: Peter Korger
-
Patent number: 8874808Abstract: The present invention provides a system and method for controlling data entries in a hierarchical buffer system. The system includes an integrated circuit device with a memory core, a high speed upstream data bus, and a plurality of 1st tier buffers that receive data from the memory. The system further includes a 2nd tier transfer buffer spanning a plurality of asynchronous timing domains that delivers the data onto the upstream data bus to minimize gaps in a data transfer. The method includes managing the buffers to allow data to flow from a plurality of 1st tier buffers through a 2nd tier transfer buffer, and delivering the data onto a high speed data bus with pre-determined timing in a manner which minimizes latency to the extent that the returning read data beats are always transmitted contiguously with no intervening gaps.Type: GrantFiled: January 18, 2012Date of Patent: October 28, 2014Assignee: International Business Machines CorporationInventors: Steven J. Hnatko, Gary A. Van Huben
-
Patent number: 8868801Abstract: A novel and efficient method is described that creates a monolithic high capacity Packet Engine (PE) by connecting N lower capacity Packet Engines (PEs) via a novel Chip-to-Chip (C2C) interface. The C2C interface is used to perform functions, such as memory bit slicing and to communicate shared information, and enqueue/dequeue operations between individual PEs.Type: GrantFiled: October 10, 2013Date of Patent: October 21, 2014Assignee: Altera European Trading Company LimitedInventor: Hartvig Ekner
-
Patent number: 8868799Abstract: This invention controls data transmission from a data source to a sink. The data source buffers the data and signals to transmit upon storing a burst amount of data. The data source may include a plurality of data sources. A merge unit merges data by receiving and retransmitting data from each data source which signals to transmit, inserting a source identity block each time the merged data is received from a different source.Type: GrantFiled: January 10, 2013Date of Patent: October 21, 2014Assignee: Texas Instruments IncorporatedInventor: Gary L Swoboda
-
Patent number: 8868800Abstract: Technologies are generally described for methods and systems effective to provide accelerator buffer access. An operating system may allocate a range of addresses in virtual address spaces and a range of addresses in a buffer mapped region of a physical (or main) memory. A request by an application to read or write data may be read from, or written to, the virtual address space. A memory management unit may then map the read or write requests from the virtual address space to the main or physical memory. Multiple applications may be able to operate as if each application has exclusive access to the accelerator and its buffer. Multiple accesses to the buffer by application tasks may avoid a conflict because the memory controller may be configured to fetch data based on respective application identifiers assigned to the applications. Each application may be assigned a different application identifier.Type: GrantFiled: March 12, 2013Date of Patent: October 21, 2014Assignee: Empire Technology Development LLCInventor: Yan Solihin
-
Publication number: 20140310466Abstract: A multi-processor cache and bus interconnection system. A multi-processor is provided a segmented cache and an interconnection system for connecting the processors to the cache segments. An interface unit communicates to external devices using module IDs and timestamps. A buffer protocol includes a retransmission buffer and method.Type: ApplicationFiled: June 27, 2014Publication date: October 16, 2014Applicant: PACT XPP TECHNOLOGIES AGInventors: Martin Vorbach, Volker Baumgarte, Frank May, Armin Nuckel
-
Publication number: 20140304440Abstract: A method for communicating data between peripheral devices and an embedded processor that includes receiving, at a data buffer unit of the embedded processor, the data from a peripheral device. The method also includes copying data from the data buffer unit into the bridge buffer of the embedded processor as a bridge buffer message. Additionally, the method includes creating, after storing the data as a bridge buffer message, a peripheral device message comprising the bridge buffer message, and sending the peripheral device message to a thread message queue of a subscriber.Type: ApplicationFiled: April 4, 2014Publication date: October 9, 2014Applicant: WILLIAM MARSH RICE UNIVERSITYInventors: Thomas William Barr, Scott Rixner
-
Publication number: 20140297906Abstract: A buffer circuit includes: a register array including registers in a plurality of stages; and a control circuit configured to rearrange a plurality of pieces of received data in the register in a determined transfer order and to control the register array to sequentially output the plurality of pieces of received data as one piece of transfer data when all the received data is stored, wherein the control circuit controls the register array to store stored data in each register in a preceding stage when the register array outputs the received data, and the control circuit determines a write register in accordance with the transfer order when the register array newly stores the received data and controls the register array to store data stored in the write register in a following stage of the write register and to store the new received data in the write register.Type: ApplicationFiled: March 21, 2014Publication date: October 2, 2014Applicant: FUJITSU SEMICONDUCTOR LIMITEDInventor: Ryuji KOJIMA
-
Patent number: 8850059Abstract: Embodiments of the invention include a communication interface and protocol for allowing communication between devices, circuits, integrated circuits and similar electronic components having different communication capacities or clock domains. The interface supports communication between any components having any difference in capacity and over any distance. The interface utilizes request and acknowledge phases and signals and an initiator-target relationship between components that allow each side to throttle the communication rate to an accepted level for each component or achieve a desired bit error rate.Type: GrantFiled: January 12, 2009Date of Patent: September 30, 2014Assignee: Micron Technology, Inc.Inventors: Jeffrey D. Hoffman, Allan R. Bjerke
-
Patent number: 8850089Abstract: A method and apparatus for unified final buffer with pointer-based and page-based scheme for traffic optimization have been disclosed.Type: GrantFiled: June 18, 2010Date of Patent: September 30, 2014Assignee: Integrated Device Technology, Inc.Inventors: Chi-Lie Wang, Jason Z. Mo
-
Patent number: 8848532Abstract: A data processing method and system and relevant devices are provided to improve the processing efficiency of cores. The method includes: storing received packets in a same stream sequentially; receiving a Get_packet command sent by each core; selecting, according to a preset scheduling rule, packets from among the stored packets to be processed by each core; receiving a tag switching command sent by each core, where the tag switching command indicates that the core has finished a current processing stage; and performing tag switching for the packets in First In First Out (FIFO) order, and allocating the packets to a subsequent core according to the Get_packet command sent by the subsequent core after completion of the tag switching, so that the packet processing continues until all processing stages are finished. A data processing system and relevant devices are provided. With the present invention, the processing efficiency of cores may be improved.Type: GrantFiled: April 15, 2011Date of Patent: September 30, 2014Assignee: Huawei Technologies Co., Ltd.Inventors: Lingyun Zhi, Linhan Li, Fei Song, Zuolin Ning
-
Patent number: 8843676Abstract: An embodiment of the invention pertains to a method that includes an operating system, program components running on the operating system, and a file system associated with one or more files. Responsive to a write request sent from a specified program component to the operating system, in order to write specified data content to a given file, the method determines whether the write request meets a criterion, which is derived from the identity of at least one of the specified program component, and the given file. If the criterion is met, a message is immediately sent to release the specified program component from a wait state. Data portions of the specified data content are then selectively written to a storage buffer, and subsequently written from the buffer to the given file.Type: GrantFiled: June 27, 2012Date of Patent: September 23, 2014Assignee: International Business Machines CorporationInventors: Logeswaran T. Rajamanickam, Arun Ramakrishnan, Ashrith Shetty, Rohit Shetty
-
Patent number: 8843677Abstract: An embodiment of the invention pertains to a method that includes an operating system, program components running on the operating system, and a file system associated with one or more files. Responsive to a write request sent from a specified program component to the operating system, in order to write specified data content to a given file, the method determines whether the write request meets a criterion, which is derived from the identity of at least one of the specified program component, and the given file. If the criterion is met, a message is immediately sent to release the specified program component from a wait state. Data portions of the specified data content are then selectively written to a storage buffer, and subsequently written from the buffer to the given file.Type: GrantFiled: August 20, 2012Date of Patent: September 23, 2014Assignee: International Business Machines CorporationInventors: Logeswaran T. Rajamanickam, Arun Ramakrishnan, Ashrith Shetty, Rohit Shetty
-
Patent number: 8843675Abstract: A method and system to transfer data from one or more data sources to one or more data sinks using a pipelined buffer interconnect fabric is described.Type: GrantFiled: October 5, 2007Date of Patent: September 23, 2014Assignee: Broadcom CorporationInventor: Scott Krig
-
Patent number: 8843694Abstract: Systems and methods are provided for using page buffers of memory devices connected to a memory controller through a common bus. A page buffer of a memory device is used as a temporary cache for data which is written to the memory cells of the memory device. This can allow the memory controller to use memory devices as temporary caches so that the memory controller can free up space in its own memory.Type: GrantFiled: November 22, 2011Date of Patent: September 23, 2014Assignee: Conversant Intellectual Property Management Inc.Inventors: Hong Beom Pyeon, Jin-Ki Kim, HakJune Oh
-
Publication number: 20140281057Abstract: A system includes a plurality of processors, a message fabric, and a plurality of hardware units. Each of the plurality of processors comprises a plurality of communication FIFOs and has an instruction set including at least one instruction to send a message via at least one of the plurality of communication FIFOs. The message fabric couples the processors via at least some of the plurality of communication FIFOs. Each of the processors is associated with a respective one or more of the hardware units and coupled to each of the associated hardware units via respective hardware unit input and output communication FIFOs. Each of the processors is enabled to send messages to others of the processors via respective processor output communication FIFOs. The respective hardware units associated with each of the processors are enabled to send messages to the associated processor via the respective hardware unit input communication FIFOs.Type: ApplicationFiled: April 17, 2013Publication date: September 18, 2014Applicant: LSI CorporationInventors: Earl T. Cohen, Mark vonGnechten
-
Publication number: 20140281058Abstract: Technologies are generally described for methods and systems effective to provide accelerator buffer access. An operating system may allocate a range of addresses in virtual address spaces and a range of addresses in a buffer mapped region of a physical (or main) memory. A request by an application to read or write data may be read from, or written to, the virtual address space. A memory management unit may then map the read or write requests from the virtual address space to the main or physical memory. Multiple applications may be able to operate as if each application has exclusive access to the accelerator and its buffer. Multiple accesses to the buffer by application tasks may avoid a conflict because the memory controller may be configured to fetch data based on respective application identifiers assigned to the applications. Each application may be assigned a different application identifier.Type: ApplicationFiled: March 12, 2013Publication date: September 18, 2014Applicant: Empire Technology Development, LLCInventor: Yan Solihin
-
Patent number: 8838782Abstract: In a network protocol processing system in which variables of each of TCP transmission processing and TCP reception processing depend on each other, asynchronous parallel processing is realized between a transmission processing block and a reception processing block for updated protocol processing. Specifically, the system includes a high priority queue for transferring control data to be processed with high priority, a low priority queue for control data other than the above control data, and priority control means for distributing the control data to two kinds of queues. When a request for session establishment and the session disconnection of a new TCP session is issued from an application during transmission of TCP data, data related with the session establishment and the session disconnection is notified preferentially through the high priority queue, and other control data is transferred through the low priority queue.Type: GrantFiled: July 2, 2009Date of Patent: September 16, 2014Assignee: NEC CorporationInventors: Masato Yasuda, Kiyohisa Ichino
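The two-queue priority scheme this abstract describes (session establishment/disconnection control data via a high-priority queue, all other control data via a low-priority queue) is straightforward to sketch. The classification by TCP message kind (`SYN`/`FIN`) is an assumption made for illustration; the patent speaks only of session establishment and disconnection data.

```python
from collections import deque

# Sketch of priority control for TCP control data: session setup/teardown
# notifications go through a high-priority queue and are always drained
# before other control data in the low-priority queue.

HIGH_PRIORITY_KINDS = {"SYN", "FIN"}    # session setup / teardown (assumed)

class ControlDataQueues:
    def __init__(self):
        self.high = deque()
        self.low = deque()

    def distribute(self, kind, payload):
        """Priority control means: route control data to one of two queues."""
        (self.high if kind in HIGH_PRIORITY_KINDS else self.low).append(
            (kind, payload))

    def next_item(self):
        """Drain the high-priority queue before touching the low one."""
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None
```

This keeps session lifecycle events from being stuck behind bulk control traffic during ongoing data transmission, which is the latency problem the two queues exist to solve.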
-
Patent number: 8838879Abstract: Transfer order information is created indicating an order of transfer from multiple memory areas in accordance with an order of logical addresses and memory locations specified by read commands. Readout from the multiple memory areas is performed in accordance with the created transfer order information by controlling the memory controllers.Type: GrantFiled: September 21, 2011Date of Patent: September 16, 2014Assignee: Kabushiki Kaisha ToshibaInventor: Norikazu Yoshida
-
Patent number: 8832336Abstract: A system is provided for increasing the efficiency of data transfer through a serializer-deserializer (SerDes) link, and for reducing data latency caused by differences between the arrival times of the data on the SerDes link and the system clock with which the device operates.Type: GrantFiled: January 30, 2010Date of Patent: September 9, 2014Assignee: MoSys, Inc.Inventors: Michael J. Morrison, Jay B. Patel, Philip A. Ferolito, Michael J. Miller
-
Publication number: 20140250246Abstract: A dynamically controllable buffering system includes a data buffer that is communicatively coupled between first and second data interfaces and operable to perform as an elasticity first-in-first-out buffer in a first mode and to perform as a store-and-forward buffer in a second mode. The system also includes a controller that is operable to detect data rates of the first and second data interfaces, to operate the data buffer in the first mode when the first data interface has a data transfer rate that is faster than a data transfer rate of the second data interface, and to operate the data buffer in the second mode when the second data interface has a data transfer rate that is faster than the data transfer rate of the first data interface.Type: ApplicationFiled: March 13, 2013Publication date: September 4, 2014Applicant: LSI CORPORATIONInventors: Richard Solomon, Eugene Saghi, John C. Udell
-
Patent number: 8824080Abstract: Provided is a method for recording data to a tape medium in such a manner as to achieve the easy management of mutually related multiple data pieces. First data and second data continuously received as a file from a higher level apparatus are accumulated in multiple buffer segments in the form of multiple successive data sets. A data structure is determined for each of the accumulated data sets. Management information indicating a result of the determination is added to the data sets, and the data sets and the management information thereof are stored into the tape medium.Type: GrantFiled: January 29, 2010Date of Patent: September 2, 2014Assignee: International Business Machines CorporationInventors: Hiroshi Itagaki, Toshiyuki Shiratori
-
Patent number: 8819311Abstract: Files on a secondary storage are accessed using alternative IO subroutines that buffer IO requests made by a user and mimic the IO subroutines provided by an operating system. The buffer used by the alternative IO subroutines is maintained by the user and not the operating system. User applications are not recompiled or relinked when using the alternative subroutines because the library that provides these subroutines intercepts requests for buffered IO made by user applications to the operating system's IO subroutines and replaces the requests with calls to the alternative IO subroutines that utilize the buffer maintained by the user.Type: GrantFiled: May 23, 2007Date of Patent: August 26, 2014Assignee: RPX CorporationInventor: Cheng Liao
-
Patent number: 8819309Abstract: Buffer circuitry 14 is provided with shared buffer circuitry 20 which stores, in order of reception time, data transaction requests received from one or more data transaction sources. The buffer circuitry 14 operates in either a bypass mode or a non-bypass mode. When operating in the bypass mode, any low latency data transaction requests stored within the shared buffer circuitry are selected in order for output in preference to data transaction requests that are not low latency data transaction requests. In the non-bypass mode, transactions (whether or not they are low latency transactions) are output from the shared buffer circuitry 20 in accordance with the order in which they are received into the shared buffer circuitry 20. The switch between the bypass mode and the non-bypass mode is made in dependence upon comparison of a detected rate of output of low latency data transaction requests compared to a threshold value.Type: GrantFiled: June 14, 2013Date of Patent: August 26, 2014Assignee: ARM LimitedInventors: Alistair Crone Bruce, Andrew David Tune
-
Patent number: 8819312Abstract: Systems and methods are provided for a first-in-first-out buffer. A buffer includes a first sub-buffer configured to store data received from a buffer input, and a second sub-buffer. The second sub-buffer is configured to store data received from either the buffer input or the first sub-buffer and to output data to a buffer output in a same order as that data is received at the buffer input. Buffer control logic is configured to selectively route data from the buffer input or the first sub-buffer to the second sub-buffer so that data received at the buffer input is available to be output from the second sub-buffer in a first-in-first-out manner.Type: GrantFiled: August 12, 2011Date of Patent: August 26, 2014Assignee: Marvell Israel (M.I.S.L) Ltd.Inventors: Evgeny Shumsky, Jonathan Kushnir
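The routing logic in this abstract (new data goes to the second sub-buffer directly while it has room, otherwise it is held in the first sub-buffer and moved forward as space frees, preserving overall FIFO order) can be sketched as follows. The capacity parameter and class name are assumptions for the sketch.

```python
from collections import deque

# Sketch of a two-stage FIFO: stage2 is the output sub-buffer, stage1 holds
# overflow; the control logic routes input so that items always leave in
# first-in-first-out order.

class TwoStageFifo:
    def __init__(self, stage2_capacity):
        self.stage1 = deque()                 # overflow holding buffer
        self.stage2 = deque()                 # output stage
        self.cap = stage2_capacity

    def push(self, item):
        # Route input to stage2 only if stage1 is empty and stage2 has
        # room; otherwise FIFO order would be violated.
        if not self.stage1 and len(self.stage2) < self.cap:
            self.stage2.append(item)
        else:
            self.stage1.append(item)

    def pop(self):
        item = self.stage2.popleft()
        if self.stage1:                       # refill from stage1 in order
            self.stage2.append(self.stage1.popleft())
        return item
```

The invariant the control logic maintains is that everything in stage1 arrived after everything in stage2, so draining stage2 first and refilling it from stage1 reproduces arrival order exactly.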
-
Patent number: 8811417Abstract: A Network Interface (NI) includes a host interface, which is configured to receive from a host processor of a node one or more cross-channel work requests that are derived from an operation to be executed by the node. The NI includes a plurality of work queues for carrying out transport channels to one or more peer nodes over a network. The NI further includes control circuitry, which is configured to accept the cross-channel work requests via the host interface, and to execute the cross-channel work requests using the work queues by controlling an advance of at least a given work queue according to an advancing condition, which depends on a completion status of one or more other work queues, so as to carry out the operation.Type: GrantFiled: November 15, 2010Date of Patent: August 19, 2014Assignee: Mellanox Technologies Ltd.Inventors: Noam Bloch, Gil Bloch, Ariel Shachar, Hillel Chapman, Ishai Rabinovitz, Pavel Shamis, Gilad Shainer
-
Patent number: 8812754Abstract: A network relay device includes a packet buffer for temporarily storing a received packet, and a packet buffer control section for changing the effective number of buffers depending on the amount of packet traffic received. When the traffic amount is small, the packet buffer control section reduces the power consumption by stopping the feeding of power or the supply of clock to a part of the packet buffers. The network relay device further includes plural table memories storing a table for deciding the transfer destination of a packet, and a table memory control section for changing the effective number of tables according to a required number of table entries. When the required table entry number is small, the table memory control section reduces the power consumption by stopping the feeding of power or the supply of clock to a part of the table memories.Type: GrantFiled: June 11, 2010Date of Patent: August 19, 2014Assignee: Alaxala Networks CorporationInventors: Kentaroh Sugawara, Shinichi Akahane, Hiroki Yano, Yuichi Ishikawa
-
Patent number: 8812771
Abstract: A data storage system and a data storing method for the data storage system are provided. The data storage system includes a host unit, a storage unit, and a first input/output bus functioning as an interface between the host unit and the storage unit. The storage unit includes a non-volatile memory buffer unit and a flash memory unit. The non-volatile memory buffer unit includes a plurality of buffers arranged in parallel. The flash memory unit includes a plurality of data storage devices arranged in parallel to input and output data using a parallel method. In the method, when a write request arrives, it is first classified into one of a plurality of grades according to its write-request frequency, and the requested data is stored in either the non-volatile memory buffer unit or the flash memory unit according to that frequency.
Type: Grant
Filed: February 12, 2010
Date of Patent: August 19, 2014
Assignee: Samsung Electronics Co., Ltd.
Inventors: Keunsoo Yim, Jeongjoon Yoo, Jungkeun Park
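The routing decision described here — grade a write by how often its target is written, then pick the NVM buffer or flash accordingly — can be sketched in a few lines of Python. The two-grade split, the threshold, and the destination names are assumptions for illustration; the patent allows an arbitrary number of grades:

```python
def route_write(block_id, write_counts, hot_threshold=4):
    """Classify a write request by its observed write frequency and
    choose a destination: frequently written ('hot') blocks go to the
    non-volatile memory buffer, infrequently written ones to flash."""
    write_counts[block_id] = write_counts.get(block_id, 0) + 1
    if write_counts[block_id] >= hot_threshold:
        return "nvm_buffer"   # hot grade: absorb churn in the NVM buffer
    return "flash"            # cold grade: write straight to flash
```

The design point is that repeated writes to the same block are absorbed by the buffer instead of wearing the flash, which is why frequency rather than size drives the classification.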
-
Patent number: 8804752
Abstract: A method for temporary storage of data units including receiving a first data unit to store in a hardware linked list queue on a communications adapter, reading a first index value from the first data unit, determining that the first index value does match an existing index value of a first linked list, and storing the first data unit in the hardware linked list queue as a member of the first linked list. The method further includes receiving a second data unit, reading a second index value from the second data unit, determining that the second index value does not match any existing index value, allocating space in the hardware linked list queue for a second linked list, and storing the second data unit in the second linked list.
Type: Grant
Filed: May 31, 2011
Date of Patent: August 12, 2014
Assignee: Oracle International Corporation
Inventors: Brian Edward Manula, Magne Vigulf Sandven, Haakon Ording Bugge
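The two cases in this abstract (index matches an existing list vs. index unseen, allocate a new list) amount to a map from index values to FIFO lists. A minimal software sketch, with class and method names chosen for illustration rather than taken from the patent:

```python
from collections import deque

class IndexedListQueue:
    """Sketch of the indexed linked-list queue: data units carrying the
    same index value are appended to the same list; an unseen index
    value causes a new list to be allocated."""
    def __init__(self):
        self.lists = {}  # index value -> FIFO of data units

    def store(self, index_value, data_unit):
        if index_value not in self.lists:
            # Index does not match any existing list: allocate one.
            self.lists[index_value] = deque()
        # Index matches (or now exists): append as a list member.
        self.lists[index_value].append(data_unit)
```

In the patent the lists live in adapter hardware, so "allocating space" is a real resource decision rather than a dictionary insert, but the matching logic is the same.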
-
Patent number: 8806541
Abstract: A system for a mobile wireless device to receive and display a video stream while preventing overflow or starvation of its receive buffer by requesting changes to the video streaming or encoding rates and by controlling the video playback frame rate. The current receive buffer level is used to make comparisons with several thresholds, the results of which are used to trigger actions. If the current receive buffer level has risen above a start level, then playback of the video can begin. If the current receive buffer level rises above an early detection threshold, then the video streaming device is requested to slow its streaming rate. If the current receive buffer level rises above a high level threshold, then the video streaming device is requested to stop streaming the video. If the current receive buffer level drops below a low level threshold, then the playback frame rate is slowed.
Type: Grant
Filed: January 28, 2011
Date of Patent: August 12, 2014
Assignee: AT&T Mobility II LLC
Inventors: Jun Shen, Venson Shaw
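The four threshold comparisons in this abstract map directly to a small decision function. A minimal Python sketch; the percentage values and action names are illustrative assumptions, not values from the patent:

```python
def buffer_actions(level, start=20, early=60, high=85, low=10):
    """Map the current receive-buffer fill level (percent) to the
    threshold-triggered actions the abstract describes."""
    actions = []
    if level >= start:
        actions.append("start_playback")             # above start level
    if level >= early:
        actions.append("request_slower_streaming")   # early detection
    if level >= high:
        actions.append("request_stop_streaming")     # near overflow
    if level < low:
        actions.append("slow_playback_frame_rate")   # near starvation
    return actions
```

Note the asymmetry the patent exploits: overflow is handled upstream by throttling the sender, while starvation is handled locally by slowing playback, since the device cannot force data to arrive faster.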
-
Patent number: 8806071
Abstract: A memory device includes a memory array, an output buffer, an initial latency register, and an output signal. Often, a host device that interfaces with the memory device is clocked at a rate high enough that the data extraction rates of the memory device are not adequate to support a gapless data transfer. The output signal is operable to stall a transmission between the memory device and the host device when data extraction rates from the memory array are not adequate to support output rates of the output buffer.
Type: Grant
Filed: January 25, 2012
Date of Patent: August 12, 2014
Assignee: Spansion LLC
Inventor: Clifford Alan Zitlaw
-
Patent number: 8806143
Abstract: A method and apparatus for queuing FBNs of received write blocks for a file to a queuing data structure for assigning LBNs to the FBNs is described herein. A queuing data structure may comprise a modified binary search tree, such as a modified red-black search tree. Each node of a queuing data structure may comprise a base field for storing a base FBN and a range field for storing a range value comprising X bits. The range field of a single node may represent a range of two or more FBNs ("FBN range"), the FBN range being based on the base FBN. Each FBN in the FBN range may have a corresponding bit in the range field, the base FBN corresponding to a "base bit" in the range field. The value of the corresponding bit in the range field may indicate whether the FBN has been received.Type: Grant
Filed: October 9, 2009
Date of Patent: August 12, 2014
Assignee: NetApp, Inc.
Inventor: Shiow-wen Wendy Cheng
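The node layout this abstract describes — a base FBN plus an X-bit range field whose bit i records whether FBN (base + i) has arrived — can be sketched as a small Python class. The class and method names are illustrative; the patent embeds such nodes in a modified red-black tree, which is omitted here:

```python
class RangeNode:
    """One queuing-structure node: a base FBN and an X-bit range field.
    Bit i of the field indicates whether FBN (base + i) was received;
    bit 0 is the 'base bit' for the base FBN itself."""
    def __init__(self, base_fbn, range_bits=8):
        self.base = base_fbn
        self.range_bits = range_bits
        self.field = 0

    def mark_received(self, fbn):
        offset = fbn - self.base
        assert 0 <= offset < self.range_bits, "FBN outside this node's range"
        self.field |= 1 << offset

    def is_received(self, fbn):
        return bool(self.field & (1 << (fbn - self.base)))
```

Packing a run of FBNs into one node's bitmask is what keeps the tree small: a node covers X consecutive block numbers instead of one node per FBN.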
-
Patent number: 8806090
Abstract: Memory system controllers can include hardware masters, first buffers, and a switch coupled to the hardware masters and to the first buffers. The switch can include second buffers and a buffer allocation management (BAM) circuit. The BAM circuit can include a buffer tag pool. The buffer tag pool can include tags, each identifying a respective first buffer or a respective second buffer. The BAM circuit can be configured to allocate a tag to a hardware master in response to an allocation request from one of the hardware masters. The BAM circuit can be configured to prioritize allocation of a tag identifying a second buffer over a tag identifying a first buffer.
Type: Grant
Filed: May 31, 2011
Date of Patent: August 12, 2014
Assignee: Micron Technology, Inc.
Inventors: Douglas A. Larson, Joseph M. Jeddeloh
-
Patent number: 8806089
Abstract: A traffic manager includes an execution unit that is responsive to instructions related to queuing of data in memory. The instructions may be provided by a network processor that is programmed to generate such instructions, depending on the data. Examples of such instructions include (1) writing of data units (of fixed size or variable size) without linking to a queue, (2) re-sequencing of the data units relative to one another without moving the data units in memory, and (3) linking the previously-written data units to a queue. The network processor and traffic manager may be implemented in a single chip.
Type: Grant
Filed: December 21, 2012
Date of Patent: August 12, 2014
Assignee: Net Navigation Systems, LLC
Inventors: Andrew Li, Michael Lau, Asad Khamisy
-
Patent number: 8804507
Abstract: A method, apparatus and computer program product for temporal-based flow distribution across multiple packet processors is presented. A packet is received and a hash identifier (ID) is computed for the packet. The hash ID is used to index into a State Table and to retrieve a corresponding record. When a time credit field of the record is zero, then the time credit field is set to a new value; a Packet Processing Engine (PE) whose First-In-First-Out buffer (FIFO) has the lowest fill level is selected; and a PE number field in the state table record is updated with the selected PE number. When the time credit field of the record is non-zero, then the packet is sent to a PE based on the value stored in the record, and the time credit field in the record is decremented if the time credit field is greater than zero.
Type: Grant
Filed: March 31, 2011
Date of Patent: August 12, 2014
Assignee: Avaya, Inc.
Inventor: Hamid Assarpour
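The dispatch loop this abstract describes — stick with a flow's recorded PE while its time credit lasts, and rebind to the least-loaded PE when the credit runs out — can be sketched in Python. The function name, the record layout, and the credit value are illustrative assumptions; a flow with no record is treated the same as one whose credit reached zero:

```python
def dispatch(packet_hash, state, fifo_levels, credit=16):
    """Select a Packet Processing Engine (PE) index for a packet.
    state maps a flow's hash ID to {'pe': ..., 'credit': ...};
    fifo_levels[i] is the current fill level of PE i's FIFO."""
    rec = state.get(packet_hash)
    if rec is None or rec["credit"] == 0:
        # Credit exhausted (or flow unseen): pick the PE whose FIFO has
        # the lowest fill level, record it, and refill the time credit.
        pe = min(range(len(fifo_levels)), key=fifo_levels.__getitem__)
        state[packet_hash] = {"pe": pe, "credit": credit}
        return pe
    # Credit remaining: keep the flow on its recorded PE and decrement.
    rec["credit"] -= 1
    return rec["pe"]
```

The credit acts as a hysteresis window: packets of one flow stay on one PE long enough to preserve ordering and cache locality, yet the flow periodically gets a chance to migrate toward the emptiest FIFO.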
-
Patent number: 8799606
Abstract: Computer memory subsystems are disclosed for enhancing signal quality that include: one or more memory modules; a memory bus; and a memory controller connected to the memory modules through the memory bus, the memory controller including a reception buffer connected to the memory bus, the reception buffer capable of receiving an input signal from one of the memory modules, the memory controller including a reception characteristics table capable of storing reception characteristics for each of the memory modules connected to the memory controller, the memory controller including an equalizer connected to the reception buffer and the reception characteristics table, the equalizer capable of equalizing the received input signal in dependence upon the reception characteristics for the memory module from which the input signal was received, and the memory controller including memory controller logic connected to the equalizer, the memory controller logic capable of processing the equalized input signal.
Type: Grant
Filed: December 20, 2007
Date of Patent: August 5, 2014
Assignee: International Business Machines Corporation
Inventors: Justin P. Bandholz, Jonathan R. Hinkle, Devarshi S. Patel, Pravin S. Patel, Kevin M. Reinberg
-
Patent number: 8799536
Abstract: An apparatus, in which a plurality of modules is connected with each other and processes a packet having information, includes a storage unit for storing first information indicating an order of processing performed by its own module and second information indicating an order of modules which perform processing, a reception unit for receiving a first packet and transmitting the first packet including information corresponding to the first information, a processing unit for processing data included in the first packet, a generation unit for generating a second packet including the processed data and the second information, and a transmission unit for comparing the information included in the first packet with the second information included in the second packet, and transmitting the packet having the later processing order.
Type: Grant
Filed: June 23, 2010
Date of Patent: August 5, 2014
Assignee: Canon Kabushiki Kaisha
Inventors: Isao Sakamoto, Hisashi Ishikawa
-
Patent number: 8799535
Abstract: In one example, multimedia content is requested from a plurality of storage modules. Each storage module retrieves the requested parts, which are typically stored on a plurality of storage devices at each storage module. Each storage module determines independently when to retrieve the requested parts of the data file from storage and transmits those parts from storage to a data queue. Based on a capacity of a delivery module and/or the data rate associated with the request, each storage module transmits the parts of the data file to the delivery module. The delivery module generates a sequenced data segment from the parts of the data file received from the plurality of storage modules and transmits the sequenced data segment to the requester.
Type: Grant
Filed: January 11, 2008
Date of Patent: August 5, 2014
Assignee: Akamai Technologies, Inc.
Inventors: Michael G. Hluchyj, Santosh Krishnan, Christopher Lawler, Ganesh Pai, Umamaheswar Reddy