Input/output Data Buffering Patents (Class 710/52)
-
Patent number: 11968116
Abstract: Methods and systems are provided for performing lossy dropping and ECN marking in a flow-based network. The system can maintain state information of individual packet flows, which can be set up or released dynamically based on injected data. Each flow can be provided with a flow-specific input queue upon arriving at a switch. Packets of a respective flow are acknowledged after reaching the egress point of the network, and the acknowledgement packets are sent back to the ingress point of the flow along the same data path. As a result, each switch can obtain state information of each flow and perform per-flow packet dropping and ECN marking.
Type: Grant
Filed: March 23, 2020
Date of Patent: April 23, 2024
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Jonathan P. Beecroft, Anthony Michael Ford
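The per-flow dropping and ECN marking described in this abstract can be sketched as a toy model. The class name, thresholds, and packet representation below are illustrative assumptions, not values or structures from the patent:

```python
from collections import defaultdict, deque

class PerFlowSwitch:
    """Toy model: each flow gets its own input queue; the switch uses
    per-flow state (queue depth) to ECN-mark or drop packets.
    ecn_threshold and drop_limit are illustrative parameters."""

    def __init__(self, ecn_threshold=4, drop_limit=8):
        self.ecn_threshold = ecn_threshold
        self.drop_limit = drop_limit
        self.queues = defaultdict(deque)   # flow-specific input queues

    def enqueue(self, flow_id, packet):
        q = self.queues[flow_id]
        if len(q) >= self.drop_limit:
            return "dropped"               # per-flow lossy dropping
        if len(q) >= self.ecn_threshold:
            packet["ecn"] = True           # per-flow ECN marking
        q.append(packet)
        return "marked" if packet.get("ecn") else "queued"
```

Because state is keyed by flow, one congested flow is marked and dropped without affecting the queues of other flows through the same switch.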
-
Patent number: 11967958
Abstract: In some embodiments, digital logic components, such as those found in standard cells in integrated circuit devices, are used to synthesize signals with controllable waveforms that result in transmitted signals that meet certain requirements, such as above-threshold high openings and below-threshold over/under-shooting. In some embodiments, driving buffers with logic controls and delay chains are used to achieve controllable slew rates at rising and falling edges to minimize over/under-shooting behavior in signals. In some embodiments, control logic and delay chains produce controllable rising/falling “stair-type” edges to obtain an optimized damping waveform.
Type: Grant
Filed: November 30, 2021
Date of Patent: April 23, 2024
Assignee: TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY, LTD.
Inventors: Huan-Neng Chen, Chang-Fen Hu, Shao-Yu Li
-
Patent number: 11962500
Abstract: A system includes a storage system and circuitry coupled to the storage system. The circuitry is configured to perform operations comprising determining a type of a received data packet, determining a destination of the received data packet, and determining whether the received data packet is of a particular type or has a particular destination. The operations further comprise, responsive to determining that the received data packet is of the particular type or has the particular destination, rerouting the received data packet from the particular destination to a register of the storage system.
Type: Grant
Filed: August 29, 2022
Date of Patent: April 16, 2024
Assignee: Micron Technology, Inc.
Inventors: Aleksei Vlasov, Prateek Sharma, Yoav Weinberg, Scheheresade Virani, Bridget L. Mallak
-
Patent number: 11954036
Abstract: Embodiments include methods, systems and non-transitory computer-readable media including instructions for executing a prefetch kernel that includes memory accesses for prefetching data for a processing kernel into a memory, and, subsequent to executing at least a portion of the prefetch kernel, executing the processing kernel where the processing kernel includes accesses to data that is stored into the memory resulting from execution of the prefetch kernel.
Type: Grant
Filed: November 11, 2022
Date of Patent: April 9, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Nuwan S. Jayasena, James Michael O'Connor, Michael Mantor
-
Patent number: 11947796
Abstract: The present disclosure includes apparatuses and methods related to a memory protocol. An example apparatus can perform operations on a number of block buffers of the memory device based on commands received from a host using a block configuration register, wherein the operations can read data from the number of block buffers and write data to the number of block buffers on the memory device.
Type: Grant
Filed: May 20, 2022
Date of Patent: April 2, 2024
Inventors: Robert M. Walker, James A. Hall, Jr.
-
Patent number: 11947466
Abstract: A nonvolatile memory system is disclosed. The nonvolatile memory system includes a host device and a storage device connected to the host device through a physical cable including a power line and a data line. The storage device includes: a nonvolatile memory; a link controller configured to temporarily deactivate the data line while supplying power from the host device through the power line; and a memory controller including a user verification circuit configured to authenticate a user of the storage device and change a state of the memory controller according to a verification result, a relink trigger circuit configured to control the link controller based on the state change of the memory controller, and a data processing circuit configured to encrypt and decrypt data.
Type: Grant
Filed: January 17, 2023
Date of Patent: April 2, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Hwasoo Lee, Mingon Shin, Seungjae Lee, Myeongjong Ju
-
Patent number: 11949600
Abstract: Networks, systems and methods for dynamically filtering market data are disclosed. Streams of market data may be buffered or stored in a queue when inbound rates exceed distribution or publication limitations. Inclusive messages in the queue may be removed, replaced or aggregated, reducing the number of messages to be published when distribution limitations are no longer exceeded.
Type: Grant
Filed: January 24, 2022
Date of Patent: April 2, 2024
Assignee: Chicago Mercantile Exchange Inc.
Inventors: Paul J. Callaway, Dennis M. Genetski, Adrien Gracia, James Krause, Vijay Menon
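The replace-inclusive-messages idea above amounts to a conflating queue: while publication is rate-limited, a newer message for the same instrument supersedes the queued one, so only the latest state is sent once the limit clears. The keying and replacement policy below are illustrative assumptions:

```python
from collections import OrderedDict

class ConflatingQueue:
    """Sketch of market-data conflation. Messages are keyed by
    instrument; a newer message replaces a queued one for the same key,
    and insertion order of first appearance is preserved."""

    def __init__(self):
        self._latest = OrderedDict()       # instrument -> newest message

    def put(self, msg):
        key = msg["instrument"]
        self._latest.pop(key, None)        # drop the superseded message
        self._latest[key] = msg            # newest goes to the tail

    def drain(self):
        """Publish everything once the rate limit is no longer exceeded."""
        msgs = list(self._latest.values())
        self._latest.clear()
        return msgs
```

Three inbound updates for two instruments drain as two messages, each carrying the most recent price.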
-
Patent number: 11922202
Abstract: A data transmission method includes: obtaining information required for performing an acceleration operation in a virtual input/output ring of a target virtual accelerator, where the information required for performing the acceleration operation uses a predefined data structure, and the data structure occupies one entry of the virtual input/output ring of the target virtual accelerator; determining, according to the information required for performing the acceleration operation, information that can be recognized by the hardware accelerator; and sending the information that can be recognized by the hardware accelerator to the hardware accelerator, where the hardware accelerator is configured to obtain to-be-accelerated data according to the information that can be recognized by the hardware accelerator and perform the acceleration operation on the to-be-accelerated data.
Type: Grant
Filed: November 3, 2021
Date of Patent: March 5, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventor: Lei Gong
-
Patent number: 11900084
Abstract: In variants, the AI/ML development system can include one or more applications, wherein each application can include: one or more components and one or more state storages. Each application can optionally include one or more event loops, one or more shared storages, and/or one or more time schedulers.
Type: Grant
Filed: May 1, 2023
Date of Patent: February 13, 2024
Assignee: Grid.ai, Inc.
Inventors: Williams Falcon, Luca Antiga, Thomas Henri Marceau Chaton, Adrian Wälchli
-
Patent number: 11894814
Abstract: A method of bidirectional amplification of proprietary TDMA (Time-Division Multiple Access) data modulated signals over CATV infrastructure is described. A method of upstream/downstream switching based on carrier detection/measurement originating from the master and slave modems is described, along with upstream/downstream direction switching based on detection of an encoded switching command originating from the master modem.
Type: Grant
Filed: August 3, 2020
Date of Patent: February 6, 2024
Inventor: Ivan Krivokapic
-
Patent number: 11868652
Abstract: Disclosed is a method of allocating a buffer memory to a plurality of data storage zones. In some implementations, the method may include comparing a free buffer space size to a reallocation threshold size that is re-allocable at a reallocation cycle, deallocating, upon a determination that the free buffer space size is smaller than the reallocation threshold size, at least a portion of an occupied buffer space size to create a new free buffer space based on a history of buffer memory utilization of the occupied buffer space, and allocating the existing free buffer space and the new free buffer space to targeted data storage zones based on history of buffer memory utilizations corresponding to the targeted data storage zones.
Type: Grant
Filed: February 25, 2021
Date of Patent: January 9, 2024
Assignee: SK HYNIX INC.
Inventor: Seong Won Shin
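The reallocation cycle described above can be sketched in a few lines: if free space is below the threshold, reclaim from the zone whose history shows the lowest utilization, then grant the freed space to the busiest zone. The function name, unit-sized reclamation, and single-donor policy are illustrative assumptions:

```python
def rebalance(free_size, realloc_threshold, zone_usage, zone_alloc):
    """Sketch of history-based buffer reallocation.
    zone_usage: zone -> recent utilization (0..1) of its allocation.
    zone_alloc: zone -> currently allocated buffer units (mutated).
    Returns the updated allocation and the zone that received space."""
    if free_size < realloc_threshold:
        # Deallocate from the least-utilized occupied zone.
        victim = min(zone_usage, key=zone_usage.get)
        if zone_alloc[victim] > 0:
            zone_alloc[victim] -= 1
            free_size += 1
    # Allocate free space to the zone with the highest utilization history.
    target = max(zone_usage, key=zone_usage.get)
    zone_alloc[target] += free_size
    return zone_alloc, target
```

A zone that rarely touches its buffers gradually donates them to a zone that keeps its allocation busy.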
-
Patent number: 11870698
Abstract: This application discloses a congestion control method and related apparatus. In the congestion control method, a network device first obtains statistical information of a target egress queue within a first time period, where the target egress queue is any target egress queue in the network device. The network device determines an explicit congestion notification (ECN) threshold for the target egress queue within a second time period based on the statistical information of the target egress queue within the first time period, where the second time period is chronologically subsequent to the first time period. When a queue depth of the target egress queue exceeds the ECN threshold within the second time period, the network device sets an ECN mark for a data packet in the target egress queue.
Type: Grant
Filed: December 2, 2021
Date of Patent: January 9, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Zhigang Ji, Di Qu, Yinben Xia, Siyu Yan
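The two-period scheme above separates measurement from marking: statistics gathered in period one produce the threshold applied in period two. The specific formula below (bandwidth-delay product nudged down by the observed standing queue, clamped to a range) is an illustrative assumption, not the patent's algorithm:

```python
def next_ecn_threshold(avg_depth, dequeue_rate, base_rtt, lo, hi):
    """Derive the second period's ECN threshold from first-period
    egress-queue statistics. All inputs are in consistent units
    (e.g. packets and packet-times); the mapping is illustrative."""
    bdp = dequeue_rate * base_rtt          # bandwidth-delay product
    threshold = bdp - 0.5 * avg_depth      # standing queue -> mark earlier
    return max(lo, min(hi, threshold))     # clamp to a sane range

def mark_if_needed(queue_depth, threshold):
    """Second period: set the ECN mark when depth exceeds the threshold."""
    return queue_depth > threshold
```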
-
Patent number: 11861337
Abstract: A method of compiling neural network code to executable instructions for execution by a computational acceleration system having a memory circuit and one or more acceleration circuits having a maps data buffer and a kernel data buffer is disclosed, such as for execution by an inference engine circuit architecture which includes a matrix-matrix (MM) accelerator circuit having multiple operating modes to provide a complete matrix multiplication. A representative compiling method includes generating a list of neural network layer model objects; fusing available functions and layers in the list; selecting a cooperative mode, an independent mode, or a combined cooperative and independent mode for execution; selecting a data movement mode and an ordering of computations which reduces usage of the memory circuit; generating an ordered sequence of load objects, compute objects, and store objects; and converting the ordered sequence of load objects, compute objects, and store objects into the executable instructions.
Type: Grant
Filed: August 26, 2020
Date of Patent: January 2, 2024
Assignee: Micron Technology, Inc.
Inventors: Andre Xian Ming Chang, Aliasger Zaidy, Eugenio Culurciello, Marko Vitez
-
Patent number: 11854630
Abstract: A storage device is provided which shares a host memory with a host. The storage device includes an interface that exchanges data with the host and implements a protocol to use a partial area of the host memory as a buffer of the storage device. A storage controller of the storage device monitors deterioration information of a first area of the buffer and transmits a corruption prediction notification associated with the first area to the host based on a result of the monitoring.
Type: Grant
Filed: September 26, 2022
Date of Patent: December 26, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Dong-Ryoul Lee, Hyun Ju Yi, Jaeho Sim, Kicheol Eom, Hyotaek Leem
-
Patent number: 11841819
Abstract: Provided are a Peripheral Component Interconnect Express (PCIe) interface device and a method of operating the same. The PCIe interface device includes a first buffer, a second buffer, and a buffer controller. The first buffer may be configured to store a plurality of first transaction layer packets received from multiple functions. The second buffer may be configured to store a plurality of second transaction layer packets received from the multiple functions. The buffer controller may be configured to, when a first buffer of a switch is full, realign an order in which the plurality of second transaction layer packets are to be output from the second buffer to the switch, based on IDs of the plurality of second transaction layer packets.
Type: Grant
Filed: September 3, 2021
Date of Patent: December 12, 2023
Assignee: SK hynix Inc.
Inventor: Yong Tae Jeon
-
Patent number: 11838253
Abstract: The techniques disclosed herein provide dynamic permissions for controlling the display of messages directed to a presenter of a communication system. For example, during a presentation of an online meeting, a system may selectively permit private messages to be sent to a presenter from designated participants. The private messages sent from the designated participants are displayed to the presenter in a manner that does not allow the other participants to see the messages. For instance, if the presenter is sharing a screen from a computer, the system can determine a set of permitted users allowed to send messages to the presenter. The system configures permissions to cause the messages to be displayed in a manner that allows the presenter to view the messages along with their presentation content, while filtering pixels of the messages on the display of non-permitted users.
Type: Grant
Filed: July 16, 2022
Date of Patent: December 5, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Bahram Ali, Fehmi Chebil
-
Patent number: 11836549
Abstract: Computer-implemented techniques for fast block-based parallel message passing interface (MPI) transpose are disclosed. The techniques achieve an in-place parallel matrix transpose of an input matrix in a distributed-memory multiprocessor environment with reduced consumption of computer processing time and storage media resources. An in-memory copy of the input matrix or a submatrix thereof to use as the send buffer for MPI send operations is not needed. Instead, by dividing the input matrix in-place into data blocks having up to at most a predetermined size and sending the corresponding data block(s) for a given submatrix using an MPI API before receiving any data block(s) for the given submatrix using an MPI API in the place of the sent data block(s), making the in-memory copy to use as a send buffer can be avoided and yet the input matrix can be transposed in-place.
Type: Grant
Filed: October 15, 2020
Date of Patent: December 5, 2023
Assignee: Advanced Micro Devices, Inc.
Inventor: Samantray Biplab Raut
-
Patent number: 11805066
Abstract: A scheduler in a network device serves ports with data units from a plurality of queues. The scheduler implements a scheduling algorithm that is normally constrained to releasing data to a port no more frequently than at a default maximum service rate. However, when data units smaller than a certain size are at the heads of one or more data unit queues assigned to a port, the scheduler may temporarily increase the maximum service rate of that port. The increased service rate permits fuller realization of a port's maximum bandwidth when handling smaller data units. In some embodiments, increasing the service rate involves dequeuing more than one small data unit at a time, with the extra data units temporarily stored in a port FIFO. The scheduler adds a pseudo-port to its scheduling sequence to schedule release of data from the port FIFO, with otherwise minimal impact on the scheduling logic.
Type: Grant
Filed: January 4, 2021
Date of Patent: October 31, 2023
Assignee: Innovium, Inc.
Inventors: Ajit Kumar Jain, Ashwin Alapati
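The small-packet boost above can be sketched as a single service step: the port's normal service releases one data unit, and when small units sit at the queue head, extra units are dequeued into a port FIFO for later release via the pseudo-port. The size threshold, burst count, and representation of data units as byte lengths are illustrative assumptions:

```python
from collections import deque

SMALL = 64  # illustrative small-unit size threshold, in bytes

def serve_port(queue, port_fifo, burst=2):
    """One scheduler service of a port. Returns the data unit released
    at the default service rate; small units found at the head are
    additionally moved into port_fifo (drained by a pseudo-port later)."""
    if not queue:
        return None
    unit = queue.popleft()                 # normal single dequeue
    extra = 0
    while queue and queue[0] <= SMALL and extra < burst:
        port_fifo.append(queue.popleft())  # temporarily boosted rate
        extra += 1
    return unit
```

One service slot thus moves three small units out of the queue instead of one, while large units still pace at the default rate.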
-
Patent number: 11792624
Abstract: Apparatuses, systems, and methods related to accessing a memory resource at one or more physically remote entities are described. A system accessing a memory resource at one or more physically remote entities may enable performance of functions, including automated functions critical for prevention of damage to a product, personnel safety, and/or reliable operation, based on increased access to data that may improve performance of a mission profile.
Type: Grant
Filed: December 6, 2021
Date of Patent: October 17, 2023
Inventor: Aaron P. Boehm
-
Patent number: 11782871
Abstract: In one implementation, a vector processor unit has preload registers for at least some of vector length, vector constant, vector address, and vector stride. Each preload register has an input and an output. All the preload register inputs are coupled to receive new vector parameters. Each of the preload registers' outputs is coupled to a first input of a respective multiplexor, and the second input of all the respective multiplexors is coupled to the new vector parameters.
Type: Grant
Filed: March 22, 2022
Date of Patent: October 10, 2023
Assignee: Microchip Technology Inc.
Inventor: Christopher I. W. Norrie
-
Patent number: 11750334
Abstract: The data collection management device (10) is connected via a network to a plurality of communication devices (20) performing cyclic communication and includes: a network configuration storage (17) to store network configuration information indicating the communication devices participating in the cyclic communication; a data receiving unit (11) to receive communication data multicast from each communication device (20); a received data storage (12) to store the received communication data as collected data; a received data determination unit (13) to determine whether there is missing data in the collected data and identify unreceived communication data, based on information specifying communication cycles included in the collected data, on information specifying sender communication devices included in the collected data, and on network configuration information; and a retransmission requesting unit (15) to transmit a retransmission request of the unreceived communication data to one of the plurality of communication devices.
Type: Grant
Filed: December 25, 2019
Date of Patent: September 5, 2023
Assignee: MITSUBISHI ELECTRIC CORPORATION
Inventor: Yuki Nakano
-
Patent number: 11743312
Abstract: A transmission method includes: generating a stream including a plurality of Internet Protocol (IP) data flows corresponding one-to-one with a plurality of services in broadcast, the IP data flows storing the corresponding services of the plurality of services; and transmitting the generated stream in a predetermined channel.
Type: Grant
Filed: February 21, 2017
Date of Patent: August 29, 2023
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Noritaka Iguchi, Tadamasa Toma
-
Patent number: 11734221
Abstract: An embodiment processing system comprises a queued SPI circuit, which comprises a hardware SPI communication interface, an arbiter and a plurality of interface circuits. Each interface circuit comprises a transmission FIFO memory, a reception FIFO memory and an interface control circuit. The interface control circuit is configured to receive first data packets and store them to the transmission FIFO memory. The interface control circuit sequentially reads the first data packets from the transmission FIFO memory, extracts at least one transmission data word, and provides the extracted word to the arbiter. The interface control circuit receives from the arbiter a reception data word and stores second data packets comprising the received reception data word to the reception FIFO memory. The interface control circuit sequentially reads the second data packets from the reception FIFO memory and transmits them to the digital processing circuit.
Type: Grant
Filed: March 11, 2021
Date of Patent: August 22, 2023
Assignees: STMICROELECTRONICS APPLICATION GMBH, STMICROELECTRONICS DESIGN AND APPLICATION S.R.O.
Inventors: Rolf Nandlinger, Radek Olexa
-
Patent number: 11734224
Abstract: Methods and systems for executing an application data flow graph on a set of computational nodes are disclosed. The computational nodes can each include a programmable controller from a set of programmable controllers, a memory from a set of memories, a network interface unit from a set of network interface units, and an endpoint from a set of endpoints. A disclosed method comprises configuring the programmable controllers with instructions. The method also comprises independently and asynchronously executing the instructions using the set of programmable controllers in response to a set of events exchanged between the programmable controllers themselves, between the programmable controllers and the network interface units, and between the programmable controllers and the set of endpoints. The method also comprises transitioning data in the set of memories on the computational nodes in accordance with the application data flow graph and in response to the execution of the instructions.
Type: Grant
Filed: September 28, 2020
Date of Patent: August 22, 2023
Assignee: Tenstorrent Inc.
Inventors: Ivan Matosevic, Davor Capalija, Jasmina Vasiljevic, Utku Aydonat, S. Alexander Chin, Djordje Maksimovic, Ljubisa Bajic
-
Patent number: 11704302
Abstract: A method in a job processing server of processing database updates includes: storing, at a job processing server, a job queue including a plurality of job records, each job record having corresponding job parameters; detecting job initiation data at a data source; responsive to detecting the job initiation data, retrieving new job parameters from the data source based on the job initiation data; creating a new job record including the new job parameters in the job queue; and responsive to a predefined trigger, for each job in the job queue, processing the job based on corresponding job parameters, wherein processing the job includes sending instructions for execution by a second server, the instructions for performing an update at the second server.
Type: Grant
Filed: August 8, 2018
Date of Patent: July 18, 2023
Assignee: PERRY + CURRIER INC.
Inventors: Christina S. Lee, Robert Cotran, Robert Shek, Thomas Andrew Currier
-
Patent number: 11700004
Abstract: A bi-directional Gray code counter includes a first set of logic circuitry configured to receive an input having a first sequence of bits representing a first value. The first set of logic circuitry is further configured to convert the first sequence of bits to a second sequence of bits representing the first value. The bi-directional Gray code counter further includes a second set of logic circuitry and a third set of logic circuitry. The second set of logic circuitry is configured to compare the second sequence of bits to a bit index pattern. The third set of logic circuitry is configured to transition one bit in the first sequence of bits from a first state to a second state to form a third sequence of bits representing a second value. The one bit is transitioned in response to the second sequence of bits being compared to the bit index pattern.
Type: Grant
Filed: January 28, 2022
Date of Patent: July 11, 2023
Assignee: ADVANCED MICRO DEVICES (SHANGHAI) CO., LTD.
Inventor: HaiFeng Zhou
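The defining property of such a counter is that exactly one bit transitions per step in either direction. The patent's hardware picks the bit by matching a converted bit sequence against an index pattern; the software sketch below reaches the same sequence the simpler way (decode to binary, step, re-encode), which is a stand-in rather than the patented circuit:

```python
def gray_to_bin(g):
    """Decode a reflected Gray code to plain binary."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

def bin_to_gray(b):
    """Encode plain binary as reflected Gray code."""
    return b ^ (b >> 1)

def gray_step(g, width, down=False):
    """One bi-directional counter step: exactly one bit of g changes."""
    b = gray_to_bin(g)
    b = (b + (-1 if down else 1)) % (1 << width)
    return bin_to_gray(b)
```

For width 2 the up-sequence is 00, 01, 11, 10 and wraps; stepping down reverses it, which is why Gray counters are the usual choice for FIFO pointers crossing clock domains.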
-
Patent number: 11681615
Abstract: A method of managing a garbage collection (GC) operation includes: selecting a source block and at least one candidate source block from the flash memory; calculating an overall valid page percentage according to a number of valid pages in the source block and the at least one candidate source block; determining a GC-to-host base ratio according to the overall valid page percentage; and performing the GC operation on the source block according to at least the GC-to-host base ratio.
Type: Grant
Filed: March 19, 2021
Date of Patent: June 20, 2023
Assignee: Silicon Motion, Inc.
Inventor: Tzu-Yi Yang
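The intuition behind a GC-to-host base ratio is that blocks holding mostly valid pages cost more copy work per reclaimed page, so GC must keep pace with more work per host write. The mapping below (write-amplification-style p/(1-p), capped) is an illustrative assumption, not Silicon Motion's formula:

```python
def gc_to_host_ratio(valid_pages, total_pages, max_ratio=4.0):
    """Sketch: derive a GC-to-host base ratio from the overall valid
    page percentage of the source and candidate source blocks.
    More valid data -> more GC work interleaved per host write."""
    p = valid_pages / total_pages          # overall valid page percentage
    if p >= 1.0:
        return max_ratio                   # nothing reclaimable; cap
    return min(max_ratio, p / (1.0 - p))
```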
-
Patent number: 11669272
Abstract: A memory sub-system configured to predictively schedule the transfer of data to reduce idle time and the amount and time of data being buffered in the memory sub-system. For example, write commands received from a host system can be queued without buffering the data of the write commands at the same time. When executing a first write command using a media unit, the memory sub-system can predict a duration to a time the media unit becoming available for execution of a second write command. The communication of the data of the second command from the host system to a local buffer memory of the memory sub-system can be postponed and initiated according to the predicted duration. After the execution of the first write command, the second write command can be executed by the media unit without idling to store the data from the local buffer memory.
Type: Grant
Filed: May 4, 2020
Date of Patent: June 6, 2023
Assignee: Micron Technology, Inc.
Inventors: Sanjay Subbarao, Steven S. Williams, Mark Ish
-
Patent number: 11646751
Abstract: Apparatuses, systems, and methods for multi-bit error detection. A memory device may store data bits and parity bits in a memory array. An error correction code (ECC) circuit may generate syndrome bits based on the data and parity bits and use the syndrome bits to correct up to a single bit error in the data and parity bits. A multi-bit error (MBE) detection circuit may detect an MBE in the data and parity based on at least one of the syndrome bits or the parity bits. For example, the MBE detection circuit may determine if the syndrome bits have a mapped or unmapped state and/or may compare the parity bits, data bits, and an additional parity bit to determine if there is an MBE. When an MBE is detected an MBE signal is activated. In some embodiments, an MBE flag may be set based on the MBE signal being active.
Type: Grant
Filed: June 15, 2021
Date of Patent: May 9, 2023
Assignee: Micron Technology, Inc.
Inventors: Markus H. Geiger, Matthew A. Prather, Sujeet Ayyapureddi, C. Omar Benitez, Dennis Montierth
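The syndrome-plus-extra-parity idea above is the classic SECDED scheme: a nonzero syndrome with correct overall parity cannot be a single-bit error, so it flags a multi-bit error. A textbook extended Hamming (8,4) code, used here as a stand-in for the patent's ECC, shows the mechanism:

```python
# Positions 1..7 hold a Hamming(7,4) codeword; position 0 is the
# additional overall parity bit from the SECDED extension.
DATA_POS = [3, 5, 6, 7]
PARITY_POS = [1, 2, 4]

def encode(nibble):
    """Encode a 4-bit value into an 8-bit SECDED codeword (list of bits)."""
    code = [0] * 8
    for i, pos in enumerate(DATA_POS):
        code[pos] = (nibble >> i) & 1
    for p in PARITY_POS:
        # Even parity over every position whose index has bit p set.
        code[p] = sum(code[i] for i in range(1, 8) if i & p) % 2
    code[0] = sum(code) % 2                # overall parity bit
    return code

def check(code):
    """Classify a received codeword: ok, correctable single error, or MBE."""
    syndrome = 0
    for p in PARITY_POS:
        if sum(code[i] for i in range(1, 8) if i & p) % 2:
            syndrome |= p
    parity_ok = sum(code) % 2 == 0
    if syndrome and parity_ok:
        return "MBE"                       # unmapped state: activate MBE signal
    if syndrome:
        return f"single@{syndrome}"        # syndrome names the flipped bit
    return "ok" if parity_ok else "single@0"
```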
-
Patent number: 11620377
Abstract: A physically-tagged data cache memory mitigates side channel attacks by using a translation context (TC). With each entry allocation, control logic uses the received TC to perform the allocation, and with each access uses the received TC in a hit determination. The TC includes an address space identifier (ASID), virtual machine identifier (VMID), a privilege mode (PM) or translation regime (TR), or combination thereof. The TC is included in a tag of the allocated entry. Alternatively, or additionally, the TC is included in the set index to select a set of entries of the cache memory. Also, the TC may be hashed with address index bits to generate a small tag also included in the allocated entry used to generate an access early miss indication and way select.
Type: Grant
Filed: August 27, 2020
Date of Patent: April 4, 2023
Assignee: Ventana Micro Systems Inc.
Inventors: John G. Favor, Srivatsan Srinivasan
-
Patent number: 11614889
Abstract: An operation combiner receives a series of commands with read addresses, a modification operation, and write addresses. In some cases, the commands have serial dependencies that limit the rate at which they can be processed. The operation combiner compares the addresses for compatibility, transforms the operations to break serial dependencies, and combines multiple source commands into a smaller number of aggregate commands that can be executed much faster than the source commands. Some embodiments of the operation combiner receive a first command including one or more first read addresses and a first write address. The operation combiner compares the first read addresses and the first write address to one or more second read addresses and a second write address of a second command stored in a buffer. The operation combiner selectively combines the first and second commands to form an aggregate command based on the comparison.
Type: Grant
Filed: November 29, 2018
Date of Patent: March 28, 2023
Assignee: Advanced Micro Devices, Inc.
Inventor: Christopher J. Brennan
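The combining step above can be sketched by modelling each command as (read addresses, modification, write address) and treating the modification as an associative "add amount" — an illustrative simplification, since only compatible operations can legally merge. Two commands writing the same address collapse into one aggregate command, removing the serial read-after-write dependency between them:

```python
def combine(cmds):
    """Merge commands with matching write addresses into aggregate
    commands. cmds: iterable of (read_addrs, amount, write_addr);
    returns the reduced command list in first-seen write order."""
    merged, order = {}, []
    for reads, amount, write in cmds:
        if write in merged:
            prev_reads, prev_amount = merged[write]
            # Compatible writes: fold the modifications together.
            merged[write] = (prev_reads + list(reads), prev_amount + amount)
        else:
            merged[write] = (list(reads), amount)
            order.append(write)
    return [(merged[w][0], merged[w][1], w) for w in order]
```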
-
Patent number: 11609821
Abstract: A fault recovery system including a fault controller is disclosed. The fault controller is coupled between a processor and an interconnect, and configured to receive a time-out signal that is indicative of a failure of the processor to execute a transaction after a fault is detected in the processor. The failure in the execution of the transaction results in queuing of the interconnect. Based on the time-out signal, the fault controller is further configured to generate and transmit a control signal to the processor to disconnect the processor from the interconnect. Further, the fault controller is configured to execute the transaction, and in turn, dequeue the interconnect. When the transaction is successfully executed, the fault controller is further configured to generate a status signal to reset the processor, thereby managing a fault recovery of the processor.
Type: Grant
Filed: September 30, 2020
Date of Patent: March 21, 2023
Assignee: NXP USA, Inc.
Inventors: Ankur Behl, Neha Srivastava
-
Patent number: 11609868
Abstract: One example system for preventing data loss during memory blackout events comprises a memory device, a sensor, and a controller operably coupled to the memory device and the sensor. The controller is configured to perform one or more operations that coordinate at least one memory blackout event of the memory device and at least one data transmission of the sensor.
Type: Grant
Filed: December 31, 2020
Date of Patent: March 21, 2023
Assignee: Waymo LLC
Inventors: Sabareeshkumar Ravikumar, Daniel Rosenband
-
Patent number: 11604751
Abstract: Embodiments herein describe techniques for preventing a stall when transmitting data between a producer and a consumer in the same integrated circuit (IC). A stall can occur when there is a split point and a convergence point between the producer and consumer. To prevent the stall, the embodiments herein adjust the latencies of one of the paths (or both paths) such that a maximum latency of the shorter path is greater than, or equal to, the minimum latency of the longer path. When this condition is met, this means the shortest path has sufficient buffers (e.g., a sufficient number of FIFOs and registers) to queue/store packets along its length so that a packet can travel along the longer path and reach the convergence point before the buffers in the shortest path are completely full (or just become completely full).
Type: Grant
Filed: May 10, 2021
Date of Patent: March 14, 2023
Assignee: XILINX, INC.
Inventors: Brian Guttag, Nitin Deshmukh, Sreesan Venkatakrishnan, Satish Sivaswamy
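The stall-free condition above reduces to a one-line check over the two paths' latency ranges, plus a count of how much buffering to add when it fails. The assumption that one added register contributes one cycle of maximum latency is an illustrative model, not a claim from the patent:

```python
def stall_free(path_a, path_b):
    """path_a, path_b: (min_latency, max_latency) between the split and
    convergence points. Stall-free when the shorter path's maximum
    latency >= the longer path's minimum latency."""
    shorter, longer = sorted([path_a, path_b])  # order by min latency
    return shorter[1] >= longer[0]

def buffers_needed(shorter_max, longer_min):
    """Extra buffering for the shorter path, assuming each register
    adds one cycle to its maximum latency (illustrative model)."""
    return max(0, longer_min - shorter_max)
```

With paths (2, 4) and (6, 9), the check fails and two registers close the gap, after which (2, 6) versus (6, 9) satisfies the condition.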
-
Patent number: 11606317
Abstract: Sharing integrated circuit (IC) resources can include receiving, within a communication endpoint of an IC, a plurality of packets from a plurality of different source virtual entities, determining packet handling data for each packet of the plurality of packets using an acceleration function table stored within the IC, routing each packet of the plurality of packets to one or more selected function circuit blocks of a plurality of function circuit blocks in the IC based on the packet handling data of each respective packet, and processing the plurality of packets using the one or more selected function circuit blocks, generating a plurality of results corresponding to respective ones of the plurality of packets. The plurality of results are queued within the communication endpoint. Each result is queued based on the packet handling data of the corresponding packet.
Type: Grant
Filed: April 14, 2021
Date of Patent: March 14, 2023
Assignee: Xilinx, Inc.
Inventors: Seong Hwan Kim, Zhiyi Sun, Robert Earl Nertney
-
Patent number: 11586559
Abstract: A nonvolatile memory system is disclosed. The nonvolatile memory system includes a host device and a storage device connected to the host device through a physical cable including a power line and a data line. The storage device includes: a nonvolatile memory; a link controller configured to temporarily deactivate the data line while supplying power from the host device through the power line; and a memory controller including a user verification circuit configured to authenticate a user of the storage device and change a state of the memory controller according to a verification result, a relink trigger circuit configured to control the link controller based on the state change of the memory controller, and a data processing circuit configured to encrypt and decrypt data.
Type: Grant
Filed: September 21, 2020
Date of Patent: February 21, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Hwasoo Lee, Mingon Shin, Seungjae Lee, Myeongjong Ju
-
Patent number: 11573915
Abstract: A method of operating a storage device includes receiving, from a host, a first packet containing a buffer address indicating a location of a data buffer selected from among a plurality of data buffers in the host, parsing the buffer address from the first packet, and transmitting a second packet containing the buffer address to the host in response to the first packet.
Type: Grant
Filed: June 28, 2021
Date of Patent: February 7, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Young-Min Lee, Sung-Ho Seo, Hwa-Seok Oh, Kyung-Phil Yoo, Seong-Yong Jang
-
Patent number: 11575616
Abstract: A packet forwarding device and a queue management method are provided. The queue management method is applicable to a plurality of priority queues each associated with a different transmission priority. The queue management method includes: allocating at least one buffer from a free buffer pool to each of the priority queues; monitoring a number of dropped packets of an observation queue of the priority queues; and increasing a number of buffers for the observation queue and decreasing a number of buffers for at least one of the priority queues which has a lower transmission priority than the observation queue, according to the number of dropped packets.
Type: Grant
Filed: April 26, 2021
Date of Patent: February 7, 2023
Assignee: REALTEK SINGAPORE PTE LTD.
Inventors: Donggun Keung, Charles Chen
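The reallocation step in the abstract above can be sketched as follows. This is a minimal illustration with hypothetical names (`QueueManager`, `drop_threshold`); real hardware would move buffer descriptors rather than Python list entries, and the patent does not specify the threshold policy.

```python
# Sketch of dropped-packet-driven buffer reallocation (hypothetical names).
class QueueManager:
    def __init__(self, num_queues, buffers_per_queue, drop_threshold=10):
        # Queue 0 has the highest transmission priority.
        self.buffer_count = [buffers_per_queue] * num_queues
        self.dropped = [0] * num_queues
        self.drop_threshold = drop_threshold

    def on_packet_dropped(self, queue_id):
        self.dropped[queue_id] += 1
        if self.dropped[queue_id] >= self.drop_threshold:
            self._grow_queue(queue_id)
            self.dropped[queue_id] = 0

    def _grow_queue(self, queue_id):
        # Take one buffer from the lowest-priority queue that still has a
        # spare buffer and give it to the observed (dropping) queue.
        for donor in range(len(self.buffer_count) - 1, queue_id, -1):
            if self.buffer_count[donor] > 1:
                self.buffer_count[donor] -= 1
                self.buffer_count[queue_id] += 1
                return True
        return False
```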
-
Patent number: 11567771
Abstract: A system for processing gather and scatter instructions can implement a front-end subsystem, a back-end subsystem, or both. The front-end subsystem includes a prediction unit configured to determine a predicted quantity of coalesced memory access operations required by an instruction. A decode unit converts the instruction into a plurality of access operations based on the predicted quantity, and transmits the plurality of access operations and an indication of the predicted quantity to an issue queue. The back-end subsystem includes a load-store unit that receives a plurality of access operations corresponding to an instruction, determines a subset of the plurality of access operations that can be coalesced, and forms a coalesced memory access operation from the subset. A queue stores multiple memory addresses for a given load-store entry to provide for execution of coalesced memory accesses.
Type: Grant
Filed: July 30, 2020
Date of Patent: January 31, 2023
Assignees: Marvell Asia PTE, LTD., Cray Inc.
Inventors: Harold Wade Cain, III, Nagesh Bangalore Lakshminarayana, Daniel Jonathan Ernst, Sanyam Mehta
-
Patent number: 11567934
Abstract: An approach for implementing function semantic based partition-wise SQL execution and partition pruning in a data processing system is provided. The system receives a query directed to a range-partitioned table and determines if operation key(s) of the query include function(s) over the table partitioning key(s). If so, the system obtains a set of values corresponding to each partition by evaluating the function(s) on a low bound and/or a high bound table partitioning key value corresponding to the partition. The system may then compare the sets of values corresponding to different partitions and determine whether to aggregate results obtained by executing the query over the partitions based on the comparison. The system may also determine whether to prune any partitions from processing based on a set of correlations between the set of values for each partition and predicate(s) of the query including function(s) over the table partitioning key(s).
Type: Grant
Filed: April 20, 2018
Date of Patent: January 31, 2023
Inventors: Mehul Bastawala, Atrayee Mullick, George Eadon, Ramesh Kumar
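For the special case of a monotonic function over the partitioning key, the pruning step reduces to a bracket check: evaluating the function at a partition's low and high bounds brackets every value the function can take inside that partition. A sketch under that assumption (the patent covers more general correlations, and the names here are hypothetical):

```python
# Partition pruning via function evaluation on range-partition bounds,
# assuming f is monotonically non-decreasing over the partitioning key.
def prune_partitions(partitions, f, predicate_value):
    """partitions: list of (low_key, high_key) bounds per partition.
    Return indices of partitions where f(key) == predicate_value is possible."""
    kept = []
    for idx, (low, high) in enumerate(partitions):
        lo_val, hi_val = f(low), f(high)     # brackets f over the partition
        if lo_val <= predicate_value <= hi_val:
            kept.append(idx)
    return kept
```

With `f = key // 10` over partitions (0-9), (10-19), (20-29) and a predicate `f(key) = 1`, only the middle partition survives pruning.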
-
Patent number: 11567697
Abstract: A method, computer program product and computer system are provided. A processor receives a host input/output write operation, wherein the host input/output write operation includes host metadata regarding data represented by the host input/output write operation. The processor stores the host input/output write operation in one or more physical storage data units. A processor assigns a priority to the one or more physical storage data units. In response to receiving a host volume delete command associated with at least one of the one or more physical storage data units, a processor prioritizes data units of the host volume for deletion based, at least in part, on the assigned priority of the data units of the host volume, wherein data units with a lower priority are permanently deleted before data units with a higher priority.
Type: Grant
Filed: November 12, 2020
Date of Patent: January 31, 2023
Assignee: International Business Machines Corporation
Inventors: Ben Sasson, Paul Nicholas Cashman, Gemma Izen
-
Patent number: 11567884
Abstract: Systems and methods are disclosed for efficient management of bus bandwidth among multiple drivers. An example method may comprise: receiving a request from a driver to write data via a bus; reading contents of a random access memory (RAM) at a specified interval of time to determine whether the data written by the driver is accumulated in the RAM; responsive to determining that the data written by the driver is accumulated in the RAM, determining whether a bandwidth of the bus satisfies a bandwidth condition; and responsive to determining that the bandwidth satisfies the bandwidth condition, forwarding, via the bus, a portion of the data written by the driver in the RAM to a device memory of a device.
Type: Grant
Filed: July 26, 2021
Date of Patent: January 31, 2023
Assignee: Red Hat, Inc.
Inventor: Michael Tsirkin
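One polling iteration of that method can be sketched as below. The byte budget standing in for the bandwidth condition, and the function name, are assumptions for illustration; the patent leaves the condition abstract.

```python
# One drain step of the bandwidth-gated forwarding loop: check whether any
# driver data has accumulated, check the bandwidth condition, and forward
# only a portion of the accumulated data. (Hypothetical simplification.)
def drain_step(ram_buffer, bus_budget_bytes):
    """Return (forwarded_chunk, remaining_buffer)."""
    if not ram_buffer:
        return b"", ram_buffer            # nothing accumulated yet
    if bus_budget_bytes <= 0:
        return b"", ram_buffer            # bandwidth condition not satisfied
    portion = ram_buffer[:bus_budget_bytes]
    return portion, ram_buffer[bus_budget_bytes:]
```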
-
Patent number: 11570127
Abstract: An ingress packet processor in a device corresponds to a group of ports and receives network packets from ports in its port group. A traffic manager in the device manages buffers storing packet data for transmission to egress packet processors. An ingress arbiter is associated with a port group and connects the port group to an ingress packet processor coupled to the ingress arbiter. The ingress arbiter determines a traffic rate at which the associated ingress packet processor transmits packets to the traffic manager. The ingress arbiter controls an associated traffic shaper to generate a number of tokens that are assigned to the port group. Upon receiving packet data from a port in the group, the ingress arbiter determines, using information from the traffic shaper, whether a token is available. Conditioned on determining that a token is available, the ingress arbiter forwards the packet data to the ingress packet processor.
Type: Grant
Filed: November 4, 2021
Date of Patent: January 31, 2023
Assignee: Innovium, Inc.
Inventors: William Brad Matthews, Puneet Agarwal
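The token check described above matches the classic token-bucket shape, sketched here with hypothetical parameters (rate, burst cap): tokens accrue at the sustainable processor rate, and packet data is forwarded only when at least one token is available.

```python
# Token-bucket sketch of the arbiter's shaper check (hypothetical names).
class TrafficShaper:
    def __init__(self, rate_tokens_per_sec, burst):
        self.rate = rate_tokens_per_sec
        self.burst = burst
        self.tokens = burst
        self.last = 0.0

    def try_forward(self, now):
        # Accrue tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True      # token available: forward to the packet processor
        return False         # no token: hold the packet data at the arbiter
```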
-
Patent number: 11561694
Abstract: An arithmetic processor includes a memory access controller configured to control access to a memory based on a memory access request. The memory access controller includes a shift register configured to shift a resource number and a memory access request from a first stage to a subsequent stage at a timing according to an operation mode, the first stage receiving the resource number and the memory access request. The memory access controller also includes a plurality of memory access transmitting circuits configured to receive the resource number and the memory access request held by the plurality of stages. Each of the plurality of memory access transmitting circuits is provided corresponding to one of the plurality of resource numbers, and outputs, to the memory, an access command corresponding to the memory access request when the received resource number matches the resource number of that memory access transmitting circuit.
Type: Grant
Filed: April 27, 2021
Date of Patent: January 24, 2023
Assignee: FUJITSU LIMITED
Inventors: Yuji Kondo, Naozumi Aoki
-
Patent number: 11556303
Abstract: A digital signal processing device includes a control unit that performs control to alternately burst transfer burst length audio data in a first half area of a first buffer memory and burst length audio data in a second half area of the first buffer memory to a DRAM, in which the control unit performs control to burst transfer the burst length audio data in the first half area of the first buffer memory to the DRAM while writing audio data one word at a time to the second half area of the first buffer memory in sequence and performs control to burst transfer the burst length audio data in the second half area of the first buffer memory to the DRAM while writing audio data one word at a time to the first half area of the first buffer memory in sequence.
Type: Grant
Filed: July 14, 2021
Date of Patent: January 17, 2023
Assignee: KABUSHIKI KAISHA KAWAI GAKKI SEISAKUSHO
Inventor: Seiji Okamoto
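This is the classic ping-pong (double-half) buffering pattern: while one half of the buffer is burst-transferred to DRAM, incoming words fill the other half. A pure-Python illustration of the alternation (the generator name and word source are assumptions; the patent concerns hardware audio data paths):

```python
# Ping-pong buffering sketch: words fill one half of a 2*half_len buffer
# while the other full half is "burst transferred" (yielded) to DRAM.
def pingpong(words, half_len):
    """Yield each buffer half, in burst order, as it becomes full."""
    buf = [None] * (2 * half_len)
    for i, w in enumerate(words):
        half = (i // half_len) % 2           # 0 = first half, 1 = second half
        buf[half * half_len + (i % half_len)] = w
        if (i + 1) % half_len == 0:          # this half is now full: burst it
            yield list(buf[half * half_len:(half + 1) * half_len])
```

The point of the alternation is that writes never stall behind the DRAM burst: the burst always drains the half that is not currently being written.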
-
Patent number: 11550733
Abstract: Disclosed are methods, systems and devices for storing states in a memory in support of applications residing in a trusted execution environment (TEE). In an implementation, one or more memory devices accessible by a memory controller may be shared between and/or among processes in an untrusted execution environment (UEE) and a TEE.
Type: Grant
Filed: July 1, 2020
Date of Patent: January 10, 2023
Assignee: Arm Limited
Inventors: Richard Andrew Paterson, Rainer Herberholz, Peter Andrew Rees Williams, Oded Golombek, Einat Luko
-
Patent number: 11537312
Abstract: An apparatus comprises a source system comprising a distribution layer, a management component and a plurality of replication components. The distribution layer is configured to obtain an input-output operation corresponding to an address and to identify a given replication component that corresponds to the address based at least in part on a distribution instance. The distribution layer is configured to assign a first distribution identifier corresponding to the distribution instance to the input-output operation and to provide the input-output operation to the given replication component with the first distribution identifier. The given replication component is configured to obtain a second distribution identifier from the management component and to determine whether or not the first distribution identifier is equal to the second distribution identifier.
Type: Grant
Filed: May 5, 2021
Date of Patent: December 27, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Adi Bar Shalom, Zeev Shusterman, Lior Zilpa, German Goft, Oren Ashkenazi
-
Patent number: 11537453
Abstract: Methods and systems for managing a circular queue, or ring buffer, are disclosed. One method includes storing data from a producer into the ring buffer, and receiving a data read request from a consumer from among a plurality of consumers subscribed to read data from the ring buffer. After obtaining data from a location in the ring buffer in response to the data read request, it is determined if the location has been overrun by the producer. If it is determined that the location has been overrun by the producer, the data is discarded by the consumer. Otherwise, the data is consumed. Depending on the outcome, a miss counter or a read counter may be incremented.
Type: Grant
Filed: March 30, 2020
Date of Patent: December 27, 2022
Assignee: Target Brands, Inc.
Inventors: Luis F. Stevens, Hrishikesh V. Prabhune, Christopher Fretz
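The overrun check can be sketched with sequence counters: the consumer compares its read position against the producer's write position after fetching a slot, and if the producer has lapped the buffer past that slot, the fetched data is discarded and the miss counter is incremented. Names and the single-consumer simplification are hypothetical; the patent supports multiple subscribed consumers.

```python
# Ring buffer with consumer-side overrun detection (hypothetical names).
class RingBuffer:
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.capacity = capacity
        self.write_seq = 0                   # total writes by the producer

    def put(self, item):
        # The producer never blocks: it overwrites the oldest slot.
        self.slots[self.write_seq % self.capacity] = item
        self.write_seq += 1

class Consumer:
    def __init__(self, ring):
        self.ring = ring
        self.read_seq = 0
        self.reads = 0                       # read counter
        self.misses = 0                      # miss counter

    def get(self):
        if self.read_seq >= self.ring.write_seq:
            return None                      # nothing new to read
        item = self.ring.slots[self.read_seq % self.ring.capacity]
        if self.ring.write_seq - self.read_seq > self.ring.capacity:
            # Producer overran this slot after we located it: discard.
            self.read_seq = self.ring.write_seq - self.ring.capacity
            self.misses += 1
            return None
        self.read_seq += 1
        self.reads += 1
        return item
```

A consumer that falls more than `capacity` writes behind skips forward to the oldest still-valid slot rather than returning overwritten data.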
-
Patent number: 11525712
Abstract: A method for transferring data blocks from a field device to a server, each data block including data describing an operation of the field device during a block time period, is provided. The method includes setting a first and a second pointer delimiting a completed time period; and, until a predetermined transfer period elapses: transferring the data blocks having a block time period that is later than the second pointer to the server in a chronological order; and if all data blocks having a block time period that is later than the second pointer have been transferred to the server, transferring the data blocks having a block time period that is earlier than the first pointer to the server in an anti-chronological order. Data blocks can efficiently and reliably be transferred to the server.
Type: Grant
Filed: May 14, 2020
Date of Patent: December 13, 2022
Assignee: SIEMENS GAMESA RENEWABLE ENERGY A/S
Inventors: Thomas Albrink, Henrik Dan Schierning Holme
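The two-pointer ordering can be sketched as follows: blocks newer than the second pointer go first, oldest-to-newest; once those are exhausted, blocks older than the first pointer go newest-to-oldest, until the transfer period's budget runs out. Timestamps standing in for block time periods, and the budget as a block count, are simplifying assumptions.

```python
# Sketch of the chronological / anti-chronological transfer order
# delimited by the two pointers (hypothetical names).
def transfer_order(block_times, first_ptr, second_ptr, budget):
    """Return block timestamps in the order they would be transferred."""
    newer = sorted(t for t in block_times if t > second_ptr)            # chronological
    older = sorted((t for t in block_times if t < first_ptr), reverse=True)  # anti-chronological
    return (newer + older)[:budget]
```

Fresh data reaches the server first, while backfill of the old gap proceeds from the newest missing block backwards whenever spare transfer time remains.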
-
Patent number: 11526415
Abstract: Systems and methods herein describe receiving identification from a data pipeline, accessing first data offset information for a first data origin and second data offset information for a second data origin, bisecting the first data origin using the first data offset information, processing the data pipeline with the bisected first data offset information and the second data offset information, receiving a notification indicating a data pipeline status, and causing presentation of the notification on a graphical user interface of a computing device.
Type: Grant
Filed: April 22, 2020
Date of Patent: December 13, 2022
Assignee: StreamSets, Inc.
Inventor: Hari Shreedharan