Patents by Inventor Ramon ZUNIGA
Ramon ZUNIGA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11544199
Abstract: A cache memory is disclosed. The cache memory includes an instruction memory portion having a plurality of instruction memory locations configured to store instruction data encoding a plurality of CPU instructions. The cache memory also includes a tag memory portion having a plurality of tag memory locations configured to store tag data encoding a plurality of RAM memory address ranges in which the CPU instructions are stored. The instruction memory portion includes a single memory circuit having an instruction memory array and a plurality of instruction peripheral circuits communicatively connected with the instruction memory array. The tag memory portion includes a plurality of tag memory circuits, where each of the tag memory circuits includes a tag memory array and a plurality of tag peripheral circuits communicatively connected with the tag memory array.
Type: Grant
Filed: October 12, 2021
Date of Patent: January 3, 2023
Assignee: SHENZHEN GOODIX TECHNOLOGY CO., LTD.
Inventors: Bassam S. Kamand, Waleed Younis, Ramon Zuniga, Jagadish Rongali
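The abstract above describes a cache split into an instruction portion and a tag portion, where the tags record which RAM address ranges the cached instructions came from. A minimal Python sketch of that split is below; the class, sizes, and addressing scheme are illustrative assumptions for a direct-mapped toy cache, not details taken from the patent.

```python
class InstructionCache:
    """Toy model of a cache split into a tag portion and an instruction
    portion. Each cache line holds one block of instruction words; the
    tag portion records which RAM block that line currently mirrors."""

    def __init__(self, num_lines=4, words_per_line=4):
        self.words_per_line = words_per_line
        self.tags = [None] * num_lines      # tag memory portion
        self.lines = [None] * num_lines     # instruction memory portion

    def _index_and_tag(self, addr):
        block = addr // self.words_per_line
        return block % len(self.tags), block // len(self.tags)

    def lookup(self, addr):
        """Return the cached instruction word, or None on a miss."""
        index, tag = self._index_and_tag(addr)
        if self.tags[index] == tag:         # tag compare decides hit/miss
            return self.lines[index][addr % self.words_per_line]
        return None

    def fill(self, addr, block_words):
        """Install a block fetched from RAM and record its tag."""
        index, tag = self._index_and_tag(addr)
        self.tags[index] = tag
        self.lines[index] = list(block_words)
```

The sketch shows why the two portions can be built differently, as the abstract suggests: the tag arrays are read and compared on every lookup, while the instruction array is only indexed after a tag match.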
-
Publication number: 20220066942
Abstract: A cache memory is disclosed. The cache memory includes an instruction memory portion having a plurality of instruction memory locations configured to store instruction data encoding a plurality of CPU instructions. The cache memory also includes a tag memory portion having a plurality of tag memory locations configured to store tag data encoding a plurality of RAM memory address ranges in which the CPU instructions are stored. The instruction memory portion includes a single memory circuit having an instruction memory array and a plurality of instruction peripheral circuits communicatively connected with the instruction memory array. The tag memory portion includes a plurality of tag memory circuits, where each of the tag memory circuits includes a tag memory array and a plurality of tag peripheral circuits communicatively connected with the tag memory array.
Type: Application
Filed: October 12, 2021
Publication date: March 3, 2022
Inventors: Bassam S. Kamand, Waleed Younis, Ramon Zuniga, Jagadish Rongali
-
Patent number: 11176051
Abstract: A cache memory is disclosed. The cache memory includes an instruction memory portion having a plurality of instruction memory locations configured to store instruction data encoding a plurality of CPU instructions. The cache memory also includes a tag memory portion having a plurality of tag memory locations configured to store tag data encoding a plurality of RAM memory address ranges in which the CPU instructions are stored. The instruction memory portion includes a single memory circuit having an instruction memory array and a plurality of instruction peripheral circuits communicatively connected with the instruction memory array. The tag memory portion includes a plurality of tag memory circuits, where each of the tag memory circuits includes a tag memory array and a plurality of tag peripheral circuits communicatively connected with the tag memory array.
Type: Grant
Filed: March 13, 2020
Date of Patent: November 16, 2021
Assignee: SHENZHEN GOODIX TECHNOLOGY CO., LTD.
Inventors: Bassam S. Kamand, Waleed Younis, Ramon Zuniga, Jagadish Rongali
-
Publication number: 20210286732
Abstract: A cache memory is disclosed. The cache memory includes an instruction memory portion having a plurality of instruction memory locations configured to store instruction data encoding a plurality of CPU instructions. The cache memory also includes a tag memory portion having a plurality of tag memory locations configured to store tag data encoding a plurality of RAM memory address ranges in which the CPU instructions are stored. The instruction memory portion includes a single memory circuit having an instruction memory array and a plurality of instruction peripheral circuits communicatively connected with the instruction memory array. The tag memory portion includes a plurality of tag memory circuits, where each of the tag memory circuits includes a tag memory array and a plurality of tag peripheral circuits communicatively connected with the tag memory array.
Type: Application
Filed: March 13, 2020
Publication date: September 16, 2021
Inventors: Bassam S. Kamand, Waleed Younis, Ramon Zuniga, Jagadish Rongali
-
Patent number: 10866612
Abstract: A clock generation circuit is disclosed. The clock generation circuit includes a logic gate configured to, in response to a control input receiving a first control signal, generate an output clock based on a first input clock received by a first identified clock input. The logic gate is further configured to, in response to the control input receiving a second control signal, generate the output clock based on a fixed logic level. The logic gate is further configured to, in response to the control input receiving a third control signal, generate the output clock based on a second input clock.
Type: Grant
Filed: June 23, 2020
Date of Patent: December 15, 2020
Assignee: Goodix Technology Inc.
Inventors: Bassam S. Kamand, Ramon Zuniga, Perry Virjee
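The clock-generation abstract describes a single logic gate whose output follows one input clock, a fixed logic level, or another input clock depending on the control input. A combinational Python model of that selection behavior is sketched below; the control encodings and names are illustrative assumptions, and this models only the selection logic, not the patented circuit's timing or glitch behavior.

```python
# Illustrative control codes: select clock A, hold a fixed level, select clock B.
SEL_A, HOLD, SEL_B = 0, 1, 2

def clock_gate(control, clk_a, clk_b, fixed_level=0):
    """Combinational model of the multiplexing logic gate: the output
    clock follows clk_a, a fixed logic level, or clk_b, depending on
    which control signal the control input is receiving."""
    if control == SEL_A:
        return clk_a            # output clock tracks the first input clock
    if control == HOLD:
        return fixed_level      # output held at a fixed logic level
    return clk_b                # output clock tracks the second input clock
```

Holding the output at a fixed level between the two clock selections is a common way to switch clock sources without emitting a runt pulse, which is presumably why the abstract lists the fixed-level state between the two clock states.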
-
Patent number: 10739813
Abstract: A clock generation circuit is disclosed. The clock generation circuit includes a logic gate configured to, in response to a control input receiving a first control signal, generate an output clock based on a first input clock received by a first identified clock input. The logic gate is further configured to, in response to the control input receiving a second control signal, generate the output clock based on a fixed logic level. The logic gate is further configured to, in response to the control input receiving a third control signal, generate the output clock based on a second input clock.
Type: Grant
Filed: March 13, 2020
Date of Patent: August 11, 2020
Assignee: GOODIX TECHNOLOGY INC.
Inventors: Bassam S. Kamand, Ramon Zuniga, Perry Virjee
-
Patent number: 10203911
Abstract: A multi-processor system with a portion of content-addressable memory (CAM) configured as a tuple space to control data flow between processing elements. A writing processor may write to a tuple space, followed by a reading processor reading from the tuple space. However, the system may control access to the tuple space so that no read operations may be performed on a particular tuple space before that space is written to. Further, no write operations may be performed on the tuple space before previously written data has been read from the tuple space. A processor wishing to use the tuple space before being permitted to do so may be stalled, thus controlling data flow between operating processors.
Type: Grant
Filed: May 18, 2016
Date of Patent: February 12, 2019
Assignee: Friday Harbor LLC
Inventors: Ricardo Jorge Lopez, Ramon Zuniga, Robert Nicholas Hilton
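The ordering rule in this abstract, where a tuple-space entry must be written before it can be read and read before it can be rewritten, is the classic full/empty handshake. A minimal Python sketch of one such entry follows; the class and method names are illustrative, and stalling is modeled simply as a call returning a "did not proceed" result rather than as an actual blocked processor.

```python
EMPTY, FULL = "empty", "full"

class TupleSlot:
    """One tuple-space entry with the abstract's ordering rule: a read
    may not proceed until the slot has been written, and a new write
    may not proceed until the previous value has been read."""

    def __init__(self):
        self.state = EMPTY
        self.value = None

    def try_write(self, value):
        """Return True if the write proceeds; False means the writer stalls."""
        if self.state == FULL:
            return False                # unread data present: writer stalls
        self.value, self.state = value, FULL
        return True

    def try_read(self):
        """Return (True, value) if the read proceeds; (False, None) stalls."""
        if self.state == EMPTY:
            return False, None          # nothing written yet: reader stalls
        value, self.value, self.state = self.value, None, EMPTY
        return True, value
```

In the patented system the stall itself is what synchronizes the processors: a reader arriving early simply waits until the matching write lands, with no separate lock or flag traffic.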
-
Patent number: 10078606
Abstract: A multiprocessor architecture utilizing direct memory access (DMA) processors that execute programmed code to feed data to one or more processor cores in advance of those cores requesting data. Stalls of the processor cores are minimized by continually feeding new data directly into the data registers within the cores. When different data is needed, the processor cores can redirect a DMA processor to execute a different feeder program, or to jump to a different point in the feeder program it is already executing. The DMA processors can also feed executable instructions into the instruction pipelines of the processor cores, allowing the feeder program to orchestrate overall processor operations.
Type: Grant
Filed: November 30, 2015
Date of Patent: September 18, 2018
Assignee: KnuEdge, Inc.
Inventors: Douglas A. Palmer, Jerome Vincent Coffin, Andrew Jonathan White, Ramon Zuniga
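This abstract describes programmable DMA "feeder" processors that stream data into a core's registers ahead of use and can be redirected by the core mid-program. A toy Python sketch of that interaction is below; the program format (a list of memory addresses) and the deque standing in for the core's data registers are illustrative assumptions, not the patent's actual instruction set.

```python
from collections import deque

class FeederDMA:
    """Toy DMA feeder that executes a simple program to stream data
    words into a core's register queue ahead of the core consuming
    them. The core can redirect it by jumping within its program."""

    def __init__(self, program, memory):
        self.program = program      # feeder program: addresses to fetch, in order
        self.memory = memory
        self.pc = 0                 # feeder's program counter
        self.registers = deque()    # stands in for the core's data registers

    def step(self):
        """Execute one feeder instruction: fetch the next word into a register."""
        if self.pc < len(self.program):
            self.registers.append(self.memory[self.program[self.pc]])
            self.pc += 1

    def jump(self, pc):
        """Core redirects the feeder to a different point in its program."""
        self.pc = pc
```

Because the feeder runs its own program, the prefetch pattern can be arbitrary code rather than a fixed stride, which is the point the abstract makes about minimizing core stalls.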
-
Publication number: 20180088904
Abstract: A semiconductor chip with a first processing element, a state machine, a first read first-in first-out (FIFO) memory component, and a second read FIFO memory component. The state machine receives a request from the first processing element for a first value from the first read FIFO memory component and a second value from the second read FIFO memory component. The first processing element may change from an active state to a second state after submitting the read request. The state machine may determine whether the first and second FIFO memory components have data. The first processing element changes back to the active state after the state machine transfers the first and second values to registers.
Type: Application
Filed: September 26, 2016
Publication date: March 29, 2018
Applicant: KNUEDGE, INC.
Inventors: Ricardo Jorge Lopez, Ramon Zuniga, Robert Nicholas Hilton, Don Yokota
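The mechanism above is a wake-on-both-operands handshake: the processing element parks after requesting a value from each of two read FIFOs, and the state machine wakes it only after both values have landed in its registers. A small Python model follows; the state names, register names, and `tick` step are illustrative assumptions used to make the handshake concrete.

```python
from collections import deque

ACTIVE, WAITING = "active", "waiting"

class DualFifoReader:
    """Toy model of the abstract's state machine: a processing element
    (PE) requests one value from each of two read FIFOs, parks in a
    waiting state, and returns to the active state only once both
    values have been transferred into its registers."""

    def __init__(self):
        self.fifo_a, self.fifo_b = deque(), deque()
        self.pe_state = ACTIVE
        self.registers = {}

    def request(self):
        self.pe_state = WAITING     # PE sleeps after submitting the read

    def tick(self):
        """State machine step: complete the transfer only when both
        FIFOs have data, then wake the PE."""
        if self.pe_state == WAITING and self.fifo_a and self.fifo_b:
            self.registers["r0"] = self.fifo_a.popleft()
            self.registers["r1"] = self.fifo_b.popleft()
            self.pe_state = ACTIVE
```

Parking the element rather than polling means it burns no cycles while either FIFO is still empty, which is presumably the power/utilization motivation behind the design.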
-
Patent number: 9880784
Abstract: In a computing system where an incoming packet can be written directly into one or more local registers of a processing unit, a packet interface routes packets arriving at the computing system to the local registers of the processing unit or to a memory shared by multiple processing units. The shared memory includes a portion configured as a first-in, first-out (FIFO) buffer for storing packets arriving for the processing unit when its local registers are full. The stored packets are then delivered to the processing unit's one or more registers when the registers become available.
Type: Grant
Filed: February 5, 2016
Date of Patent: January 30, 2018
Assignee: KnuEdge Incorporated
Inventors: Ramon Zuniga, Douglas A. Palmer
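The routing rule in this abstract is a spill path: deliver an arriving packet straight into a free local register, otherwise park it in a FIFO region of shared memory and drain it in order as registers free up. A minimal Python sketch is below; the register capacity and method names are illustrative assumptions.

```python
from collections import deque

class PacketInterface:
    """Toy model of the described packet interface: arriving packets go
    straight into the processing unit's local registers when one is
    free, and spill into a FIFO region of shared memory otherwise."""

    def __init__(self, num_registers=2):
        self.registers = []             # processing unit's local registers
        self.capacity = num_registers
        self.shared_fifo = deque()      # FIFO portion of the shared memory

    def arrive(self, packet):
        if len(self.registers) < self.capacity:
            self.registers.append(packet)    # direct delivery
        else:
            self.shared_fifo.append(packet)  # registers full: buffer in memory

    def consume(self):
        """PU consumes a register; a buffered packet then moves up in FIFO order."""
        packet = self.registers.pop(0)
        if self.shared_fifo:
            self.registers.append(self.shared_fifo.popleft())
        return packet
```

The FIFO ordering matters: packets spilled to shared memory must still reach the registers in arrival order, so the spill buffer cannot be an unordered pool.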
-
Patent number: 9858242
Abstract: Systems and methods may be provided to support memory access by packet communication and/or direct memory access. In one aspect, a memory controller may be provided for a processing device containing a plurality of computing resources. The memory controller may comprise a first interface to be coupled to a router. The first interface may be configured to transmit and receive packets. Each packet may comprise a header that may contain a routable address and a packet opcode specifying an operation to be performed in accordance with a network protocol. The memory controller may further comprise a memory bus port coupled to a plurality of memory slots that are configured to receive memory banks to form a memory, and a controller core coupled to the first interface. The controller core may be configured to decode a packet received at the first interface and perform an operation specified in the received packet.
Type: Grant
Filed: December 6, 2016
Date of Patent: January 2, 2018
Assignee: KnuEdge Incorporated
Inventors: Douglas A. Palmer, Ramon Zuniga
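The controller core described here decodes a packet whose header carries a routable address and an opcode, then performs that operation on the attached memory. A toy Python sketch of that decode-and-dispatch step follows; the opcode names, packet layout (a dict), and memory size are illustrative assumptions, since the abstract does not specify the network protocol.

```python
# Illustrative opcodes; the actual protocol encoding is not given in the abstract.
OP_WRITE, OP_READ = "write", "read"

class PacketMemoryController:
    """Toy controller core: decode a packet whose header carries a
    routable address and a packet opcode, then perform the specified
    operation on the memory behind the controller."""

    def __init__(self, size=16):
        self.memory = [0] * size    # stands in for the attached memory banks

    def handle(self, packet):
        header = packet["header"]
        addr, opcode = header["address"], header["opcode"]
        if opcode == OP_WRITE:
            self.memory[addr] = packet["payload"]
            return None
        if opcode == OP_READ:
            return self.memory[addr]    # would go back as a response packet
        raise ValueError(f"unknown opcode: {opcode}")
```

Putting the opcode in the packet header lets any computing resource on the router fabric drive the memory without a shared bus, which is the "memory access by packet communication" half of the abstract; the DMA half would bypass this decode path.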
-
Publication number: 20170337295
Abstract: A multi-processor system with a portion of content-addressable memory (CAM) configured as a tuple space to control data flow between processing elements. A writing processor may write to a tuple space, followed by a reading processor reading from the tuple space. However, the system may control access to the tuple space so that no read operations may be performed on a particular tuple space before that space is written to. Further, no write operations may be performed on the tuple space before previously written data has been read from the tuple space. A processor wishing to use the tuple space before being permitted to do so may be stalled, thus controlling data flow between operating processors.
Type: Application
Filed: May 18, 2016
Publication date: November 23, 2017
Inventors: Ricardo Jorge Lopez, Ramon Zuniga, Robert Nicholas Hilton
-
Publication number: 20170302430
Abstract: A method is performed by a data transmitting computing resource operating in a first clock domain of a computing system to transfer data to a data receiving computing resource operating in a second clock domain of the computing system different from the first clock domain. The method includes placing data on a parallel data channel including a plurality of data lines connecting the data transmitting computing resource and the data receiving computing resource; waiting a predetermined amount of time after the placing of the data on the parallel data channel, the predetermined amount of time based on different propagation times of the plurality of data lines; and, after waiting the predetermined amount of time, notifying the data receiving computing resource that the data placed on the parallel data channel are valid.
Type: Application
Filed: April 13, 2016
Publication date: October 19, 2017
Inventor: Ramon Zuniga
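The method above drives the parallel lines, waits a fixed settle time that covers the worst-case difference in line propagation delays, and only then tells the receiver the data are valid. The Python sketch below models that timing rule with integer timestamps; the class name and the choice of the maximum line delay as the settle time are illustrative assumptions.

```python
class ParallelChannelTx:
    """Toy model of the described transfer: place data on the parallel
    lines, wait a predetermined settle time covering the slowest line,
    and only then notify the receiver that the data are valid."""

    def __init__(self, line_delays):
        self.line_delays = line_delays          # per-line propagation delays
        self.settle_time = max(line_delays)     # predetermined wait (assumed)
        self.lines = None
        self.valid_at = None

    def send(self, bits, now=0):
        self.lines = list(bits)                 # drive the parallel data lines
        self.valid_at = now + self.settle_time  # notify only after settling

    def receiver_sample(self, t):
        """Receiver may sample once notified; earlier samples are unsafe."""
        if self.valid_at is None or t < self.valid_at:
            return None                         # not yet flagged valid
        return self.lines
```

Deriving the wait from the spread of line delays is what makes this work across clock domains: the receiver never needs the transmitter's clock, only the late-arriving valid notification.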
-
Publication number: 20170228194
Abstract: In a computing system where an incoming packet can be written directly into one or more local registers of a processing unit, a packet interface routes packets arriving at the computing system to the local registers of the processing unit or to a memory shared by multiple processing units. The shared memory includes a portion configured as a first-in, first-out (FIFO) buffer for storing packets arriving for the processing unit when its local registers are full. The stored packets are then delivered to the processing unit's one or more registers when the registers become available.
Type: Application
Filed: February 5, 2016
Publication date: August 10, 2017
Inventors: Ramon Zuniga, Douglas A. Palmer
-
Publication number: 20170153993
Abstract: A multiprocessor architecture utilizing direct memory access (DMA) processors that execute programmed code to feed data to one or more processor cores in advance of those cores requesting data. Stalls of the processor cores are minimized by continually feeding new data directly into the data registers within the cores. When different data is needed, the processor cores can redirect a DMA processor to execute a different feeder program, or to jump to a different point in the feeder program it is already executing. The DMA processors can also feed executable instructions into the instruction pipelines of the processor cores, allowing the feeder program to orchestrate overall processor operations.
Type: Application
Filed: November 30, 2015
Publication date: June 1, 2017
Applicant: KNUEDGE, INC.
Inventors: Douglas A. Palmer, Jerome Vincent Coffin, Andrew Jonathan White, Ramon Zuniga
-
Publication number: 20170083477
Abstract: Systems and methods may be provided to support memory access by packet communication and/or direct memory access. In one aspect, a memory controller may be provided for a processing device containing a plurality of computing resources. The memory controller may comprise a first interface to be coupled to a router. The first interface may be configured to transmit and receive packets. Each packet may comprise a header that may contain a routable address and a packet opcode specifying an operation to be performed in accordance with a network protocol. The memory controller may further comprise a memory bus port coupled to a plurality of memory slots that are configured to receive memory banks to form a memory, and a controller core coupled to the first interface. The controller core may be configured to decode a packet received at the first interface and perform an operation specified in the received packet.
Type: Application
Filed: December 6, 2016
Publication date: March 23, 2017
Applicant: KnuEdge Incorporated
Inventors: Douglas A. Palmer, Ramon Zuniga
-
Patent number: 9552327
Abstract: Systems and methods may be provided to support memory access by packet communication and/or direct memory access. In one aspect, a memory controller may be provided for a processing device containing a plurality of computing resources. The memory controller may comprise a first interface to be coupled to a router. The first interface may be configured to transmit and receive packets. Each packet may comprise a header that may contain a routable address and a packet opcode specifying an operation to be performed in accordance with a network protocol. The memory controller may further comprise a memory bus port coupled to a plurality of memory slots that are configured to receive memory banks to form a memory, and a controller core coupled to the first interface. The controller core may be configured to decode a packet received at the first interface and perform an operation specified in the received packet.
Type: Grant
Filed: January 29, 2015
Date of Patent: January 24, 2017
Assignee: KnuEdge Incorporated
Inventors: Douglas A. Palmer, Ramon Zuniga
-
Publication number: 20160224508
Abstract: Systems and methods may be provided to support memory access by packet communication and/or direct memory access. In one aspect, a memory controller may be provided for a processing device containing a plurality of computing resources. The memory controller may comprise a first interface to be coupled to a router. The first interface may be configured to transmit and receive packets. Each packet may comprise a header that may contain a routable address and a packet opcode specifying an operation to be performed in accordance with a network protocol. The memory controller may further comprise a memory bus port coupled to a plurality of memory slots that are configured to receive memory banks to form a memory, and a controller core coupled to the first interface. The controller core may be configured to decode a packet received at the first interface and perform an operation specified in the received packet.
Type: Application
Filed: January 29, 2015
Publication date: August 4, 2016
Applicant: THE INTELLISIS CORPORATION
Inventors: Douglas A. PALMER, Ramon ZUNIGA