Patents by Inventor Jose G. Delgado-Frias
Jose G. Delgado-Frias has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20090141711
Abstract: An interleaved multistage switching fabric includes Y multistage switching fabric panels, where Y is an integer greater than one. Each panel has primary inputs for receiving cells to be routed, local outputs for outputting routed cells, primary outputs for outputting non-routed cells, and reentry points for introducing non-routed cells into the panel. The switching fabric also includes at least one demultiplexer subsystem, communicatively coupled to the primary inputs of each panel, for interfacing the switching fabric with input lines, and at least one multiplexer subsystem, communicatively coupled to the local outputs of each panel, for interfacing the switching fabric with destination queues. Finally, the switching fabric includes Y recirculation connections, where each recirculation connection communicatively couples the primary outputs of one panel to the reentry points of another panel.
Type: Application
Filed: November 26, 2008
Publication date: June 4, 2009
Applicant: WASHINGTON STATE UNIVERSITY RESEARCH FOUNDATION
Inventors: Rongsen He, Jose G. Delgado-Frias
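The abstract does not specify how the Y recirculation connections pair panels, only that each couples one panel's primary outputs to another panel's reentry points. A minimal sketch, assuming a simple ring ordering in which non-routed cells from panel i re-enter panel (i + 1) mod Y (the ring ordering is our assumption, not stated in the abstract):

```python
# Sketch of the recirculation wiring: Y interleaved panels, with the
# primary outputs of each panel coupled to the reentry points of the
# next panel in a ring (the ring ordering itself is an assumption).
Y = 4  # number of multistage switching fabric panels; Y > 1 per the abstract

def reentry_panel(panel_index: int, y: int = Y) -> int:
    """Panel whose reentry points receive this panel's non-routed cells."""
    return (panel_index + 1) % y

# Every panel feeds exactly one other panel, and all Y panels are covered,
# so the Y connections form a single recirculation cycle over the fabric.
targets = {reentry_panel(i) for i in range(Y)}
assert targets == set(range(Y))
```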
-
Patent number: 5612908
Abstract: Image processing for multimedia workstations is a computationally intensive task requiring special-purpose hardware to meet its high-speed requirements. One type of specialized hardware that meets these requirements is the mesh-connected computer, which becomes a massively parallel machine when the network-interconnected array of processors is replicated within a machine. The nearest-neighbor mesh computer consists of an N×N square array of Processor Elements (PEs), where each PE is connected only to its North, South, East, and West PEs. Assuming a single-wire interface between PEs, there are a total of 2N² wires in the mesh structure.
Type: Grant
Filed: February 9, 1994
Date of Patent: March 18, 1997
Assignee: International Business Machines Corporation
Inventors: Gerald G. Pechanek, Stamatis Vassiliadis, Jose G. Delgado-Frias
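The 2N² wire count in the abstract can be checked by crediting each wire to exactly one PE. A quick sanity check, assuming wraparound (torus) links so that every PE has all four neighbors; each PE then owns its East wire and its South wire, and the North and West wires it touches are owned by its neighbors:

```python
def mesh_wires(n: int) -> int:
    """Single-wire links in an n x n nearest-neighbor mesh with wraparound.

    Crediting each PE with its East and South links counts every wire
    exactly once: n*n East wires + n*n South wires = 2*n*n.
    """
    east = n * n   # one East wire per PE (wrapping at the edge)
    south = n * n  # one South wire per PE (wrapping at the edge)
    return east + south

assert mesh_wires(4) == 2 * 4 ** 2  # 32 wires for a 4 x 4 mesh
```

Without wraparound the count would be 2N(N−1); the abstract's 2N² total corresponds to the wraparound case.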
-
Patent number: 5613044
Abstract: A neural synapse processor apparatus having a neuron architecture for the synapse processing elements of the apparatus. The preferred apparatus has an N-neuron structure whose synapse processing units contain instruction and data storage units, receive instructions and data, and execute instructions. The N-neuron structure contains communicating adder trees, neuron activation function units, and an arrangement for communicating instructions, data, and the outputs of the neuron activation function units back to the input synapse processing units by means of the communicating adder trees. The apparatus can be structured as a bit-serial or word-parallel system. The preferred structure contains N² synapse processing units, each associated with a connection weight in the neural network to be emulated, placed in the form of an N-by-N matrix that has been folded along the diagonal and made up of diagonal cells and general cells.
Type: Grant
Filed: June 2, 1995
Date of Patent: March 18, 1997
Assignee: International Business Machines Corporation
Inventors: Gerald G. Pechanek, Stamatis Vassiliadis, Jose G. Delgado-Frias
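The folded-matrix count in the abstract can be verified with a little arithmetic. A sketch, assuming (as in triangular-array folding) that each of the N diagonal cells holds one synapse processing unit and each general cell below the diagonal holds the symmetric pair of units for weights w[i][j] and w[j][i]:

```python
def folded_matrix_cells(n: int):
    """Cell and unit counts for an n x n weight matrix folded on its diagonal."""
    diagonal_cells = n                 # one unit each: weight w[i][i]
    general_cells = n * (n - 1) // 2   # two units each: w[i][j] and w[j][i]
    synapse_units = diagonal_cells + 2 * general_cells
    return diagonal_cells, general_cells, synapse_units

diag, gen, units = folded_matrix_cells(8)
assert units == 8 ** 2  # folding rearranges the cells, but all N^2 units remain
```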
-
Patent number: 5517596
Abstract: A neural synapse processor apparatus having a neuron architecture for the synapse processing elements of the apparatus. The preferred apparatus has an N-neuron structure whose synapse processing units contain instruction and data storage units, receive instructions and data, and execute instructions. The N-neuron structure contains communicating adder trees, neuron activation function units, and an arrangement for communicating instructions, data, and the outputs of the neuron activation function units back to the input synapse processing units by means of the communicating adder trees. The apparatus can be structured as a bit-serial or word-parallel system. The preferred structure contains N² synapse processing units, each associated with a connection weight in the neural network to be emulated, placed in the form of an N-by-N matrix that has been folded along the diagonal and made up of diagonal cells and general cells.
Type: Grant
Filed: December 1, 1993
Date of Patent: May 14, 1996
Assignee: International Business Machines Corporation
Inventors: Gerald G. Pechanek, Stamatis Vassiliadis, Jose G. Delgado-Frias
-
Patent number: 5483620
Abstract: A neural synapse processor apparatus having a neuron architecture for the synapse processing elements of the apparatus. The preferred apparatus has an N-neuron structure whose synapse processing units contain instruction and data storage units, receive instructions and data, and execute instructions. The N-neuron structure contains communicating adder trees, neuron activation function units, and an arrangement for communicating instructions, data, and the outputs of the neuron activation function units back to the input synapse processing units by means of the communicating adder trees. The apparatus can be structured as a bit-serial or word-parallel system. The preferred structure contains N² synapse processing units, each associated with a connection weight in the neural network to be emulated, placed in the form of an N-by-N matrix that has been folded along the diagonal and made up of diagonal cells and general cells.
Type: Grant
Filed: March 9, 1995
Date of Patent: January 9, 1996
Assignee: International Business Machines Corp.
Inventors: Gerald G. Pechanek, Stamatis Vassiliadis, Jose G. Delgado-Frias
-
Patent number: 5337395
Abstract: A neural network architecture consisting of input weight multiplications, product summation, neural state calculations, and complete connectivity among the neuron processing elements. Neural networks are modeled using a sequential pipelined neurocomputer that produces high performance with minimum hardware by sequentially processing each neuron in the completely connected network model. An N-neuron network is implemented using multipliers, a pipelined adder tree structure, and activation functions. The activation functions are provided by a single activation function module, with the N input product summations passed through it sequentially. One bus provides N×N communications by sequentially supplying N neuron values to the multiplier registers. The neuron values are ensured of reaching their corresponding multipliers through a tag compare function; the neuron information includes a source tag and a valid signal.
Type: Grant
Filed: April 8, 1991
Date of Patent: August 9, 1994
Assignee: International Business Machines Corporation
Inventors: Stamatis Vassiliadis, Gerald G. Pechanek, Jose G. Delgado-Frias
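The tag-compare delivery described above can be sketched as a one-cycle bus transaction: a multiplier register latches a broadcast neuron value only when the source tag matches its expected tag and the valid signal is asserted. The function and field names below are illustrative, not from the patent:

```python
def latch_if_match(expected_tag, bus_tag, bus_valid, bus_value, current_value):
    """One bus cycle at one multiplier register on the shared neuron bus."""
    if bus_valid and bus_tag == expected_tag:
        return bus_value      # matching source tag: capture the neuron value
    return current_value      # otherwise hold the register's old contents

# A register waiting on neuron 2 ignores neuron 1's broadcast...
assert latch_if_match(2, 1, True, 0.7, None) is None
# ...and latches neuron 2's value when it arrives with the valid signal set.
assert latch_if_match(2, 2, True, 0.9, None) == 0.9
```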
-
Patent number: 5329611
Abstract: A scalable-flow virtual learning neurocomputer system and apparatus with a scalable hybrid control-flow/data-flow architecture employing a group partitioning algorithm, and a scalable virtual learning architecture, synapse processor architecture mapping, inner square folding and array separation, with the capability of back propagation for virtual learning. The group partitioning algorithm creates a common building block of synapse processors containing their own external memory. The processor groups are used to create a general-purpose virtual learning machine that maintains complete connectivity with high performance. The synapse processor group allows a system to be scalable in virtual size and in direct execution capabilities. Internal to the processor group, the synapse processors are designed as a hybrid control-flow/data-flow architecture with external memory access and reduced synchronization problems.
Type: Grant
Filed: June 21, 1993
Date of Patent: July 12, 1994
Assignee: International Business Machines Corp.
Inventors: Gerald G. Pechanek, Stamatis Vassiliadis, Jose G. Delgado-Frias
-
Patent number: 5325464
Abstract: The Pyramid Learning Architecture Neurocomputer (PLAN) is a scalable stacked-pyramid arrangement of processor arrays. PLAN has six processing levels: the pyramid base, Level 6, containing N² SYnapse Processors (SYPs); Level 5, containing multiple folded Communicating Adder Tree structures (SCATs); Level 4, made up of N completely connected Neuron Execution Processors (NEPs); Level 3, made up of multiple Programmable Communicating ALU Tree (PCAT) structures, similar to the Level 5 SCATs but with programmable function capabilities in each tree node; Level 2, containing the Neuron Instruction Processor (NIP); and Level 1, comprising the Host and user interface. The simplest processors are in the base level, with each layer of processors increasing in computational power up to a general-purpose host computer acting as the user interface. PLAN is scalable both in direct neural network emulation and in virtual processing capability.
Type: Grant
Filed: June 18, 1993
Date of Patent: June 28, 1994
Assignee: International Business Machines Corporation
Inventors: Gerald G. Pechanek, Stamatis Vassiliadis, Jose G. Delgado-Frias
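The per-level processor counts stated in the abstract can be tabulated to show how PLAN scales with N. Only Levels 6 and 4 have exact counts in the abstract (N² SYPs and N NEPs); Levels 5 and 3 hold "multiple" tree structures, left symbolic here, and Levels 2 and 1 each hold a single unit:

```python
def plan_level_counts(n: int) -> dict:
    """Processor counts per PLAN level for an n-neuron configuration."""
    return {
        6: n ** 2,       # SYnapse Processors (SYPs) at the pyramid base
        5: "multiple",   # folded Communicating Adder Trees (SCATs)
        4: n,            # completely connected Neuron Execution Processors (NEPs)
        3: "multiple",   # Programmable Communicating ALU Trees (PCATs)
        2: 1,            # Neuron Instruction Processor (NIP)
        1: 1,            # Host and user interface
    }

counts = plan_level_counts(16)
assert counts[6] == 256 and counts[4] == 16  # base grows as N^2, NEPs as N
```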
-
Patent number: 5243688
Abstract: The architectures for a scalable neural processor (SNAP) and a Triangular Scalable Neural Array Processor (T-SNAP) are expanded to handle network simulations in which the number of neurons to be modeled exceeds the number of physical neurons implemented. This virtual neural processing is described through three general virtual architectural approaches for handling the virtual neurons: one for SNAP, one for T-SNAP, and a third applied to both.
Type: Grant
Filed: May 17, 1991
Date of Patent: September 7, 1993
Assignee: International Business Machines Corporation
Inventors: Gerald G. Pechanek, Stamatis Vassiliadis, Jose G. Delgado-Frias
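One way to picture virtual neural processing is a time-multiplexed mapping of V modeled neurons onto P physical neurons. The round-robin schedule below is only an illustrative assumption; the patent describes three distinct architectural approaches, not this specific mapping:

```python
def virtual_schedule(v_neurons: int, p_physical: int) -> dict:
    """Map each virtual neuron to a (physical neuron, time step) slot."""
    return {v: (v % p_physical, v // p_physical) for v in range(v_neurons)}

sched = virtual_schedule(10, 4)  # 10 modeled neurons on 4 physical neurons
assert sched[9] == (1, 2)        # virtual neuron 9 -> physical neuron 1, step 2
assert len(sched) == 10          # every modeled neuron receives a slot
```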