Patents by Inventor R. Iris Bahar

R. Iris Bahar has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230146689
    Abstract: A hardware neural network system includes an input buffer for input neurons (Nbin), an output buffer for output neurons (Nbout), and a third buffer for synaptic weights (SB) connected to a Neural Functional Unit (NFU) and control logic (CP) for performing synapse and neuron computations. The NFU pipelines a computation into stages, the stages including weight blocks (WB), an adder tree, and a non-linearity function. (A short functional sketch of this pipeline appears after this listing.)
    Type: Application
    Filed: December 5, 2022
    Publication date: May 11, 2023
    Inventors: Sherief Reda, Hokchhay Tann, Soheil Hashemi, R. Iris Bahar
  • Patent number: 11521047
    Abstract: A hardware neural network system includes an input buffer for input neurons (Nbin), an output buffer for output neurons (Nbout), and a third buffer for synaptic weights (SB) connected to a Neural Functional Unit (NFU) and control logic (CP) for performing synapse and neuron computations. The NFU pipelines a computation into stages, the stages including weight blocks (WB), an adder tree, and a non-linearity function.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: December 6, 2022
    Assignee: Brown University
    Inventors: Sherief Reda, Hokchhay Tann, Soheil Hashemi, R. Iris Bahar
  • Patent number: 5619418
    Abstract: An integrated circuit, when designed, must adhere to timing constraints while attempting to minimize circuit area. To meet timing specifications while arriving at a near-optimal circuit area, an iterative process selectively increases logic gate sizes by accessing gates from a logic gate library stored in memory. A circuit representation is read along with timing constraints for its circuit paths. Each circuit path is processed to find its actual path delay. The most out-of-specification circuit path (in terms of speed) is chosen, and a sensitivity calculation is performed for each logic gate on that path. The logic gate with the maximum sensitivity (sensitivity = Δspeed/Δarea) is increased in size by accessing a larger gate from the library, improving speed at the expense of area. (A simplified sketch of this sizing loop appears after this listing.)
    Type: Grant
    Filed: February 16, 1995
    Date of Patent: April 8, 1997
    Assignee: Motorola, Inc.
    Inventors: David T. Blaauw, Joseph W. Norton, Larry G. Jones, Susanta Misra, R. Iris Bahar
  • Patent number: 5155843
    Abstract: A pipelined CPU executing instructions of variable length, and referencing memory using various data widths. Macroinstruction pipelining is employed (instead of microinstruction pipelining), with queueing between units of the CPU to allow flexibility in instruction execution times. A wide bandwidth is available for memory access, fetching 64-bit data blocks on each cycle. A hierarchical cache arrangement has an improved method of cache set selection, increasing the likelihood of a cache hit. A writeback cache is used (instead of writethrough), and writeback is allowed to proceed even though other accesses are suppressed due to queues being full. A branch prediction method employs a branch history table which records the taken vs. not-taken history of recently used branch opcodes and applies an empirical algorithm to predict which way the next occurrence of a branch will go, based upon that history. (An illustrative model of such a history table appears after this listing.)
    Type: Grant
    Filed: June 29, 1990
    Date of Patent: October 13, 1992
    Assignee: Digital Equipment Corporation
    Inventors: Rebecca L. Stamm, R. Iris Bahar, Michael Callander, Linda Chao, Derrick R. Meyer, Douglas Sanders, Richard L. Sites, Raymond Strouble, Nicholas Wade
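
The three-stage NFU pipeline described in publication 20230146689 and patent 11521047 above can be illustrated with a short functional model. The following is a minimal Python/NumPy sketch assuming a single fully connected layer and a sigmoid non-linearity; the buffer shapes, the choice of non-linearity, and the function name are illustrative assumptions rather than details taken from the patent.

```python
# A minimal functional model of the three-stage NFU pipeline (WB multiplies,
# adder tree sums, non-linearity squashes). Shapes and the sigmoid are
# placeholder assumptions, not details from the patent.
import numpy as np

def nfu_forward(nbin: np.ndarray, sb: np.ndarray) -> np.ndarray:
    """Compute output neurons (Nbout) from input neurons (Nbin) and weights (SB).

    nbin: vector of input neuron values, shape (n_in,)
    sb:   synaptic weight matrix, shape (n_out, n_in)
    """
    # Stage 1 (WB): multiply each input neuron by its synaptic weight.
    products = sb * nbin                # shape (n_out, n_in)

    # Stage 2 (adder tree): reduce each row of partial products to a single sum.
    sums = products.sum(axis=1)         # shape (n_out,)

    # Stage 3 (non-linearity): the abstract does not name the function;
    # a sigmoid is used purely as a placeholder.
    return 1.0 / (1.0 + np.exp(-sums))

# Example: 8 input neurons feeding 4 output neurons.
rng = np.random.default_rng(0)
print(nfu_forward(rng.random(8), rng.random((4, 8))))
```

In hardware the three stages would operate concurrently on successive blocks of data; the sequential NumPy version above only mirrors the dataflow, not the pipelining itself.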
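
The gate-sizing procedure in patent 5619418 is essentially a greedy loop: find the most out-of-specification path, compute sensitivity = Δspeed/Δarea for every gate on it, and upsize the single gate with the highest sensitivity. The sketch below uses a simplified delay model and data structures of its own (the Gate class and the (gates, max_delay) path representation are assumptions for illustration), not the patented implementation.

```python
# A simplified sketch of the iterative, sensitivity-driven gate-sizing loop.
# The Gate structure, delay model, and path representation are illustrative
# assumptions; only the overall greedy strategy follows the abstract.
from dataclasses import dataclass, field

@dataclass
class Gate:
    name: str
    size: int = 0                                  # index into this gate's library sizes
    delays: list = field(default_factory=list)     # delays[size]: smaller is faster
    areas: list = field(default_factory=list)      # areas[size]: larger gate costs more area

def path_delay(gates):
    """Total delay of a path: the sum of its gates' delays at their current sizes."""
    return sum(g.delays[g.size] for g in gates)

def size_for_timing(paths, max_iters=1000):
    """paths: list of (gates, max_delay) pairs.

    Repeatedly pick the most out-of-specification path and upsize the gate on it
    with the best sensitivity = delta(speed) / delta(area), until timing is met.
    """
    for _ in range(max_iters):
        gates, max_delay = max(paths, key=lambda p: path_delay(p[0]) - p[1])
        if path_delay(gates) <= max_delay:
            return True                            # every path meets its constraint

        best_gate, best_sens = None, 0.0
        for g in gates:
            if g.size + 1 >= len(g.delays):
                continue                           # no larger version in the library
            d_speed = g.delays[g.size] - g.delays[g.size + 1]
            d_area = g.areas[g.size + 1] - g.areas[g.size]
            sens = d_speed / d_area if d_area > 0 else float("inf")
            if sens > best_sens:
                best_gate, best_sens = g, sens

        if best_gate is None:
            return False                           # the worst path cannot be sped up further
        best_gate.size += 1                        # swap in the larger library gate
    return False
```

Upsizing the gate with the largest Δspeed/Δarea buys the most timing improvement per unit of added area at each step, which is how the procedure trades area for speed as sparingly as possible.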
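
The branch prediction scheme in patent 5155843 keys a branch history table on recently used branch opcodes and predicts each branch's next outcome from its recorded taken/not-taken history. The model below is a common textbook stand-in for such a table (direct-mapped, indexed by branch address, with 2-bit saturating counters); the table size, indexing, and update rule are assumptions here, and the patent's own "empirical algorithm" is not reproduced.

```python
# An illustrative branch history table: a direct-mapped array of 2-bit
# saturating counters indexed by the branch address. The sizes and the
# counter update rule are textbook stand-ins, not the patented algorithm.

class BranchHistoryTable:
    def __init__(self, entries=256):
        self.entries = entries
        # 0,1 predict not-taken; 2,3 predict taken. Start weakly not-taken.
        self.table = [1] * entries

    def _index(self, pc):
        return pc % self.entries          # low-order bits of the branch address

    def predict(self, pc):
        """Return True if the branch at address `pc` is predicted taken."""
        return self.table[self._index(pc)] >= 2

    def update(self, pc, taken):
        """Record the actual outcome, saturating the counter at 0 and 3."""
        i = self._index(pc)
        if taken:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)

# Example: a loop branch at address 0x400, taken nine times and then not taken.
bht = BranchHistoryTable()
correct = 0
for taken in [True] * 9 + [False]:
    if bht.predict(0x400) == taken:
        correct += 1
    bht.update(0x400, taken)
print(f"correctly predicted {correct} of 10 outcomes")
```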