Patents by Inventor Alexander Smith Neckar

Alexander Smith Neckar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11783169
    Abstract: Systems, apparatus, and methods for thread-based scheduling within a multicore processor. Neural networking uses a network of connected nodes (a.k.a. neurons) to loosely model the neuro-biological functionality found in the human brain. Various embodiments of the present disclosure use thread dependency graph analysis to decouple scheduling across many distributed cores. Rather than using thread dependency graphs to generate a sequential ordering for a centralized scheduler, the individual thread dependencies define a count value for each thread at compile-time. Threads and their thread dependency counts are distributed to each core at run-time. Thereafter, each core can dynamically determine which threads to execute based on fulfilled thread dependencies without requiring a centralized scheduler. A brief illustrative sketch of this dependency-count scheme follows this entry.
    Type: Grant
    Filed: January 2, 2023
    Date of Patent: October 10, 2023
    Assignee: Femtosense, Inc.
    Inventors: Sam Brian Fok, Alexander Smith Neckar
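For readers unfamiliar with dependency-count scheduling, the following is a minimal sketch of the idea described in the abstract above. The thread names, the example graph, and the single ready queue are invented for illustration; this is not Femtosense's implementation.

```python
from collections import defaultdict, deque

# "Compile-time": a thread dependency graph; edge (u, v) means v waits on u.
edges = [("A", "C"), ("B", "C"), ("C", "D"), ("B", "D")]

threads = {t for e in edges for t in e}
dep_count = defaultdict(int)                 # per-thread dependency count
successors = defaultdict(list)
for u, v in edges:
    dep_count[v] += 1
    successors[u].append(v)

# "Run-time": no central scheduler orders the threads; a thread becomes
# runnable the moment its own dependency count reaches zero.
ready = deque(t for t in sorted(threads) if dep_count[t] == 0)
while ready:
    t = ready.popleft()
    print(f"execute {t}")
    for v in successors[t]:                  # completing t fulfills one dependency of v
        dep_count[v] -= 1
        if dep_count[v] == 0:
            ready.append(v)
```

Here a single queue stands in for what would be per-core state; the point is that counts computed from the graph at compile-time replace any run-time global ordering.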
  • Patent number: 11775810
    Abstract: Systems, apparatus, and methods for thread-based scheduling within a multicore processor. Neural networking uses a network of connected nodes (a.k.a. neurons) to loosely model the neuro-biological functionality found in the human brain. Various embodiments of the present disclosure use thread dependency graph analysis to decouple scheduling across many distributed cores. Rather than using thread dependency graphs to generate a sequential ordering for a centralized scheduler, the individual thread dependencies define a count value for each thread at compile-time. Threads and their thread dependency counts are distributed to each core at run-time. Thereafter, each core can dynamically determine which threads to execute based on fulfilled thread dependencies without requiring a centralized scheduler.
    Type: Grant
    Filed: January 2, 2023
    Date of Patent: October 3, 2023
    Assignee: Femtosense, Inc.
    Inventors: Sam Brian Fok, Alexander Smith Neckar
  • Publication number: 20230153595
    Abstract: Systems, apparatus, and methods for thread-based scheduling within a multicore processor. Neural networking uses a network of connected nodes (a.k.a. neurons) to loosely model the neuro-biological functionality found in the human brain. Various embodiments of the present disclosure use thread dependency graph analysis to decouple scheduling across many distributed cores. Rather than using thread dependency graphs to generate a sequential ordering for a centralized scheduler, the individual thread dependencies define a count value for each thread at compile-time. Threads and their thread dependency counts are distributed to each core at run-time. Thereafter, each core can dynamically determine which threads to execute based on fulfilled thread dependencies without requiring a centralized scheduler.
    Type: Application
    Filed: January 2, 2023
    Publication date: May 18, 2023
    Applicant: Femtosense, Inc.
    Inventors: Sam Brian Fok, Alexander Smith Neckar
  • Publication number: 20230153596
    Abstract: Systems, apparatus, and methods for thread-based scheduling within a multicore processor. Neural networking uses a network of connected nodes (a.k.a. neurons) to loosely model the neuro-biological functionality found in the human brain. Various embodiments of the present disclosure use thread dependency graph analysis to decouple scheduling across many distributed cores. Rather than using thread dependency graphs to generate a sequential ordering for a centralized scheduler, the individual thread dependencies define a count value for each thread at compile-time. Threads and their thread dependency counts are distributed to each core at run-time. Thereafter, each core can dynamically determine which threads to execute based on fulfilled thread dependencies without requiring a centralized scheduler.
    Type: Application
    Filed: January 2, 2023
    Publication date: May 18, 2023
    Applicant: Femtosense, Inc.
    Inventors: Sam Brian Fok, Alexander Smith Neckar
  • Publication number: 20230133088
    Abstract: Methods and apparatus for a multi-purpose neural network core and memory. The asynchronous/parallel nature of neural network tasks may allow a neural network IP core to dynamically switch between: a system memory (in whole or in part), a neural network processor (in whole or in part), and/or a hybrid of system memory and neural network processor. In one specific implementation, the multi-purpose neural network IP core has partitioned its sub-cores into a first set of neural network sub-cores and a second set of memory sub-cores that operate as addressable memory space. Partitioning may be statically assigned at “compile-time”, dynamically assigned at “run-time”, or semi-statically assigned at “program-time”. Any number of considerations may be used to partition the sub-cores; examples of such considerations may include, without limitation: thread priority, memory usage, historic usage, future usage, power consumption, performance, etc. A brief sketch of such sub-core partitioning follows this entry.
    Type: Application
    Filed: October 25, 2022
    Publication date: May 4, 2023
    Applicant: Femtosense, Inc.
    Inventors: Sam Brian Fok, Alexander Smith Neckar, Gabriel Vega
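A minimal sketch of the sub-core partitioning idea described above. The class names, sub-core count, and SRAM size are hypothetical; the actual IP core's interfaces are not reproduced here.

```python
class SubCore:
    def __init__(self, ident, sram_words):
        self.ident = ident
        self.sram_words = sram_words
        self.role = "compute"                # "compute" or "memory"

class MultiPurposeCore:
    def __init__(self, n_subcores=8, sram_words=4096):
        self.subcores = [SubCore(i, sram_words) for i in range(n_subcores)]

    def partition(self, n_memory):
        """Assign roles statically (compile-time) or dynamically (run-time)."""
        for i, sc in enumerate(self.subcores):
            sc.role = "memory" if i < n_memory else "compute"

    def addressable_words(self):
        # Sub-cores in the "memory" set appear as plain addressable memory.
        return sum(sc.sram_words for sc in self.subcores if sc.role == "memory")

core = MultiPurposeCore()
core.partition(n_memory=3)                   # e.g., chosen from memory-usage stats
print(core.addressable_words())              # 12288 words usable as system memory
```

Repartitioning is just a call to partition() with a different split, which is what lets the same silicon serve as memory, as a neural network processor, or as a hybrid of the two.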
  • Patent number: 11625592
    Abstract: Systems, apparatus, and methods for thread-based scheduling within a multicore processor. Neural networking uses a network of connected nodes (a.k.a. neurons) to loosely model the neuro-biological functionality found in the human brain. Various embodiments of the present disclosure use thread dependency graph analysis to decouple scheduling across many distributed cores. Rather than using thread dependency graphs to generate a sequential ordering for a centralized scheduler, the individual thread dependencies define a count value for each thread at compile-time. Threads and their thread dependency counts are distributed to each core at run-time. Thereafter, each core can dynamically determine which threads to execute based on fulfilled thread dependencies without requiring a centralized scheduler.
    Type: Grant
    Filed: July 5, 2021
    Date of Patent: April 11, 2023
    Assignee: Femtosense, Inc.
    Inventors: Sam Brian Fok, Alexander Smith Neckar
  • Publication number: 20220012060
    Abstract: Systems, apparatus, and methods for thread-based scheduling within a multicore processor. Neural networking uses a network of connected nodes (a.k.a. neurons) to loosely model the neuro-biological functionality found in the human brain. Various embodiments of the present disclosure use thread dependency graph analysis to decouple scheduling across many distributed cores. Rather than using thread dependency graphs to generate a sequential ordering for a centralized scheduler, the individual thread dependencies define a count value for each thread at compile-time. Threads and their thread dependency counts are distributed to each core at run-time. Thereafter, each core can dynamically determine which threads to execute based on fulfilled thread dependencies without requiring a centralized scheduler.
    Type: Application
    Filed: July 5, 2021
    Publication date: January 13, 2022
    Applicant: Femtosense, Inc.
    Inventors: Sam Brian Fok, Alexander Smith Neckar
  • Publication number: 20220012575
    Abstract: Methods and apparatus for localized processing within multicore neural networks. Unlike existing solutions that rely on commodity software and hardware to perform “brute force” large-scale neural network processing, the various techniques described herein map and partition a neural network to fit the hardware limitations of a target platform. Specifically, the various implementations described herein synergistically leverage localization, sparsity, and distributed scheduling to enable neural network processing within embedded hardware applications. As described herein, hardware-aware mapping/partitioning enhances neural network performance by, e.g., avoiding pin-limited memory accesses, processing data in compressed formats/skipping unnecessary operations, and decoupling scheduling between cores. A brief sketch of hardware-aware partitioning follows this entry.
    Type: Application
    Filed: July 5, 2021
    Publication date: January 13, 2022
    Applicant: Femtosense, Inc.
    Inventors: Sam Brian Fok, Alexander Smith Neckar, Scott Henry Reid
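A minimal sketch of hardware-aware partitioning, assuming an invented per-core memory budget and a randomly generated sparse layer. The function below simply blocks a weight matrix by rows so that each block's non-zero entries fit one core's local memory, keeping accesses on-chip at run-time.

```python
import numpy as np

def partition_rows(weights, words_per_core):
    """Yield (row_start, row_end) blocks whose non-zeros fit one core's memory."""
    start, nnz = 0, 0
    for r in range(weights.shape[0]):
        row_nnz = int(np.count_nonzero(weights[r]))
        if nnz + row_nnz > words_per_core and r > start:
            yield (start, r)                 # close the current block
            start, nnz = r, 0                # open a new block at this row
        nnz += row_nnz
    yield (start, weights.shape[0])          # final block

rng = np.random.default_rng(0)
mask = rng.random((64, 64)) < 0.1            # ~90% sparse connectivity
w = rng.random((64, 64)) * mask              # example sparse weight layer
print(list(partition_rows(w, words_per_core=64)))
```

Note that the budget is checked against non-zero entries only: sparsity directly increases how much of the network fits per core.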
  • Publication number: 20220012598
    Abstract: Methods and apparatus for matrix and vector storage and operations are disclosed. Vectors and matrices may be represented differently to further enhance performance of operations. Exemplary embodiments compress sparse neural network data structures based on actual, non-null connectivity (rather than all possible connections). This greatly reduces storage requirements as well as computational complexity. In some variants, the compression and reduction in complexity is sized to fit within the memory footprint and processing capabilities of a core. The exemplary compression schemes represent sparse matrices with links to compressed column data structures, where each compressed column data structure stores only non-null entries to optimize column-based lookups of non-null entries. Similarly, sparse vector addressing skips nulled entries to optimize for vector-specific non-null multiply-accumulate operations. A brief sketch of this compressed-column layout follows this entry.
    Type: Application
    Filed: July 5, 2021
    Publication date: January 13, 2022
    Applicant: Femtosense, Inc.
    Inventors: Sam Brian Fok, Alexander Smith Neckar, Manish Shrivastava
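A minimal sketch of a compressed-column layout in the spirit of the abstract (essentially compressed-sparse-column storage). The class names and the tiny example matrix are invented; the patent's exact data structures are not reproduced.

```python
class SparseColumn:
    def __init__(self, entries):              # entries: {row_index: value}
        self.rows = sorted(entries)            # non-null row indices only
        self.vals = [entries[r] for r in self.rows]

class SparseMatrix:
    def __init__(self, n_rows, columns):       # columns: list of SparseColumn
        self.n_rows, self.columns = n_rows, columns

    def matvec(self, sparse_x):                # sparse_x: {col_index: value}
        """Multiply-accumulate that skips null entries entirely."""
        y = [0.0] * self.n_rows
        for j, xj in sparse_x.items():          # only non-null vector entries
            col = self.columns[j]
            for r, v in zip(col.rows, col.vals):
                y[r] += v * xj                  # only non-null matrix entries
        return y

m = SparseMatrix(3, [SparseColumn({0: 2.0}),
                     SparseColumn({}),
                     SparseColumn({1: -1.0, 2: 4.0})])
print(m.matvec({0: 1.0, 2: 0.5}))               # [2.0, -0.5, 2.0]
```

Storage and work both scale with the number of non-null entries rather than with the full matrix dimensions, which is the compression and complexity reduction the abstract describes.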
  • Publication number: 20200019839
    Abstract: Methods and apparatus for spiking neural network computing based on, e.g., a multi-layer kernel architecture, shared dendritic encoding, and/or thresholding of accumulated spiking signals. In one embodiment, a thresholding accumulator is disclosed that reduces spiking activity between different stages of a neuromorphic processor. Spiking activity can be directly related to power consumption and signal-to-noise ratio (SNR); thus, various embodiments trade off the costs and benefits associated with threshold accumulation. For example, reducing spiking activity (e.g., by a factor of 10) during an encoding stage can have minimal impact on downstream fidelity (SNR) for a decoding stage, while yielding substantial improvements in power consumption. A brief sketch of a thresholding accumulator follows this entry.
    Type: Application
    Filed: July 10, 2019
    Publication date: January 16, 2020
    Inventors: Kwabena Adu Boahen, Sam Brian Fok, Alexander Smith Neckar, Ben Varkey Benjamin Pottayil, Terrence Stewart, Nick Nirmal Oza, Rajit Manohar, Christopher David Eliasmith
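A minimal sketch of a thresholding accumulator, with an invented threshold and input stream: spikes are forwarded downstream only when the accumulated magnitude crosses the threshold, so five weighted input events collapse to three output spikes in the example below.

```python
def thresholding_accumulator(weighted_inputs, threshold):
    """Accumulate weighted spikes; emit one downstream spike per threshold crossing."""
    acc, out = 0.0, []
    for x in weighted_inputs:
        acc += x
        while acc >= threshold:              # positive crossing
            out.append(+1)
            acc -= threshold
        while acc <= -threshold:             # negative crossing
            out.append(-1)
            acc += threshold
    return out

spikes = thresholding_accumulator([0.3, 0.4, 0.5, -2.2, 0.6], threshold=1.0)
print(spikes)   # [1, -1, -1]: 5 input events reduced to 3 output spikes
```

Raising the threshold reduces downstream traffic (and hence power) at the cost of coarser quantization of the accumulated signal, which is the SNR-versus-power trade-off the abstract describes.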
  • Publication number: 20200019837
    Abstract: Methods and apparatus for spiking neural network computing based on, e.g., a multi-layer kernel architecture, shared dendritic encoding, and/or thresholding of accumulated spiking signals. In one exemplary embodiment, a multi-layer mixed-signal kernel is disclosed that uses different characteristics of its constituent stages to perform neuromorphic computing. Specifically, analog-domain processing inexpensively provides diversity, speed, and efficiency, whereas digital-domain processing enables a variety of complex logical manipulations (e.g., digital noise rejection, error correction, arithmetic manipulations, etc.). Isolating different processing techniques into different stages between the layers of a multi-layer kernel results in substantial operational efficiencies. A toy numerical sketch of such staged processing follows this entry.
    Type: Application
    Filed: July 10, 2019
    Publication date: January 16, 2020
    Inventors: Kwabena Adu Boahen, Sam Brian Fok, Alexander Smith Neckar, Ben Varkey Benjamin Pottayil, Terrence Charles Stewart, Nick Nirmal Oza, Rajit Manohar, Christopher David Eliasmith
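A toy numerical model, not the patented circuit: an "analog" stage provides cheap diversity through fixed random gains plus noise, and a "digital" stage rejects that noise by quantizing to clean integer codes. All shapes, noise levels, and step sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
gains = rng.normal(size=16)                  # "analog" diversity: mismatched gains

def analog_stage(x):
    """Cheap, noisy analog encode: fixed random gains plus thermal-like noise."""
    return gains * x + rng.normal(scale=0.05, size=gains.shape)

def digital_stage(u, step=0.1):
    """Digital noise rejection: quantize, discarding noise smaller than step/2."""
    return np.round(u / step).astype(int)

print(digital_stage(analog_stage(0.5))[:5])  # clean integer codes per channel
```

The division of labor mirrors the abstract: the analog stage is inexpensive but imprecise, while the digital stage restores exactness before any downstream logical manipulation.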