Patents by Inventor Alexander Smith Neckar
Alexander Smith Neckar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11783169
Abstract: Systems, apparatus, and methods for thread-based scheduling within a multicore processor. Neural networking uses a network of connected nodes (aka neurons) to loosely model the neuro-biological functionality found in the human brain. Various embodiments of the present disclosure use thread dependency graph analysis to decouple scheduling across many distributed cores. Rather than using thread dependency graphs to generate a sequential ordering for a centralized scheduler, the individual thread dependencies define a count value for each thread at compile-time. Threads and their thread dependency counts are distributed to each core at run-time. Thereafter, each core can dynamically determine which threads to execute based on fulfilled thread dependencies, without requiring a centralized scheduler.
Type: Grant
Filed: January 2, 2023
Date of Patent: October 10, 2023
Assignee: Femtosense, Inc.
Inventors: Sam Brian Fok, Alexander Smith Neckar
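The decentralized scheduling idea in the abstract above can be illustrated with a minimal Python sketch. This is not the patented implementation; the graph, class, and thread names are hypothetical, and only the core mechanism from the abstract is modeled: dependency counts are derived from the graph at "compile-time", and at "run-time" each core marks a thread runnable once its count reaches zero, with no centralized scheduler involved.

```python
from collections import defaultdict

def compile_dependency_counts(dependencies):
    """Compile-time step: derive a remaining-dependency count per thread.
    `dependencies` maps each thread id to the thread ids it depends on."""
    return {tid: len(deps) for tid, deps in dependencies.items()}

class Core:
    """Run-time step: a core tracks counts locally and self-schedules."""
    def __init__(self, counts, dependents):
        self.counts = dict(counts)     # remaining unmet dependencies per thread
        self.dependents = dependents   # reverse edges: tid -> threads waiting on it
        self.ready = [t for t, c in self.counts.items() if c == 0]

    def retire(self, tid):
        # When a thread finishes, decrement its dependents' counts; any
        # thread whose count hits zero becomes runnable locally.
        for dep in self.dependents.get(tid, []):
            self.counts[dep] -= 1
            if self.counts[dep] == 0:
                self.ready.append(dep)

# Hypothetical graph: t2 depends on t0 and t1; t3 depends on t2.
deps = {"t0": [], "t1": [], "t2": ["t0", "t1"], "t3": ["t2"]}
rev = defaultdict(list)
for tid, ds in deps.items():
    for d in ds:
        rev[d].append(tid)

core = Core(compile_dependency_counts(deps), rev)
order = []
while core.ready:
    t = core.ready.pop(0)
    order.append(t)   # "execute" the thread
    core.retire(t)
# order respects all dependencies: t0, t1 before t2, t2 before t3
```

In a real multicore deployment each core would hold only its own slice of the counts and receive retirement notifications from peers; the single-core loop above is just the smallest demonstration of count-driven readiness.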
-
Patent number: 11775810
Abstract: Systems, apparatus, and methods for thread-based scheduling within a multicore processor. Neural networking uses a network of connected nodes (aka neurons) to loosely model the neuro-biological functionality found in the human brain. Various embodiments of the present disclosure use thread dependency graph analysis to decouple scheduling across many distributed cores. Rather than using thread dependency graphs to generate a sequential ordering for a centralized scheduler, the individual thread dependencies define a count value for each thread at compile-time. Threads and their thread dependency counts are distributed to each core at run-time. Thereafter, each core can dynamically determine which threads to execute based on fulfilled thread dependencies, without requiring a centralized scheduler.
Type: Grant
Filed: January 2, 2023
Date of Patent: October 3, 2023
Assignee: Femtosense, Inc.
Inventors: Sam Brian Fok, Alexander Smith Neckar
-
Publication number: 20230153595
Abstract: Systems, apparatus, and methods for thread-based scheduling within a multicore processor. Neural networking uses a network of connected nodes (aka neurons) to loosely model the neuro-biological functionality found in the human brain. Various embodiments of the present disclosure use thread dependency graph analysis to decouple scheduling across many distributed cores. Rather than using thread dependency graphs to generate a sequential ordering for a centralized scheduler, the individual thread dependencies define a count value for each thread at compile-time. Threads and their thread dependency counts are distributed to each core at run-time. Thereafter, each core can dynamically determine which threads to execute based on fulfilled thread dependencies, without requiring a centralized scheduler.
Type: Application
Filed: January 2, 2023
Publication date: May 18, 2023
Applicant: Femtosense, Inc.
Inventors: Sam Brian Fok, Alexander Smith Neckar
-
Publication number: 20230153596
Abstract: Systems, apparatus, and methods for thread-based scheduling within a multicore processor. Neural networking uses a network of connected nodes (aka neurons) to loosely model the neuro-biological functionality found in the human brain. Various embodiments of the present disclosure use thread dependency graph analysis to decouple scheduling across many distributed cores. Rather than using thread dependency graphs to generate a sequential ordering for a centralized scheduler, the individual thread dependencies define a count value for each thread at compile-time. Threads and their thread dependency counts are distributed to each core at run-time. Thereafter, each core can dynamically determine which threads to execute based on fulfilled thread dependencies, without requiring a centralized scheduler.
Type: Application
Filed: January 2, 2023
Publication date: May 18, 2023
Applicant: Femtosense, Inc.
Inventors: Sam Brian Fok, Alexander Smith Neckar
-
Publication number: 20230133088
Abstract: Methods and apparatus for a multi-purpose neural network core and memory. The asynchronous/parallel nature of neural network tasks may allow a neural network IP core to dynamically switch between: a system memory (in whole or part), a neural network processor (in whole or part), and/or a hybrid of system memory and neural network processor. In one specific implementation, the multi-purpose neural network IP core has partitioned its sub-cores into a first set of neural network sub-cores and a second set of memory sub-cores that operate as addressable memory space. Partitioning may be statically assigned at "compile-time", dynamically assigned at "run-time", or semi-statically assigned at "program-time". Any number of considerations may be used to partition the sub-cores; examples of such considerations may include, without limitation: thread priority, memory usage, historic usage, future usage, power consumption, performance, etc.
Type: Application
Filed: October 25, 2022
Publication date: May 4, 2023
Applicant: Femtosense, Inc.
Inventors: Sam Brian Fok, Alexander Smith Neckar, Gabriel Vega
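The sub-core split described in the abstract above can be sketched in a few lines of Python. The abstract deliberately leaves the partitioning policy open, so the demand-proportional policy, function name, and parameters below are all hypothetical; the sketch only shows the shape of a static ("compile-time") assignment of sub-cores to the two roles.

```python
def partition_subcores(n_subcores, memory_demand, compute_demand):
    """Statically split an IP core's sub-cores between addressable memory
    and neural-network compute, proportionally to demand.
    Hypothetical policy; the publication lists many possible criteria
    (thread priority, power, historic usage, ...) without fixing one."""
    total = memory_demand + compute_demand
    n_mem = round(n_subcores * memory_demand / total)
    mem_subcores = list(range(n_mem))               # operate as memory space
    nn_subcores = list(range(n_mem, n_subcores))    # run neural network threads
    return mem_subcores, nn_subcores

# 8 sub-cores, with memory and compute demand in a 3:5 ratio
mem, nn = partition_subcores(8, memory_demand=3, compute_demand=5)
```

A "run-time" or "program-time" variant would re-invoke such a policy as demand shifts, migrating sub-cores between the two sets.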
-
Patent number: 11625592
Abstract: Systems, apparatus, and methods for thread-based scheduling within a multicore processor. Neural networking uses a network of connected nodes (aka neurons) to loosely model the neuro-biological functionality found in the human brain. Various embodiments of the present disclosure use thread dependency graph analysis to decouple scheduling across many distributed cores. Rather than using thread dependency graphs to generate a sequential ordering for a centralized scheduler, the individual thread dependencies define a count value for each thread at compile-time. Threads and their thread dependency counts are distributed to each core at run-time. Thereafter, each core can dynamically determine which threads to execute based on fulfilled thread dependencies, without requiring a centralized scheduler.
Type: Grant
Filed: July 5, 2021
Date of Patent: April 11, 2023
Assignee: Femtosense, Inc.
Inventors: Sam Brian Fok, Alexander Smith Neckar
-
Publication number: 20220012060
Abstract: Systems, apparatus, and methods for thread-based scheduling within a multicore processor. Neural networking uses a network of connected nodes (aka neurons) to loosely model the neuro-biological functionality found in the human brain. Various embodiments of the present disclosure use thread dependency graph analysis to decouple scheduling across many distributed cores. Rather than using thread dependency graphs to generate a sequential ordering for a centralized scheduler, the individual thread dependencies define a count value for each thread at compile-time. Threads and their thread dependency counts are distributed to each core at run-time. Thereafter, each core can dynamically determine which threads to execute based on fulfilled thread dependencies, without requiring a centralized scheduler.
Type: Application
Filed: July 5, 2021
Publication date: January 13, 2022
Applicant: Femtosense, Inc.
Inventors: Sam Brian Fok, Alexander Smith Neckar
-
Publication number: 20220012575
Abstract: Methods and apparatus for localized processing within multicore neural networks. Unlike existing solutions that rely on commodity software and hardware to perform "brute force" large-scale neural network processing, the various techniques described herein map and partition a neural network to fit the hardware limitations of a target platform. Specifically, the various implementations described herein synergistically leverage localization, sparsity, and distributed scheduling to enable neural network processing within embedded hardware applications. As described herein, hardware-aware mapping/partitioning enhances neural network performance by, e.g., avoiding pin-limited memory accesses, processing data in compressed formats/skipping unnecessary operations, and decoupling scheduling between cores.
Type: Application
Filed: July 5, 2021
Publication date: January 13, 2022
Applicant: Femtosense, Inc.
Inventors: Sam Brian Fok, Alexander Smith Neckar, Scott Henry Reid
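The hardware-aware mapping idea in the abstract above, partitioning a network so that each core's slice fits in local memory and avoids pin-limited off-chip accesses, can be illustrated with a minimal greedy sketch. The publication does not describe its mapper in this detail, so the greedy policy, layer names, and sizes below are hypothetical.

```python
def partition_layers(layer_sizes, core_capacity):
    """Greedily pack layer weight footprints onto cores so each core's
    slice fits its local memory. Hypothetical sketch of hardware-aware
    partitioning; a real mapper would also weigh sparsity and scheduling."""
    cores, current, used = [], [], 0
    for name, size in layer_sizes:
        if size > core_capacity:
            raise ValueError(f"{name} exceeds a single core's memory")
        if used + size > core_capacity:
            cores.append(current)      # close out this core's assignment
            current, used = [], 0
        current.append(name)
        used += size
    if current:
        cores.append(current)
    return cores

# Hypothetical layers (footprint in KB) packed onto 100 KB cores
plan = partition_layers(
    [("conv1", 40), ("conv2", 70), ("fc1", 50), ("fc2", 30)],
    core_capacity=100,
)
```

The resulting plan keeps every core's working set on-chip, which is the property the abstract credits with avoiding pin-limited memory accesses.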
-
Publication number: 20220012598
Abstract: Methods and apparatus for matrix and vector storage and operations are disclosed. Vectors and matrices may be represented differently to further enhance performance of operations. Exemplary embodiments compress sparse neural network data structures based on actual, non-null connectivity (rather than all possible connections). This greatly reduces storage requirements as well as computational complexity. In some variants, the compression and reduction in complexity are sized to fit within the memory footprint and processing capabilities of a core. The exemplary compression schemes represent sparse matrices with links to compressed column data structures, where each compressed column data structure only stores non-null entries to optimize column-based lookups of non-null entries. Similarly, sparse vector addressing skips nulled entries to optimize for vector-specific non-null multiply-accumulate operations.
Type: Application
Filed: July 5, 2021
Publication date: January 13, 2022
Applicant: Femtosense, Inc.
Inventors: Sam Brian Fok, Alexander Smith Neckar, Manish Shrivastava
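The compressed-column storage and null-skipping multiply-accumulate described in the abstract above can be sketched as follows. This is a generic compressed-sparse-column illustration, not the patented data layout; the function names and the example matrix are hypothetical.

```python
def compress_columns(dense):
    """Store each column as (row_index, value) pairs for non-null entries
    only, in the spirit of the compressed-column scheme described above."""
    cols = []
    for j in range(len(dense[0])):
        cols.append([(i, row[j]) for i, row in enumerate(dense) if row[j] != 0])
    return cols

def spmv(cols, sparse_vec):
    """y = A @ x with x given as {index: value}. Nulled entries of both
    the matrix columns and the vector are skipped entirely, so work is
    proportional to actual connectivity, not all possible connections."""
    y = {}
    for j, xj in sparse_vec.items():   # skip null vector entries
        for i, aij in cols[j]:         # only stored (non-null) column entries
            y[i] = y.get(i, 0) + aij * xj
    return y

A = [[0, 2, 0],
     [1, 0, 0],
     [0, 3, 4]]
y = spmv(compress_columns(A), {1: 10})   # x = [0, 10, 0]
```

Only column 1 is visited, and within it only its two non-null entries, which is the source of both the storage and the compute savings the abstract claims.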
-
Publication number: 20200019839
Abstract: Methods and apparatus for spiking neural network computing based on, e.g., a multi-layer kernel architecture, shared dendritic encoding, and/or thresholding of accumulated spiking signals. In one embodiment, a thresholding accumulator is disclosed that reduces spiking activity between different stages of a neuromorphic processor. Spiking activity can be directly related to power consumption and signal-to-noise ratio (SNR); thus, various embodiments trade off the costs and benefits associated with threshold accumulation. For example, reducing spiking activity (e.g., by a factor of 10) during an encoding stage can have minimal impact on downstream fidelity (SNR) for a decoding stage, while yielding substantial improvements in power consumption.
Type: Application
Filed: July 10, 2019
Publication date: January 16, 2020
Inventors: Kwabena Adu Boahen, Sam Brian Fok, Alexander Smith Neckar, Ben Varkey Benjamin Pottayil, Terrence Stewart, Nick Nirmal Oza, Rajit Manohar, Christopher David Eliasmith
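The thresholding accumulator described in the abstract above can be sketched with a simple integrate-and-fire-style model: weighted input spikes accumulate, and an output spike is forwarded only when the running sum crosses a threshold, so downstream traffic (and the power it costs) shrinks. The class name, parameter, and input sequence are hypothetical.

```python
class ThresholdingAccumulator:
    """Accumulate weighted input spikes; emit an output spike only when
    the running sum crosses the threshold. Sketch of the mechanism in
    the abstract, with hypothetical parameter values."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.acc = 0.0

    def push(self, weight):
        """Absorb one weighted input spike; return output spikes emitted."""
        self.acc += weight
        spikes = 0
        while self.acc >= self.threshold:   # one output spike per crossing
            self.acc -= self.threshold      # keep the residual charge
            spikes += 1
        return spikes

acc = ThresholdingAccumulator(threshold=10.0)
out = [acc.push(w) for w in [3, 4, 5, 9, 1]]   # 5 input spikes
# out == [0, 0, 1, 1, 0]: 2 output spikes for 5 inputs
```

Raising the threshold suppresses more inter-stage spikes at the cost of coarser downstream information, which is exactly the power-versus-SNR trade-off the abstract describes.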
-
Publication number: 20200019837
Abstract: Methods and apparatus for spiking neural network computing based on, e.g., a multi-layer kernel architecture, shared dendritic encoding, and/or thresholding of accumulated spiking signals. In one exemplary embodiment, a multi-layer mixed-signal kernel is disclosed that uses different characteristics of its constituent stages to perform neuromorphic computing. Specifically, analog-domain processing inexpensively provides diversity, speed, and efficiency, whereas digital-domain processing enables a variety of complex logical manipulations (e.g., digital noise rejection, error correction, arithmetic manipulations, etc.). Isolating different processing techniques into different stages between the layers of a multi-layer kernel results in substantial operational efficiencies.
Type: Application
Filed: July 10, 2019
Publication date: January 16, 2020
Inventors: Kwabena Adu Boahen, Sam Brian Fok, Alexander Smith Neckar, Ben Varkey Benjamin Pottayil, Terrence Charles Stewart, Nick Nirmal Oza, Rajit Manohar, Christopher David Eliasmith