Patents by Inventor Heather Marie Ames
Heather Marie Ames has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240289625
Abstract: Lifelong Deep Neural Network (L-DNN) technology revolutionizes Deep Learning by enabling fast, post-deployment learning without extensive training, heavy computing resources, or massive data storage. It pairs a representation-rich, DNN-based subsystem (Module A) with a fast-learning subsystem (Module B) to learn new features quickly without forgetting previously learned features. Compared to a conventional DNN, an L-DNN uses much less data to build robust networks, requires dramatically shorter training time, and learns on-device instead of on servers. It can add new knowledge without re-training or storing data. As a result, an edge device with L-DNN can learn continuously after deployment, eliminating massive costs in data collection and annotation, memory and data storage, and compute power. This fast, local, on-device learning can be used for security, supply chain monitoring, disaster and emergency response, and drone-based inspection of infrastructure and properties, among other applications.
Type: Application
Filed: January 31, 2024
Publication date: August 29, 2024
Applicant: Neurala, Inc.
Inventors: Matthew Luciw, Santiago Olivera, Anatoly Gorshechnikov, Jeremy Wurbs, Heather Marie Ames, Massimiliano Versace
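The two-module split described in the abstract can be illustrated with a minimal sketch: a frozen feature extractor stands in for Module A, and a fast, incrementally trained nearest-prototype classifier stands in for Module B. All names here (`module_a`, `ModuleB`, `learn`, `predict`) are illustrative assumptions, not the patented implementation; the point is only that Module B can add classes after deployment without retraining Module A or disturbing earlier prototypes.

```python
def module_a(raw_input):
    """Stand-in for a frozen DNN feature extractor (Module A).

    In a real system this would be a pretrained network; here it simply
    passes the feature vector through unchanged.
    """
    return raw_input


class ModuleB:
    """Fast-learning subsystem (Module B) sketched as a nearest-prototype
    classifier: one running-mean feature prototype per class.

    Learning a new class or example never modifies other classes'
    prototypes, so previously learned knowledge is not forgotten.
    """

    def __init__(self):
        self.prototypes = {}  # label -> (mean feature vector, sample count)

    def learn(self, features, label):
        if label not in self.prototypes:
            self.prototypes[label] = (list(features), 1)
        else:
            mean, n = self.prototypes[label]
            updated = [(m * n + f) / (n + 1) for m, f in zip(mean, features)]
            self.prototypes[label] = (updated, n + 1)

    def predict(self, features):
        def sq_dist(label):
            mean, _ = self.prototypes[label]
            return sum((m - f) ** 2 for m, f in zip(mean, features))
        return min(self.prototypes, key=sq_dist)


b = ModuleB()
b.learn(module_a([0.0, 1.0]), "cat")
b.learn(module_a([1.0, 0.0]), "dog")
print(b.predict(module_a([0.1, 0.9])))  # nearest prototype -> cat

# New knowledge added "post-deployment", with no retraining of Module A:
b.learn(module_a([1.0, 1.0]), "bird")
print(b.predict(module_a([0.9, 0.95])))  # nearest prototype -> bird
```

In this toy version, one labeled example per class suffices to start classifying, which mirrors the abstract's claim of learning from much less data than conventional DNN training.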
-
Patent number: 11928602
Abstract: Lifelong Deep Neural Network (L-DNN) technology revolutionizes Deep Learning by enabling fast, post-deployment learning without extensive training, heavy computing resources, or massive data storage. It pairs a representation-rich, DNN-based subsystem (Module A) with a fast-learning subsystem (Module B) to learn new features quickly without forgetting previously learned features. Compared to a conventional DNN, an L-DNN uses much less data to build robust networks, requires dramatically shorter training time, and learns on-device instead of on servers. It can add new knowledge without re-training or storing data. As a result, an edge device with L-DNN can learn continuously after deployment, eliminating massive costs in data collection and annotation, memory and data storage, and compute power. This fast, local, on-device learning can be used for security, supply chain monitoring, disaster and emergency response, and drone-based inspection of infrastructure and properties, among other applications.
Type: Grant
Filed: May 9, 2018
Date of Patent: March 12, 2024
Assignee: Neurala, Inc.
Inventors: Matthew Luciw, Santiago Olivera, Anatoly Gorshechnikov, Jeremy Wurbs, Heather Marie Ames, Massimiliano Versace
-
Publication number: 20180330238
Abstract: Lifelong Deep Neural Network (L-DNN) technology revolutionizes Deep Learning by enabling fast, post-deployment learning without extensive training, heavy computing resources, or massive data storage. It pairs a representation-rich, DNN-based subsystem (Module A) with a fast-learning subsystem (Module B) to learn new features quickly without forgetting previously learned features. Compared to a conventional DNN, an L-DNN uses much less data to build robust networks, requires dramatically shorter training time, and learns on-device instead of on servers. It can add new knowledge without re-training or storing data. As a result, an edge device with L-DNN can learn continuously after deployment, eliminating massive costs in data collection and annotation, memory and data storage, and compute power. This fast, local, on-device learning can be used for security, supply chain monitoring, disaster and emergency response, and drone-based inspection of infrastructure and properties, among other applications.
Type: Application
Filed: May 9, 2018
Publication date: November 15, 2018
Inventors: Matthew Luciw, Santiago Olivera, Anatoly Gorshechnikov, Jeremy Wurbs, Heather Marie Ames, Massimiliano Versace
-
Publication number: 20170076194
Abstract: Conventionally, robots are typically either programmed to complete tasks using a programming language (either text or graphical), shown what to do for repetitive tasks, or operated remotely by a user. The present technology replaces or augments conventional robot programming and control by enabling a user to define a hardware-agnostic brain that uses Artificial Intelligence (AI) systems, machine vision systems, and neural networks to control a robot based on sensory input acquired by the robot's sensors. The interface for defining the brain allows the user to create behaviors from combinations of sensor stimuli and robot actions, or responses, and to group these behaviors to form brains. An Application Program Interface (API) underneath the interface translates the behaviors' inputs and outputs into API calls and commands specific to particular robots. This allows the user to port brains among different types of robots without knowing the specifics of each robot's commands.
Type: Application
Filed: November 4, 2016
Publication date: March 16, 2017
Inventors: Massimiliano Versace, Roger Matus, Alexandrea Defreitas, John Michael Amadeo, Tim Seemann, Ethan Marsh, Heather Marie Ames, Anatoli Gorchetchnikov
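The abstract's separation between a hardware-agnostic brain and robot-specific API calls can be sketched in a few lines. The class names (`Behavior`, `Brain`, `RobotAdapter`) and the command strings below are invented for illustration only, not taken from the patent; the sketch shows how the same stimulus-response brain can be ported across robots by swapping the adapter layer.

```python
class Behavior:
    """Pairs a sensor-stimulus predicate with an abstract action name."""
    def __init__(self, trigger, action):
        self.trigger = trigger  # function(sensor_reading) -> bool
        self.action = action    # abstract action, e.g. "turn_left"


class Brain:
    """A portable collection of behaviors; knows nothing about hardware."""
    def __init__(self, behaviors):
        self.behaviors = behaviors

    def decide(self, sensor_reading):
        for b in self.behaviors:
            if b.trigger(sensor_reading):
                return b.action
        return "idle"


class RobotAdapter:
    """Per-robot layer: translates abstract actions into that robot's
    specific commands, playing the role of the API under the interface."""
    def __init__(self, command_table):
        self.command_table = command_table

    def execute(self, action):
        return self.command_table.get(action, "noop")


# A simple obstacle-avoidance brain built from two behaviors.
avoid_obstacles = Brain([
    Behavior(lambda r: r["distance_cm"] < 20, "turn_left"),
    Behavior(lambda r: r["distance_cm"] >= 20, "forward"),
])

# The same brain drives two different robots via different adapters.
rover = RobotAdapter({"turn_left": "ROVER_CMD_ROT(-90)",
                      "forward": "ROVER_CMD_FWD(1)"})
drone = RobotAdapter({"turn_left": "yaw(-90)",
                      "forward": "pitch_forward()"})

action = avoid_obstacles.decide({"distance_cm": 12})
print(rover.execute(action))  # rover-specific command for the behavior
print(drone.execute(action))  # drone-specific command for the same behavior
```

Because the brain emits only abstract actions, porting it to a new robot means writing one new adapter table rather than reprogramming the behaviors.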
-
Patent number: 9189828
Abstract: An accelerator system is implemented on an expansion card comprising a printed circuit board having (a) one or more graphics processing units (GPUs), (b) two or more associated memory banks (logically or physically partitioned), (c) a specialized controller, and (d) a local bus providing signal coupling compatible with the PCI industry standards. The controller handles most of the primitive operations to set up and control GPU computation. Thus, the computer's central processing unit (CPU) can be dedicated to other tasks. In this case a few controls (simulation start and stop signals from the CPU and the simulation completion signal back to CPU), GPU programs and input/output data are exchanged between CPU and the expansion card. Moreover, since on every time step of the simulation the results from the previous time step are used but not changed, the results are preferably transferred back to CPU in parallel with the computation.
Type: Grant
Filed: January 3, 2014
Date of Patent: November 17, 2015
Assignee: Neurala, Inc.
Inventors: Anatoli Gorchetchnikov, Heather Marie Ames, Massimiliano Versace, Fabrizio Santini
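The abstract's final observation, that each time step reads but never modifies the previous step's results, so the transfer back to the CPU can overlap with computation, can be sketched in plain Python. A worker thread stands in for the device-to-host copy; `simulate_step`, `transfer_to_cpu`, and `run` are illustrative names, not the patented hardware design.

```python
import threading

def simulate_step(state):
    """Toy 'GPU' computation: one simulation time step."""
    return [x + 1 for x in state]

def transfer_to_cpu(result, sink):
    """Toy 'device-to-host' copy, run concurrently with compute."""
    sink.append(list(result))

def run(steps, state):
    """Each iteration ships step t-1's results to the host while step t
    computes. Safe because both threads only read the old state: the
    compute produces a new list rather than mutating the old one."""
    collected = []
    for _ in range(steps):
        # Start transferring the previous step's (now read-only) results...
        t = threading.Thread(target=transfer_to_cpu, args=(state, collected))
        t.start()
        # ...while the next time step computes concurrently.
        state = simulate_step(state)
        t.join()
    collected.append(list(state))  # final step's results
    return collected

print(run(3, [0]))  # initial state plus three simulated steps
```

The key invariant is the one the abstract states: the transfer touches only immutable prior results, so no synchronization beyond the per-step join is needed.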
-
Publication number: 20140192073
Abstract: An accelerator system is implemented on an expansion card comprising a printed circuit board having (a) one or more graphics processing units (GPUs), (b) two or more associated memory banks (logically or physically partitioned), (c) a specialized controller, and (d) a local bus providing signal coupling compatible with the PCI industry standards (this includes but is not limited to PCI-Express, PCI-X, USB 2.0, or functionally similar technologies). The controller handles most of the primitive operations needed to set up and control GPU computation. As a result, the computer's central processing unit (CPU) is freed from this function and is dedicated to other tasks. In this case a few controls (simulation start and stop signals from the CPU and the simulation completion signal back to CPU), GPU programs and input/output data are the information exchanged between CPU and the expansion card.
Type: Application
Filed: January 3, 2014
Publication date: July 10, 2014
Applicant: Neurala Inc.
Inventors: Anatoli Gorchetchnikov, Heather Marie Ames, Massimiliano Versace, Fabrizio Santini
-
Patent number: 8648867
Abstract: An accelerator system is implemented on an expansion card comprising a printed circuit board having (a) one or more graphics processing units (GPUs), (b) two or more associated memory banks (logically or physically partitioned), (c) a specialized controller, and (d) a local bus providing signal coupling compatible with the PCI industry standards (this includes but is not limited to PCI-Express, PCI-X, USB 2.0, or functionally similar technologies). The controller handles most of the primitive operations needed to set up and control GPU computation. As a result, the computer's central processing unit (CPU) is freed from this function and is dedicated to other tasks. In this case a few controls (simulation start and stop signals from the CPU and the simulation completion signal back to CPU), GPU programs and input/output data are the information exchanged between CPU and the expansion card.
Type: Grant
Filed: September 24, 2007
Date of Patent: February 11, 2014
Assignee: Neurala LLC
Inventors: Anatoli Gorchetchnikov, Heather Marie Ames, Massimiliano Versace, Fabrizio Santini
-
Publication number: 20080117220
Abstract: An accelerator system is implemented on an expansion card comprising a printed circuit board having (a) one or more graphics processing units (GPUs), (b) two or more associated memory banks (logically or physically partitioned), (c) a specialized controller, and (d) a local bus providing signal coupling compatible with the PCI industry standards (this includes but is not limited to PCI-Express, PCI-X, USB 2.0, or functionally similar technologies). The controller handles most of the primitive operations needed to set up and control GPU computation. As a result, the computer's central processing unit (CPU) is freed from this function and is dedicated to other tasks. In this case a few controls (simulation start and stop signals from the CPU and the simulation completion signal back to CPU), GPU programs and input/output data are the information exchanged between CPU and the expansion card.
Type: Application
Filed: September 24, 2007
Publication date: May 22, 2008
Applicant: Neurala LLC
Inventors: Anatoli Gorchetchnikov, Heather Marie Ames, Massimiliano Versace, Fabrizio Santini
-
Patent number: RE48438
Abstract: An accelerator system is implemented on an expansion card comprising a printed circuit board having (a) one or more graphics processing units (GPUs), (b) two or more associated memory banks (logically or physically partitioned), (c) a specialized controller, and (d) a local bus providing signal coupling compatible with the PCI industry standards. The controller handles most of the primitive operations to set up and control GPU computation. Thus, the computer's central processing unit (CPU) can be dedicated to other tasks. In this case a few controls (simulation start and stop signals from the CPU and the simulation completion signal back to CPU), GPU programs and input/output data are exchanged between CPU and the expansion card. Moreover, since on every time step of the simulation the results from the previous time step are used but not changed, the results are preferably transferred back to CPU in parallel with the computation.
Type: Grant
Filed: November 9, 2017
Date of Patent: February 16, 2021
Assignee: Neurala, Inc.
Inventors: Anatoli Gorchetchnikov, Heather Marie Ames, Massimiliano Versace, Fabrizio Santini
-
Patent number: RE49461
Abstract: An accelerator system is implemented on an expansion card comprising a printed circuit board having (a) one or more graphics processing units (GPUs), (b) two or more associated memory banks (logically or physically partitioned), (c) a specialized controller, and (d) a local bus providing signal coupling compatible with the PCI industry standards. The controller handles most of the primitive operations to set up and control GPU computation. Thus, the computer's central processing unit (CPU) can be dedicated to other tasks. In this case a few controls (simulation start and stop signals from the CPU and the simulation completion signal back to CPU), GPU programs and input/output data are exchanged between CPU and the expansion card. Moreover, since on every time step of the simulation the results from the previous time step are used but not changed, the results are preferably transferred back to CPU in parallel with the computation.
Type: Grant
Filed: December 29, 2020
Date of Patent: March 14, 2023
Assignee: Neurala, Inc.
Inventors: Anatoli Gorchetchnikov, Heather Marie Ames, Massimiliano Versace, Fabrizio Santini