Patents by Inventor Richard Richmond

Richard Richmond has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11954879
    Abstract: Methods, apparatus, systems, and articles of manufacture to optimize pipeline execution are disclosed. An example apparatus includes at least one memory, machine readable instructions, and processor circuitry to execute the machine readable instructions to determine a value associated with a first location of a first pixel of a first image and a second location of a second pixel of a second image by calculating a matching cost between the first location and the second location, generate a disparity map including the value, and determine a minimum value based on the disparity map corresponding to a difference in horizontal coordinates between the first location and the second location.
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: April 9, 2024
    Assignee: MOVIDIUS LTD.
    Inventors: Vasile Toma, Richard Richmond, Fergal Connor, Brendan Barry
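
    The pipeline in the abstract above is a classic stereo-matching flow: compute a per-pixel matching cost against candidate locations in the second image, collect the costs into a disparity (cost) map, and take the cost-minimizing disparity as the difference in horizontal coordinates. The following is a minimal, unoptimized Python sketch of that idea using absolute-difference costs; the function names and the plain nested loops are illustrative assumptions, not the patented hardware pipeline.

```python
# Minimal stereo block-matching sketch: absolute-difference matching cost,
# a per-pixel cost search over disparities, and winner-takes-all selection.
# Illustrative only -- not the patented pipeline.

def matching_cost(left, right, row, col, disparity):
    """Cost of matching left[row][col] against right[row][col - disparity]."""
    return abs(left[row][col] - right[row][col - disparity])

def disparity_map(left, right, max_disparity):
    rows, cols = len(left), len(left[0])
    result = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Evaluate each candidate disparity and keep the horizontal
            # offset with the minimum matching cost.
            best_d, best_cost = 0, float("inf")
            for d in range(min(max_disparity, c) + 1):
                cost = matching_cost(left, right, r, c, d)
                if cost < best_cost:
                    best_d, best_cost = d, cost
            result[r][c] = best_d
    return result

if __name__ == "__main__":
    left = [[10, 20, 30, 40], [15, 25, 35, 45]]
    right = [[20, 30, 40, 40], [25, 35, 45, 45]]  # roughly the left image shifted by one pixel
    print(disparity_map(left, right, max_disparity=2))
```
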
  • Patent number: 11768689
    Abstract: The present application discloses a computing device that can provide a low-power, highly capable computing platform for computational imaging. The computing device can include one or more processing units, for example one or more vector processors and one or more hardware accelerators, an intelligent memory fabric, a peripheral device, and a power management module. The computing device can communicate with external devices, such as one or more image sensors, an accelerometer, a gyroscope, or any other suitable sensor devices.
    Type: Grant
    Filed: November 12, 2021
    Date of Patent: September 26, 2023
    Assignee: Movidius Limited
    Inventors: Brendan Barry, Richard Richmond, Fergal Connor, David Moloney
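
    As a rough way to visualize the composition described in the abstract above, the sketch below models the device as a configuration object grouping vector processors, hardware accelerators, a memory fabric, peripherals, power management, and attached sensors. All class and field names are invented for illustration and do not correspond to any actual Movidius hardware interface.

```python
# Illustrative composition of the computing platform described in the abstract.
# Every name and number here is hypothetical.
from dataclasses import dataclass, field

@dataclass
class ComputingDevice:
    vector_processors: int = 12                          # bank of vector units
    hardware_accelerators: list = field(default_factory=list)
    memory_fabric_kb: int = 2048                         # shared intelligent memory fabric
    peripherals: list = field(default_factory=list)
    power_islands: list = field(default_factory=list)    # managed by the power module
    sensors: list = field(default_factory=list)          # image sensors, IMU, etc.

device = ComputingDevice(
    hardware_accelerators=["warp", "corner_detect", "stereo"],
    peripherals=["mipi_csi", "spi", "i2c"],
    power_islands=["cpu", "accelerators", "fabric"],
    sensors=["camera_left", "camera_right", "accelerometer", "gyroscope"],
)
print(device)
```
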
  • Patent number: 11620726
    Abstract: Methods, systems, apparatus, and articles of manufacture to reduce memory latency when fetching pixel kernels are disclosed. An example apparatus includes first interface circuitry to receive a first request from a hardware accelerator at a first time including first coordinates of a first pixel disposed in a first image block, second interface circuitry to receive a second request including second coordinates from the hardware accelerator at a second time after the first time, and kernel retriever circuitry to, in response to the second request, determine whether the first image block is in cache storage based on a mapping of the second coordinates to a block tag, and, in response to determining that the first image block is in the cache storage, access, in parallel, two or more memory devices associated with the cache storage to transfer a plurality of image blocks including the first image block to the hardware accelerator.
    Type: Grant
    Filed: October 26, 2021
    Date of Patent: April 4, 2023
    Assignee: Movidius Limited
    Inventors: Richard Boyd, Richard Richmond
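
    The mechanism in the abstract above amounts to a block cache keyed by a tag derived from pixel coordinates: a request's coordinates map to a block tag, the cache is checked for that tag, and on a hit the banked storage is read in parallel to return the requested block and its neighbours. The Python sketch below is a minimal software analogue under those assumptions; the tag scheme (block-aligned coordinates) and the "parallel" bank read simulated with a thread pool are illustrative choices, not the patented circuitry.

```python
# Software analogue of a pixel-kernel block cache keyed by coordinate-derived tags.
# The banking and "parallel" reads are simulated; real hardware uses multiple
# physical memory devices. Names and parameters are illustrative.
from concurrent.futures import ThreadPoolExecutor

BLOCK_W, BLOCK_H = 8, 8   # assumed image-block granularity

def block_tag(x, y):
    """Map pixel coordinates to the tag of the block containing them."""
    return (x // BLOCK_W, y // BLOCK_H)

class BlockCache:
    def __init__(self, fetch_block, num_banks=4):
        self.fetch_block = fetch_block                 # callback into backing memory
        self.banks = [dict() for _ in range(num_banks)]

    def _bank(self, tag):
        return self.banks[hash(tag) % len(self.banks)]

    def _read(self, tag):
        bank = self._bank(tag)
        if tag not in bank:                            # miss: fill from backing store
            bank[tag] = self.fetch_block(tag)
        return bank[tag]

    def request(self, x, y):
        tag = block_tag(x, y)
        # Return the requested block plus its horizontal neighbour, reading the
        # banks "in parallel" (a thread pool stands in for the hardware ports).
        neighbours = [tag, (tag[0] + 1, tag[1])]
        with ThreadPoolExecutor(max_workers=len(neighbours)) as pool:
            return list(pool.map(self._read, neighbours))

# Usage: the backing "memory" just labels blocks by their tag.
cache = BlockCache(fetch_block=lambda tag: f"block{tag}")
print(cache.request(13, 5))   # first request fills the cache
print(cache.request(14, 6))   # same block: served from the cache
```
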
  • Publication number: 20230084866
    Abstract: Methods, apparatus, systems, and articles of manufacture to optimize pipeline execution are disclosed. An example apparatus includes at least one memory, machine readable instructions, and processor circuitry to execute the machine readable instructions to determine a value associated with a first location of a first pixel of a first image and a second location of a second pixel of a second image by calculating a matching cost between the first location and the second location, generate a disparity map including the value, and determine a minimum value based on the disparity map corresponding to a difference in horizontal coordinates between the first location and the second location.
    Type: Application
    Filed: June 24, 2022
    Publication date: March 16, 2023
    Inventors: Vasile Toma, Richard Richmond, Fergal Connor, Brendan Barry
  • Publication number: 20230004430
    Abstract: Technology for estimating neural network (NN) power profiles includes obtaining a plurality of workloads for a compiled NN model, the plurality of workloads determined for a hardware execution device, determining a hardware efficiency factor for the compiled NN model, and generating, based on the hardware efficiency factor, a power profile for the compiled NN model on one or more of a per-layer basis or a per-workload basis. The hardware efficiency factor can be determined based on a hardware efficiency measurement and a hardware utilization measurement, and can be determined on a per-workload basis. A configuration file can be provided for generating the power profile, and an output visualization of the power profile can be generated. Further, feedback information can be generated to perform one or more of selecting a hardware device, optimizing a breakdown of workloads, optimizing a scheduling of tasks, or confirming a hardware device design.
    Type: Application
    Filed: July 2, 2022
    Publication date: January 5, 2023
    Inventors: Richard Richmond, Eric Luk, Lingdan Zeng, Lance Hacking, Alessandro Palla, Mohamed Elmalaki, Sara Almalih
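
    In outline, the abstract above describes scaling a nominal per-workload power figure by a hardware efficiency factor derived from measured efficiency and utilization, then rolling the results up per layer. The sketch below shows that arithmetic under assumed inputs; the field names, the example numbers, and the simple product used for the efficiency factor are assumptions for illustration, not the disclosed method.

```python
# Hedged sketch of building a per-layer power profile from per-workload
# estimates scaled by a hardware efficiency factor. All numbers, field names,
# and the efficiency-factor formula are illustrative assumptions.
from collections import defaultdict

# (layer, nominal_power_mw, hardware_efficiency, hardware_utilization)
workloads = [
    ("conv1", 420.0, 0.82, 0.91),
    ("conv1", 380.0, 0.79, 0.88),
    ("fc1",   150.0, 0.65, 0.50),
]

def efficiency_factor(efficiency, utilization):
    # Assumed combination: scale measured efficiency by how busy the unit was.
    return efficiency * utilization

def power_profile(workloads):
    per_workload = []
    per_layer = defaultdict(float)
    for layer, nominal_mw, eff, util in workloads:
        estimate = nominal_mw * efficiency_factor(eff, util)
        per_workload.append((layer, round(estimate, 1)))
        per_layer[layer] += estimate
    return per_workload, dict(per_layer)

if __name__ == "__main__":
    by_workload, by_layer = power_profile(workloads)
    print("per-workload:", by_workload)
    print("per-layer   :", {k: round(v, 1) for k, v in by_layer.items()})
```
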
  • Publication number: 20220391710
    Abstract: Systems, apparatuses and methods may provide for technology that determines a complexity of a task associated with a neural network workload and generates a hardware efficiency estimate for the task, wherein the hardware efficiency estimate is generated via a neural network based cost model if the complexity exceeds a threshold, and wherein the hardware efficiency estimate is generated via a cost function if the complexity does not exceed the threshold. In one example, the technology trains the neural network based cost model based on one or more of hardware profile data or register-transfer level (RTL) data.
    Type: Application
    Filed: August 18, 2022
    Publication date: December 8, 2022
    Applicant: Intel Corporation
    Inventors: Alessandro Palla, Ian Frederick Hunter, Richard Richmond, Cormac Brick, Sebastian Eusebiu Nagy
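
    The core decision in the abstract above is simple to state: estimate a task's complexity, route it to a learned (neural-network based) cost model when the complexity exceeds a threshold, and fall back to a closed-form cost function otherwise. The sketch below captures only that routing logic; the complexity metric, the threshold value, and both estimators are stand-ins invented for illustration, not the disclosed models.

```python
# Illustrative routing between a learned cost model and an analytical cost
# function based on task complexity. Metric, threshold, and estimators are
# placeholders.

COMPLEXITY_THRESHOLD = 1_000_000  # assumed threshold on multiply-accumulate count

def task_complexity(task):
    """Rough complexity proxy: number of MACs in the workload."""
    return task["macs"]

def analytical_cost(task):
    # Simple roofline-style estimate: cycles bounded by compute or bandwidth.
    compute_cycles = task["macs"] / task["macs_per_cycle"]
    memory_cycles = task["bytes_moved"] / task["bytes_per_cycle"]
    return max(compute_cycles, memory_cycles)

def learned_cost(task):
    # Stand-in for a trained NN cost model (e.g. one fit to hardware profile
    # or RTL simulation data, as the abstract mentions).
    return 1.07 * analytical_cost(task)  # pretend the NN corrects a fixed bias

def hardware_efficiency_estimate(task):
    if task_complexity(task) > COMPLEXITY_THRESHOLD:
        return learned_cost(task)
    return analytical_cost(task)

task = {"macs": 2_500_000, "macs_per_cycle": 256,
        "bytes_moved": 1_200_000, "bytes_per_cycle": 64}
print(hardware_efficiency_estimate(task))
```
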
  • Patent number: 11380005
    Abstract: Methods, apparatus, systems, and articles of manufacture to optimize pipeline execution are disclosed. An example apparatus includes a cost computation manager to determine a value associated with a first location of a first pixel of a first image and a second location of a second pixel of a second image by calculating a matching cost between the first location and the second location, and an aggregation generator to generate a disparity map including the value, and determine a minimum value based on the disparity map corresponding to a difference in horizontal coordinates between the first location and the second location.
    Type: Grant
    Filed: May 18, 2018
    Date of Patent: July 5, 2022
    Assignee: Movidius Limited
    Inventors: Vasile Toma, Richard Richmond, Fergal Connor, Brendan Barry
  • Publication number: 20220188969
    Abstract: Methods, systems, apparatus, and articles of manufacture to reduce memory latency when fetching pixel kernels are disclosed. An example apparatus includes first interface circuitry to receive a first request from a hardware accelerator at a first time including first coordinates of a first pixel disposed in a first image block, second interface circuitry to receive a second request including second coordinates from the hardware accelerator at a second time after the first time, and kernel retriever circuitry to, in response to the second request, determine whether the first image block is in cache storage based on a mapping of the second coordinates to a block tag, and, in response to determining that the first image block is in the cache storage, access, in parallel, two or more memory devices associated with the cache storage to transfer a plurality of image blocks including the first image block to the hardware accelerator.
    Type: Application
    Filed: October 26, 2021
    Publication date: June 16, 2022
    Inventors: Richard Boyd, Richard Richmond
  • Publication number: 20220179657
    Abstract: The present application discloses a computing device that can provide a low-power, highly capable computing platform for computational imaging. The computing device can include one or more processing units, for example one or more vector processors and one or more hardware accelerators, an intelligent memory fabric, a peripheral device, and a power management module. The computing device can communicate with external devices, such as one or more image sensors, an accelerometer, a gyroscope, or any other suitable sensor devices.
    Type: Application
    Filed: November 12, 2021
    Publication date: June 9, 2022
    Inventors: Brendan Barry, Richard Richmond, Fergal Connor, David Moloney
  • Patent number: 11188343
    Abstract: The present application discloses a computing device that can provide a low-power, highly capable computing platform for computational imaging. The computing device can include one or more processing units, for example one or more vector processors and one or more hardware accelerators, an intelligent memory fabric, a peripheral device, and a power management module. The computing device can communicate with external devices, such as one or more image sensors, an accelerometer, a gyroscope, or any other suitable sensor devices.
    Type: Grant
    Filed: December 18, 2019
    Date of Patent: November 30, 2021
    Assignee: Movidius Limited
    Inventors: Brendan Barry, Richard Richmond, Fergal Connor, David Moloney
  • Patent number: 11170463
    Abstract: Methods, systems, apparatus, and articles of manufacture to reduce memory latency when fetching pixel kernels are disclosed. An example apparatus includes a prefetch kernel retriever to generate a block tag based on a first request from a hardware accelerator, the first request including first coordinates of a first pixel disposed in a first image block, a memory interface engine to store the first image block including a plurality of pixels including the first pixel in a cache storage based on the block tag, and a kernel retriever to access two or more memory devices included in the cache storage in parallel to transfer a plurality of image blocks including the first image block when a second request is received including second coordinates of a second pixel disposed in the first image block.
    Type: Grant
    Filed: May 18, 2018
    Date of Patent: November 9, 2021
    Assignee: MOVIDIUS LIMITED
    Inventors: Richard Boyd, Richard Richmond
  • Publication number: 20200241881
    Abstract: The present application discloses a computing device that can provide a low-power, highly capable computing platform for computational imaging. The computing device can include one or more processing units, for example one or more vector processors and one or more hardware accelerators, an intelligent memory fabric, a peripheral device, and a power management module. The computing device can communicate with external devices, such as one or more image sensors, an accelerometer, a gyroscope, or any other suitable sensor devices.
    Type: Application
    Filed: December 18, 2019
    Publication date: July 30, 2020
    Inventors: Brendan Barry, Richard Richmond, Fergal Connor, David Moloney
  • Publication number: 20200226776
    Abstract: Methods, apparatus, systems, and articles of manufacture to optimize pipeline execution are disclosed. An example apparatus includes a cost computation manager to determine a value associated with a first location of a first pixel of a first image and a second location of a second pixel of a second image by calculating a matching cost between the first location and the second location, and an aggregation generator to generate a disparity map including the value, and determine a minimum value based on the disparity map corresponding to a difference in horizontal coordinates between the first location and the second location.
    Type: Application
    Filed: May 18, 2018
    Publication date: July 16, 2020
    Inventors: Vasile Toma, Richard Richmond, Fergal Connor, Brendan Barry
  • Publication number: 20200175646
    Abstract: Methods, systems, apparatus, and articles of manufacture to reduce memory latency when fetching pixel kernels are disclosed. An example apparatus includes a prefetch kernel retriever to generate a block tag based on a first request from a hardware accelerator, the first request including first coordinates of a first pixel disposed in a first image block, a memory interface engine to store the first image block including a plurality of pixels including the first pixel in a cache storage based on the block tag, and a kernel retriever to access two or more memory devices included in the cache storage in parallel to transfer a plurality of image blocks including the first image block when a second request is received including second coordinates of a second pixel disposed in the first image block.
    Type: Application
    Filed: May 18, 2018
    Publication date: June 4, 2020
    Inventors: Richard Boyd, Richard Richmond
  • Patent number: 10585803
    Abstract: Cache memory mapping techniques are presented. A cache may contain an index configuration register. The register may configure the locations of an upper index portion and a lower index portion of a memory address. The portions may be combined to create a combined index. The configurable split-index addressing structure may be used, among other applications, to reduce the rate of cache conflicts occurring between multiple processors decoding the same video frame in parallel.
    Type: Grant
    Filed: February 4, 2019
    Date of Patent: March 10, 2020
    Assignee: Movidius Limited
    Inventor: Richard Richmond
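
    The split-index scheme described in the abstract above builds a cache set index from two separately positioned bit fields of the address rather than one contiguous field, with a configuration register selecting where the upper and lower portions sit. The Python sketch below shows how such an index could be assembled from a configurable register; the specific field positions, widths, and example addresses are assumptions for illustration.

```python
# Hedged sketch of configurable split-index cache addressing: the set index is
# formed from an upper and a lower bit field of the address, whose positions
# and widths come from an index-configuration register. Field layout is assumed.

def extract_bits(value, shift, width):
    return (value >> shift) & ((1 << width) - 1)

class IndexConfig:
    """Stand-in for the index configuration register."""
    def __init__(self, lower_shift, lower_width, upper_shift, upper_width):
        self.lower_shift, self.lower_width = lower_shift, lower_width
        self.upper_shift, self.upper_width = upper_shift, upper_width

    def combined_index(self, address):
        lower = extract_bits(address, self.lower_shift, self.lower_width)
        upper = extract_bits(address, self.upper_shift, self.upper_width)
        # Concatenate the two portions: upper bits placed above the lower bits.
        return (upper << self.lower_width) | lower

# Example: 6 low index bits from bits [6..11], 4 high index bits taken from
# bits [20..23] instead of the contiguous bits just above the low field, so
# regions of a frame touched by different decoder cores spread across sets.
cfg = IndexConfig(lower_shift=6, lower_width=6, upper_shift=20, upper_width=4)
for addr in (0x0010_0040, 0x0020_0040, 0x0030_0040):
    print(hex(addr), "-> set", cfg.combined_index(addr))
```
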
  • Patent number: 10521238
    Abstract: The present application discloses a computing device that can provide a low-power, highly capable computing platform for computational imaging. The computing device can include one or more processing units, for example one or more vector processors and one or more hardware accelerators, an intelligent memory fabric, a peripheral device, and a power management module. The computing device can communicate with external devices, such as one or more image sensors, an accelerometer, a gyroscope, or any other suitable sensor devices.
    Type: Grant
    Filed: February 20, 2018
    Date of Patent: December 31, 2019
    Assignee: Movidius Limited
    Inventors: Brendan Barry, Richard Richmond, Fergal Connor, David Moloney
  • Publication number: 20190370005
    Abstract: The present application relates generally to a parallel processing device. The parallel processing device can include a plurality of processing elements, a memory subsystem, and an interconnect system. The memory subsystem can include a plurality of memory slices, at least one of which is associated with one of the plurality of processing elements and comprises a plurality of random access memory (RAM) tiles, each tile having individual read and write ports. The interconnect system is configured to couple the plurality of processing elements and the memory subsystem. The interconnect system includes a local interconnect and a global interconnect.
    Type: Application
    Filed: June 18, 2019
    Publication date: December 5, 2019
    Inventors: David Moloney, Richard Richmond, David Donohoe, Brendan Barry
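
    The memory subsystem described in the abstract above pairs a processing element with a memory slice that is itself built from RAM tiles, each with its own read and write ports, so several accesses can land in the same slice in one cycle as long as they do not contend for the same tile port. The sketch below models that structure and the port-conflict check in plain Python; the slice and tile sizes, the address mapping, and the conflict policy are illustrative assumptions.

```python
# Illustrative model of a memory slice built from RAM tiles with individual
# read/write ports: accesses in one cycle conflict only if they need the same
# port of the same tile. Sizes and address mapping are assumptions.

TILE_WORDS = 1024          # words per RAM tile (assumed)
TILES_PER_SLICE = 4        # tiles per memory slice (assumed)

class MemorySlice:
    def __init__(self):
        self.tiles = [[0] * TILE_WORDS for _ in range(TILES_PER_SLICE)]

    @staticmethod
    def tile_of(addr):
        return (addr // TILE_WORDS) % TILES_PER_SLICE

    def issue(self, accesses):
        """accesses: list of ("read"|"write", addr[, value]) for one cycle.
        Each tile has one read port and one write port, so two reads (or two
        writes) to the same tile conflict; a read plus a write do not."""
        used_ports = set()
        results = []
        for op, addr, *value in accesses:
            tile = self.tile_of(addr)
            port = (tile, op)
            if port in used_ports:
                raise RuntimeError(f"{op} port conflict on tile {tile}")
            used_ports.add(port)
            word = addr % TILE_WORDS
            if op == "write":
                self.tiles[tile][word] = value[0]
            else:
                results.append(self.tiles[tile][word])
        return results

slice_ = MemorySlice()
slice_.issue([("write", 0, 7), ("write", 1024, 9)])    # different tiles: no conflict
print(slice_.issue([("read", 0), ("read", 1024)]))     # prints [7, 9]
```
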
  • Publication number: 20190251032
    Abstract: Cache memory mapping techniques are presented. A cache may contain an index configuration register. The register may configure the locations of an upper index portion and a lower index portion of a memory address. The portions may be combined to create a combined index. The configurable split-index addressing structure may be used, among other applications, to reduce the rate of cache conflicts occurring between multiple processors decoding the same video frame in parallel.
    Type: Application
    Filed: February 4, 2019
    Publication date: August 15, 2019
    Inventor: Richard Richmond
  • Patent number: 10360040
    Abstract: The present application relates generally to a parallel processing device. The parallel processing device can include a plurality of processing elements, a memory subsystem, and an interconnect system. The memory subsystem can include a plurality of memory slices, at least one of which is associated with one of the plurality of processing elements and comprises a plurality of random access memory (RAM) tiles, each tile having individual read and write ports. The interconnect system is configured to couple the plurality of processing elements and the memory subsystem. The interconnect system includes a local interconnect and a global interconnect.
    Type: Grant
    Filed: February 20, 2018
    Date of Patent: July 23, 2019
    Assignee: Movidius, LTD.
    Inventors: David Moloney, Richard Richmond, David Donohoe, Brendan Barry
  • Patent number: 10198359
    Abstract: Cache memory mapping techniques are presented. A cache may contain an index configuration register. The register may configure the locations of an upper index portion and a lower index portion of a memory address. The portions may be combined to create a combined index. The configurable split-index addressing structure may be used, among other applications, to reduce the rate of cache conflicts occurring between multiple processors decoding the same video frame in parallel.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: February 5, 2019
    Assignee: Linear Algebra Technologies, Limited
    Inventor: Richard Richmond