Patents by Inventor David Asher

David Asher has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240149694
    Abstract: The present solution provides a digital first responder cut loop. The present solution can include a system comprising a control unit executing instructions and configured to generate a signal through a harness of an electric vehicle. The signal can be indicative of a state of a cut loop of the electric vehicle. The control unit can be electrically connected to a restraint control module and a battery pack associated with the electric vehicle. The control unit can be configured to detect an event associated with the vehicle or a change in the signal above a threshold level. The control unit can be configured to, responsive to the detection of one of the event or the change in the signal, facilitate electrical disconnection of the restraint control module of the electric vehicle or the battery pack of the electric vehicle.
    Type: Application
    Filed: November 7, 2022
    Publication date: May 9, 2024
    Inventors: Rahul Rajesh Tiwari, Simon David Asher, Martin Brian Majkut, Nicholas Paul Tokarz
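    A minimal, hypothetical Python sketch of the monitoring flow this abstract describes; the class, threshold value, and disconnect call below are illustrative assumptions, not details taken from the application.

      # Hypothetical sketch: watch the cut-loop signal and disconnect on a detected
      # event or on a signal change above a threshold (all names are assumptions).
      class CutLoopController:
          def __init__(self, threshold):
              self.threshold = threshold
              self.last_signal = None

          def on_sample(self, signal, event_detected):
              """Called periodically with the cut-loop signal level."""
              change = 0.0 if self.last_signal is None else abs(signal - self.last_signal)
              self.last_signal = signal
              if event_detected or change > self.threshold:
                  self.disconnect()

          def disconnect(self):
              # Placeholder for electrically disconnecting the restraint control
              # module and/or the battery pack.
              print("Cut loop tripped: disconnecting restraint control module / battery pack")

      controller = CutLoopController(threshold=0.5)
      controller.on_sample(signal=1.0, event_detected=False)  # steady signal: no action
      controller.on_sample(signal=0.1, event_detected=False)  # change above threshold: disconnect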
  • Publication number: 20240140479
    Abstract: An adverse weather operations system includes one or more sensors and a computing device that processes sensor data to assist a vehicle in planning a vehicle path. Weather metrics can be utilized to determine a perception model having the most accurate results for the conditions being encountered by the vehicle.
    Type: Application
    Filed: October 26, 2023
    Publication date: May 2, 2024
    Inventors: Zachary David Asher, Nicholas Alan Goberville, Parth Kadav
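    A minimal sketch, assuming the system keeps per-model accuracy scores for different weather conditions and picks the best-scoring perception model for the conditions currently measured; the data structures, thresholds, and model names are illustrative assumptions only.

      # Hypothetical sketch: choose the perception model with the best validated
      # accuracy for the weather conditions the vehicle is currently encountering.
      MODEL_ACCURACY = {
          # condition -> {model name: validated accuracy}
          "clear": {"camera_only": 0.95, "camera_lidar": 0.93},
          "rain":  {"camera_only": 0.78, "camera_lidar": 0.88},
          "snow":  {"camera_only": 0.61, "camera_lidar": 0.82},
      }

      def classify_conditions(visibility_m, precipitation_mm_h):
          """Very rough weather-metric bucketing (thresholds are assumptions)."""
          if precipitation_mm_h > 2.0 and visibility_m < 500:
              return "snow" if visibility_m < 200 else "rain"
          return "clear"

      def select_perception_model(visibility_m, precipitation_mm_h):
          condition = classify_conditions(visibility_m, precipitation_mm_h)
          scores = MODEL_ACCURACY[condition]
          return max(scores, key=scores.get)

      print(select_perception_model(visibility_m=300, precipitation_mm_h=5.0))  # -> "camera_lidar"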
  • Patent number: 11967227
    Abstract: Methods and systems are provided to generate instructions and share video content following an impact event. They include detecting an impact event associated with a vehicle and transmitting a request to at least one video recording device within an area of interest from the vehicle. They further include receiving video content captured within a time period around the impact event from at least one video recording device. Methods and systems are also provided to generate useful information following an impact event including detecting an impact event associated with a vehicle and determining one or more contextual parameters associated with the impact event. They further include generating display content based on the one or more contextual parameters and displaying the generated display content on a screen within the vehicle.
    Type: Grant
    Filed: December 8, 2022
    Date of Patent: April 23, 2024
    Assignee: Rivian IP Holdings, LLC
    Inventors: Rahul Rajesh Tiwari, Nicholas Paul Tokarz, Martin Brian Majkut, Simon David Asher, Erik Robert Glaser
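    A minimal sketch of the request/receive flow described above, assuming an in-memory list of nearby recording devices; device discovery, networking, and the in-vehicle display step are outside the sketch, and all names are assumptions.

      import math

      # Hypothetical sketch: after an impact, request clips recorded around the
      # impact time from recording devices within an area of interest.
      def distance_m(a, b):
          # Flat-earth approximation is adequate at the scale of an area of interest.
          dx = (a[0] - b[0]) * 111_000
          dy = (a[1] - b[1]) * 111_000 * math.cos(math.radians(a[0]))
          return math.hypot(dx, dy)

      def request_video(impact_location, impact_time_s, devices, radius_m=100, window_s=30):
          """Return clips covering impact_time_s +/- window_s from nearby devices."""
          clips = []
          for device in devices:
              if distance_m(impact_location, device["location"]) <= radius_m:
                  clips.append(device["get_clip"](impact_time_s - window_s,
                                                  impact_time_s + window_s))
          return clips

      devices = [{"location": (37.7750, -122.4195),
                  "get_clip": lambda start, end: f"clip[{start}..{end}]"}]
      print(request_video((37.7749, -122.4194), impact_time_s=1000, devices=devices))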
  • Publication number: 20240069578
    Abstract: A control system is configured to monitor operation of a flow control, like a control valve. These configurations can make use of continuous or real-time data to evaluate the fitness or function of the device under operating conditions. This feature can alert operators to problems or issues with one or more devices, or process lines in total. These problems may, for example, indicate that a valve is incorrectly sized for actual working conditions. As a result, engineers may find that the valve is too big (or oversize) or too small (or undersize) because the design process for layout of the process line relies upon a design load that reflects a future maximum (plus some factor of safety), and not the actual working conditions that might prevail once the device is in service in the field.
    Type: Application
    Filed: August 31, 2022
    Publication date: February 29, 2024
    Inventors: Vladimir Dimitrov Kostadinov, Joseph Georges Shahda, Cyril Nicolas Vlassoff, Jeremy Asher Glaun, David Chunhe Zhou, Christopher Bittner, Justin Walter Betley
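    A minimal sketch, assuming the monitor compares observed valve travel against the range the valve was sized for; the thresholds and field names are assumptions, not details from the published application.

      # Hypothetical sketch: flag a control valve whose real-time operating data
      # suggests it is oversized or undersized for actual working conditions.
      def assess_valve_sizing(travel_samples_pct):
          """travel_samples_pct: observed valve openings (0-100 %) under normal load."""
          avg = sum(travel_samples_pct) / len(travel_samples_pct)
          if avg < 20:
              return "possibly oversized: valve barely opens at working conditions"
          if avg > 80:
              return "possibly undersized: valve runs near fully open"
          return "sizing looks consistent with working conditions"

      print(assess_valve_sizing([12, 15, 18, 10, 14]))   # oversized warning
      print(assess_valve_sizing([45, 55, 60, 50, 48]))   # looks fine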
  • Patent number: 11868262
    Abstract: A memory request, including an address, is accessed. The memory request also specifies a type of an operation (e.g., a read or write) associated with an instance (e.g., a block) of data. A group of caches is selected using a bit or bits in the address. A first hash of the address is performed to select a cache in the group. A second hash of the address is performed to select a set of cache lines in the cache. Unless the operation results in a cache miss, the memory request is processed at the selected cache. When there is a cache miss, a third hash of the address is performed to select a memory controller, and a fourth hash of the address is performed to select a bank group and a bank in memory.
    Type: Grant
    Filed: February 9, 2023
    Date of Patent: January 9, 2024
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Richard E. Kessler, David Asher, Shubhendu S. Mukherjee, Wilson P. Snyder, II, David Carlson, Jason Zebchuk, Isam Akkawi
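    A minimal sketch of the address-hashing sequence the abstract walks through; the hash function, field widths, and structure counts below are arbitrary assumptions used only to make the four steps concrete.

      # Hypothetical sketch of the selection steps described in the abstract:
      # 1) address bit(s) pick a cache group, 2) a first hash picks a cache in the
      # group, 3) a second hash picks a set of cache lines, and on a miss 4) a third
      # and fourth hash pick a memory controller and a bank group / bank.
      NUM_GROUPS, CACHES_PER_GROUP, SETS_PER_CACHE = 2, 8, 1024
      NUM_MEM_CONTROLLERS, BANK_GROUPS, BANKS_PER_GROUP = 4, 4, 4

      def hash_bits(addr, shift, modulus):
          # Stand-in hash: fold a few address fields together (an assumption).
          return ((addr >> shift) ^ (addr >> (shift + 7))) % modulus

      def route_request(addr, is_miss):
          group = (addr >> 6) & (NUM_GROUPS - 1)           # group selected from address bit(s)
          cache = hash_bits(addr, 7, CACHES_PER_GROUP)     # first hash: cache within the group
          cache_set = hash_bits(addr, 10, SETS_PER_CACHE)  # second hash: set of cache lines
          route = {"group": group, "cache": cache, "set": cache_set}
          if is_miss:
              route["mem_controller"] = hash_bits(addr, 20, NUM_MEM_CONTROLLERS)  # third hash
              bank = hash_bits(addr, 13, BANK_GROUPS * BANKS_PER_GROUP)           # fourth hash
              route["bank_group"], route["bank"] = divmod(bank, BANKS_PER_GROUP)
          return route

      print(route_request(0x7f3a_9c40, is_miss=True))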
  • Publication number: 20230229595
    Abstract: Systems and methods of multi-chip processing with reduced latency and congestion. In a multi-chip processing system, each chip includes a plurality of clusters arranged in a mesh design. A respective interconnect controller is disposed at the end of each column. The column is linked to a corresponding remote column in the other chip. A shared cache controller in the column is paired with a corresponding cache controller in the remote column, and the pair of cache controllers is configured to control data caching for the same set of main memory locations. Communications between cross-chip cache controllers are performed within linked columns of clusters via the column-specific inter-chip interconnect controllers.
    Type: Application
    Filed: March 20, 2023
    Publication date: July 20, 2023
    Inventors: Craig BARNER, David ASHER, Richard KESSLER, Bradley DOBBIE, Daniel DEVER, Thomas F. HUMMEL, Isam AKKAWI
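    A minimal sketch of the column-pairing idea, assuming a simple address-to-column mapping; the mesh dimensions, mapping, and routing details are assumptions for illustration only.

      # Hypothetical sketch: each column of clusters owns its own inter-chip
      # interconnect controller, and a cache controller in a column is paired with
      # the cache controller in the same column of the remote chip, so cross-chip
      # coherence traffic for an address stays on that column's link.
      NUM_COLUMNS = 4

      def home_column(addr):
          """Map an address to the column whose cache controllers own it."""
          return (addr >> 6) % NUM_COLUMNS

      def send_cross_chip(addr, message, column_links):
          """Route a coherence message over the link owned by the home column."""
          col = home_column(addr)
          return column_links[col](f"col {col}: {message} for 0x{addr:x}")

      # One link callable per column (placeholders for the interconnect controllers).
      links = [lambda m, c=c: f"link {c} sent [{m}]" for c in range(NUM_COLUMNS)]
      print(send_cross_chip(0x1000_0080, "read-shared", links))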
  • Publication number: 20230185720
    Abstract: A memory request, including an address, is accessed. The memory request also specifies a type of an operation (e.g., a read or write) associated with an instance (e.g., a block) of data. A group of caches is selected using a bit or bits in the address. A first hash of the address is performed to select a cache in the group. A second hash of the address is performed to select a set of cache lines in the cache. Unless the operation results in a cache miss, the memory request is processed at the selected cache. When there is a cache miss, a third hash of the address is performed to select a memory controller, and a fourth hash of the address is performed to select a bank group and a bank in memory.
    Type: Application
    Filed: February 9, 2023
    Publication date: June 15, 2023
    Inventors: Richard E. Kessler, David Asher, Shubhendu S. Mukherjee, Wilson P. Snyder, II, David Carlson, Jason Zebchuk, Isam Akkawi
  • Patent number: 11620223
    Abstract: Systems and methods of multi-chip processing with reduced latency and congestion. In a multi-chip processing system, each chip includes a plurality of clusters arranged in a mesh design. A respective interconnect controller is disposed at the end of each column. The column is linked to a corresponding remote column in the other chip. A shared cache controller in the column is paired with a corresponding cache controller in the remote column, and the pair of cache controllers is configured to control data caching for the same set of main memory locations. Communications between cross-chip cache controllers are performed within linked columns of clusters via the column-specific inter-chip interconnect controllers.
    Type: Grant
    Filed: August 12, 2021
    Date of Patent: April 4, 2023
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Craig Barner, David Asher, Richard Kessler, Bradley Dobbie, Daniel Dever, Thomas F. Hummel, Isam Akkawi
  • Patent number: 11615027
    Abstract: A memory request, including an address, is accessed. The memory request also specifies a type of an operation (e.g., a read or write) associated with an instance (e.g., a block) of data. A group of caches is selected using a bit or bits in the address. A first hash of the address is performed to select a cache in the group. A second hash of the address is performed to select a set of cache lines in the cache. Unless the operation results in a cache miss, the memory request is processed at the selected cache. When there is a cache miss, a third hash of the address is performed to select a memory controller, and a fourth hash of the address is performed to select a bank group and a bank in memory.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: March 28, 2023
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Richard E. Kessler, David Asher, Shubhendu S. Mukherjee, Wilson P. Snyder, II, David Carlson, Jason Zebchuk, Isam Akkawi
  • Patent number: 11379379
    Abstract: Described is a computing system and method for differential cache block sizing for computing systems. The method for differential cache block sizing includes determining, upon a cache miss at a cache, a number of available cache blocks given a payload length of the main memory and a cache block size for the last level cache; generating a main memory request including at least one indicator for a missed cache block and any available cache blocks; sending the main memory request to the main memory to obtain data associated with the missed cache block and each of the any available cache blocks; storing the data received for the missed cache block in the cache; and storing the data received for each of the any available cache blocks in the cache depending on a cache replacement algorithm.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: July 5, 2022
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Shubhendu Mukherjee, David Asher, Thomas F. Hummel
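    A minimal sketch of the fill path the abstract outlines, assuming a 256-byte memory payload and 64-byte cache blocks; the replacement check is reduced to a boolean predicate, and every name here is an assumption.

      # Hypothetical sketch: on a miss, ask memory for the missed block plus any
      # neighboring blocks that fit in one memory payload, then keep the extras
      # only if the replacement policy agrees.
      PAYLOAD_BYTES, BLOCK_BYTES = 256, 64

      def build_request(miss_addr):
          blocks_per_payload = PAYLOAD_BYTES // BLOCK_BYTES
          base = (miss_addr // PAYLOAD_BYTES) * PAYLOAD_BYTES
          blocks = [base + i * BLOCK_BYTES for i in range(blocks_per_payload)]
          return {"missed": (miss_addr // BLOCK_BYTES) * BLOCK_BYTES, "available": blocks}

      def handle_fill(request, cache, should_keep):
          data = {blk: f"data@{blk:#x}" for blk in request["available"]}  # memory returns payload
          cache[request["missed"]] = data[request["missed"]]              # always store missed block
          for blk in request["available"]:
              if blk != request["missed"] and should_keep(blk):           # replacement-policy check
                  cache[blk] = data[blk]

      cache = {}
      handle_fill(build_request(0x1040), cache, should_keep=lambda blk: blk % 128 == 0)
      print(sorted(hex(a) for a in cache))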
  • Patent number: 11327759
    Abstract: Managing the messages associated with memory pages stored in a main memory includes: receiving a message from outside a processor pipeline, and providing at least one low-level instruction to the pipeline for performing an operation indicated by the received message. Executing instructions in the pipeline includes executing a series of low-level instructions, where the series includes a first set of low-level instructions converted from a first high-level instruction and a second set of low-level instructions converted from a second high-level instruction.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: May 10, 2022
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: David Albert Carlson, Shubhendu Sekhar Mukherjee, Michael Bertone, David Asher, Daniel Dever, Bradley D. Dobbie, Thomas Hummel
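    A minimal sketch of the message-handling idea, treating the pipeline as a simple instruction queue; the message type and the low-level instruction names are placeholders invented for illustration, not terms from the patent.

      # Hypothetical sketch: a message arriving from outside the pipeline is
      # converted into one or more low-level instructions, which the pipeline
      # executes alongside instructions converted from high-level instructions.
      def convert_message(message):
          if message["kind"] == "invalidate_page":
              return [("tlb_invalidate", message["page"]), ("sync", None)]
          return []

      def convert_high_level(instruction):
          # Each high-level instruction expands to a set of low-level instructions.
          return [("uop", f"{instruction}_{i}") for i in range(2)]

      pipeline = []
      pipeline += convert_high_level("load_pair")                             # first set
      pipeline += convert_message({"kind": "invalidate_page", "page": 0x4000})
      pipeline += convert_high_level("store_pair")                            # second set
      for op in pipeline:
          print("execute:", op)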
  • Publication number: 20220114101
    Abstract: A memory request, including an address, is accessed. The memory request also specifies a type of an operation (e.g., a read or write) associated with an instance (e.g., a block) of data. A group of caches is selected using a bit or bits in the address. A first hash of the address is performed to select a cache in the group. A second hash of the address is performed to select a set of cache lines in the cache. Unless the operation results in a cache miss, the memory request is processed at the selected cache. When there is a cache miss, a third hash of the address is performed to select a memory controller, and a fourth hash of the address is performed to select a bank group and a bank in memory.
    Type: Application
    Filed: November 18, 2021
    Publication date: April 14, 2022
    Inventors: Richard E. KESSLER, David ASHER, Shubhendu S. MUKHERJEE, Wilson P. SNYDER, II, David CARLSON, Jason ZEBCHUK, Isam AKKAWI
  • Publication number: 20210374057
    Abstract: Systems and methods of multi-chip processing with reduced latency and congestion. In a multi-chip processing system, each chip includes a plurality of clusters arranged in a mesh design. A respective interconnect controller is disposed at the end of each column. The column is linked to a corresponding remote column in the other chip. A shared cache controller in the column is paired with a corresponding cache controller in the remote column, and the pair of cache controllers is configured to control data caching for the same set of main memory locations. Communications between cross-chip cache controllers are performed within linked columns of clusters via the column-specific inter-chip interconnect controllers.
    Type: Application
    Filed: August 12, 2021
    Publication date: December 2, 2021
    Inventors: Craig BARNER, David ASHER, Richard KESSLER, Bradley DOBBIE, Daniel DEVER, Thomas F. HUMMEL, Isam AKKAWI
  • Patent number: 11188466
    Abstract: A memory request, including an address, is accessed. The memory request also specifies a type of an operation (e.g., a read or write) associated with an instance (e.g., a block) of data. A group of caches is selected using a bit or bits in the address. A first hash of the address is performed to select a cache in the group. A second hash of the address is performed to select a set of cache lines in the cache. Unless the operation results in a cache miss, the memory request is processed at the selected cache. When there is a cache miss, a third hash of the address is performed to select a memory controller, and a fourth hash of the address is performed to select a bank group and a bank in memory.
    Type: Grant
    Filed: February 11, 2020
    Date of Patent: November 30, 2021
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Richard E. Kessler, David Asher, Shubhendu S. Mukherjee, Wilson P. Snyder, II, David Carlson, Jason Zebchuk, Isam Akkawi
  • Patent number: 11170630
    Abstract: Aspects provide methods to discreetly and intelligently make a user aware of a service or product offered by a subscriber without distracting the user with a full advertisement. More specifically, aspects provide methods and systems for selectively outputting an audio chime in response to a trigger event, wherein the audio chime provides an indication of a service or product offered by the subscriber. If the user accepts the audio chime, a second chime that further engages the user is output.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: November 9, 2021
    Assignee: BOSE CORPORATION
    Inventors: David Asher, Todd Reily
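    A minimal sketch of the two-stage chime interaction, with the trigger, playback, and acceptance check reduced to placeholders; none of these names come from the patent.

      # Hypothetical sketch: on a trigger event, play a short chime that hints at a
      # subscriber's offering; if the user accepts it, play a second, more engaging
      # chime instead of a full advertisement.
      def play(sound):
          print(f"playing: {sound}")

      def on_trigger_event(event, user_accepts):
          if event == "near_subscriber_location":
              play("first chime (brief indication of the offering)")
              if user_accepts():
                  play("second chime (further engagement)")

      on_trigger_event("near_subscriber_location", user_accepts=lambda: True)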
  • Patent number: 11119929
    Abstract: Systems and methods of multi-chip processing with reduced latency and congestion. In a multi-chip processing system, each chip includes a plurality of clusters arranged in a mesh design. A respective interconnect controller is disposed at the end of each column. The column is linked to a corresponding remote column in the other chip. A shared cache controller in the column is paired with a corresponding cache controller in the remote column, and the pair of cache controllers is configured to control data caching for the same set of main memory locations. Communications between cross-chip cache controllers are performed within linked columns of clusters via the column-specific inter-chip interconnect controllers.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: September 14, 2021
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Craig Barner, David Asher, Richard Kessler, Bradley Dobbie, Daniel Dever, Thomas F. Hummel, Isam Akkawi
  • Publication number: 20210035434
    Abstract: Aspects provide methods to discreetly and intelligently make a user aware of a service or product offered by a subscriber without distracting the user with a full advertisement. More specifically, aspects provide methods and systems for selectively outputting an audio chime in response to a trigger event, wherein the audio chime provides an indication of a service or product offered by the subscriber. If the user accepts the audio chime, a second chime that further engages the user is output.
    Type: Application
    Filed: July 31, 2019
    Publication date: February 4, 2021
    Applicant: BOSE CORPORATION
    Inventors: David ASHER, Todd REILY
  • Publication number: 20200250088
    Abstract: Systems and methods of multi-chip processing with reduced latency and congestion. In a multi-chip processing system, each chip includes a plurality of clusters arranged in a mesh design. A respective interconnect controller is disposed at the end of each column. The column is linked to a corresponding remote column in the other chip. A shared cache controller in the column is paired with a corresponding cache controller in the remote column, and the pair of cache controllers is configured to control data caching for the same set of main memory locations. Communications between cross-chip cache controllers are performed within linked columns of clusters via the column-specific inter-chip interconnect controllers.
    Type: Application
    Filed: January 31, 2019
    Publication date: August 6, 2020
    Inventors: Craig Barner, David Asher, Richard Kessler, Brad Dobbie, Daniel Dever, Tom Hummel, Isam Akkawi
  • Publication number: 20200183844
    Abstract: A memory request, including an address, is accessed. The memory request also specifies a type of an operation (e.g., a read or write) associated with an instance (e.g., a block) of data. A group of caches is selected using a bit or bits in the address. A first hash of the address is performed to select a cache in the group. A second hash of the address is performed to select a set of cache lines in the cache. Unless the operation results in a cache miss, the memory request is processed at the selected cache. When there is a cache miss, a third hash of the address is performed to select a memory controller, and a fourth hash of the address is performed to select a bank group and a bank in memory.
    Type: Application
    Filed: February 11, 2020
    Publication date: June 11, 2020
    Inventors: Richard E. KESSLER, David ASHER, Shubhendu S. MUKHERJEE, Wilson P. SNYDER, II, David CARLSON, Jason ZEBCHUK, Isam AKKAWI
  • Publication number: 20200097292
    Abstract: Managing the messages associated with memory pages stored in a main memory includes: receiving a message from outside a processor pipeline, and providing at least one low-level instruction to the pipeline for performing an operation indicated by the received message. Executing instructions in the pipeline includes executing a series of low-level instructions, where the series includes a first set of low-level instructions converted from a first high-level instruction and a second set of low-level instructions converted from a second high-level instruction.
    Type: Application
    Filed: September 25, 2018
    Publication date: March 26, 2020
    Inventors: David Albert CARLSON, Shubhendu Sekhar MUKHERJEE, Michael BERTONE, David ASHER, Daniel DEVER, Bradley D. DOBBIE, Tom HUMMEL