Patents by Inventor Michael A. O'Connor

Michael A. O'Connor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12223201
    Abstract: A hierarchical network enables access for a stacked memory system including one or more memory dies that each include multiple memory tiles. The processor die includes multiple processing tiles that are stacked with the one or more memory dies. The memory tiles that are vertically aligned with a processing tile are directly coupled to the processing tile and comprise the local memory block for the processing tile. The hierarchical network provides access paths for each processing tile to access the processing tile's local memory block, the local memory block coupled to a different processing tile within the same processing die, memory tiles in a different die stack, and memory tiles in a different device. The ratio of memory bandwidth (bytes) to floating-point operations (B:F) may improve 50× for accessing the local memory block compared with conventional memory. Additionally, the energy consumed to transfer each bit may be reduced by 10×.
    Type: Grant
    Filed: February 9, 2024
    Date of Patent: February 11, 2025
    Assignee: NVIDIA Corporation
    Inventors: William James Dally, Carl Thomas Gray, Stephen W. Keckler, James Michael O'Connor
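    The 50× figure above is a ratio-of-ratios claim; as a back-of-envelope illustration (all numbers below are made up for the sketch, not taken from the patent), the bandwidth-to-FLOP ratio could be computed as:

    ```python
    # Back-of-envelope sketch of the bandwidth-to-FLOP (B:F) ratio the
    # abstract cites; all numbers here are illustrative, not measured.
    def bf_ratio(bytes_per_second, flops_per_second):
        return bytes_per_second / flops_per_second

    # Assume a 1 PFLOP/s processor with 1 TB/s to conventional memory and
    # 50x that bandwidth to its vertically aligned local memory block.
    conventional = bf_ratio(1e12, 1e15)       # 0.001 B:F
    local_block = bf_ratio(50 * 1e12, 1e15)   # 0.05 B:F, a 50x improvement
    ```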
  • Publication number: 20250037186
    Abstract: One embodiment sets forth a technique for performing matrix operations. The technique includes traversing a tree structure to access one or more non-empty regions within a matrix. The tree structure includes a first plurality of nodes and a second plurality of nodes corresponding to non-empty regions in the matrix. The first plurality of nodes includes a first node representing a first region and one or more second nodes that are children of the first node and represent second region(s) with an equal size formed within the first region. The second plurality of nodes includes a third node representing a third region and one or more fourth nodes that are children of the third node and represent fourth region(s) with substantially equal numbers of non-zero matrix values formed within the third region. The technique also includes performing matrix operation(s) based on the non-empty region(s) to generate a matrix operation result.
    Type: Application
    Filed: October 15, 2024
    Publication date: January 30, 2025
    Inventors: Hanrui WANG, James Michael O'CONNOR, Donghyuk LEE
  • Patent number: 12211080
    Abstract: One embodiment sets forth a technique for performing matrix operations. The technique includes traversing a tree structure to access one or more non-empty regions within a matrix. The tree structure includes a first plurality of nodes and a second plurality of nodes corresponding to non-empty regions in the matrix. The first plurality of nodes includes a first node representing a first region and one or more second nodes that are children of the first node and represent second region(s) with an equal size formed within the first region. The second plurality of nodes includes a third node representing a third region and one or more fourth nodes that are children of the third node and represent fourth region(s) with substantially equal numbers of non-zero matrix values formed within the third region. The technique also includes performing matrix operation(s) based on the non-empty region(s) to generate a matrix operation result.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: January 28, 2025
    Assignee: NVIDIA CORPORATION
    Inventors: Hanrui Wang, James Michael O'Connor, Donghyuk Lee
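    The "equal-size children" portion of the tree described above resembles a quadtree that keeps only non-empty quadrants. A minimal sketch of that idea (function and field names are illustrative, not from the patent, and the non-zero-count partitioning of the second plurality of nodes is omitted):

    ```python
    # Hypothetical sketch: a quadtree over a square matrix that records only
    # non-empty regions, plus a traversal that visits the non-empty leaves.
    def build_quadtree(matrix, r0, c0, size, min_size=2):
        """Return a nested dict for a non-empty region, or None if empty."""
        if not any(matrix[r][c] for r in range(r0, r0 + size)
                                for c in range(c0, c0 + size)):
            return None  # empty regions get no node at all
        node = {"r": r0, "c": c0, "size": size, "children": []}
        if size > min_size:
            half = size // 2  # four equal-size child regions
            for dr in (0, half):
                for dc in (0, half):
                    child = build_quadtree(matrix, r0 + dr, c0 + dc, half, min_size)
                    if child is not None:
                        node["children"].append(child)
        return node

    def non_empty_leaves(node):
        """Yield (row, col, size) for each non-empty leaf region."""
        if not node["children"]:
            yield (node["r"], node["c"], node["size"])
        else:
            for child in node["children"]:
                yield from non_empty_leaves(child)
    ```

    A matrix operation would then iterate only over the regions the traversal yields, skipping empty quadrants entirely.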
  • Patent number: 12201499
    Abstract: A myringotomy device includes a housing; an elongated tube extending from the housing; and a retractable cutting tool extendable through the elongated tube, the cutting tool comprising a blade. The cutting tool is configured such that when advanced, the blade of the cutting tool extends beyond a distal end of the elongated tube. The cutting tool is also configured such that when retracted, the blade is retracted into the elongated tube and a fluid conduit is created from the distal end of the elongated tube to the housing.
    Type: Grant
    Filed: January 3, 2023
    Date of Patent: January 21, 2025
    Assignee: Gyrus ACMI, Inc.
    Inventors: Steve Gaynes, Michael O'Connor, Michael DeRossi, Gilberto Cavada, John Morici, Riyad Moe
  • Patent number: 12183280
    Abstract: Embodiments of the disclosed subject matter provide a device that includes an organic light emitting device (OLED), and a drive circuit to control the operation of the OLED, comprising a response time accelerator thin film transistor (TFT) configured to short or reverse bias the OLED for a predetermined period of time during a frame time. Other embodiments include an OLED having a plurality of sub-pixels, where one or more of the sub-pixels configured to emit light of at least a first color comprises a first emissive area and a second emissive area that are independently controllable, where the first emissive area is larger than the second emissive area. The controller is configured to control the second emissive area to have (i) a higher brightness, and/or (ii) a higher current density than the first emissive area for a first sub-pixel luminance level that is less than a maximum luminance.
    Type: Grant
    Filed: June 21, 2022
    Date of Patent: December 31, 2024
    Assignee: Universal Display Corporation
    Inventors: Michael Hack, Michael Stuart Weaver, Nicholas J. Thompson, Michael O'Connor
  • Publication number: 20240416248
    Abstract: A system and a method for providing rewards to a user for in-game achievements across multiple third-party game platforms are disclosed. The system includes a user account module for storing user profiles, an SDK module integrated with each game platform for defining in-game achievements and associated rewards, and an API module for receiving information about these achievements. The API module generates an achievement signal when a user completes an achievement. A processing module receives this signal, determines the corresponding rewards using the SDK module, and adds these rewards to the user's profile via the user account module, providing a personalized gaming experience across multiple platforms.
    Type: Application
    Filed: June 13, 2023
    Publication date: December 19, 2024
    Inventors: Kelvin Troy, John Michael O'Connor
  • Publication number: 20240411709
    Abstract: Embodiments of the present disclosure relate to application partitioning for locality in a stacked memory system. In an embodiment, one or more memory dies are stacked on the processor die. The processor die includes multiple processing tiles and each memory die includes multiple memory tiles. Vertically aligned memory tiles are directly coupled to and comprise the local memory block for a corresponding processing tile. An application program that operates on dense multi-dimensional arrays (matrices) may partition the dense arrays into sub-arrays associated with program tiles. Each program tile is executed by a processing tile using the processing tile's local memory block to process the associated sub-array. Data associated with each sub-array is stored in a local memory block and the processing tile corresponding to the local memory block executes the program tile to process the sub-array data.
    Type: Application
    Filed: August 21, 2024
    Publication date: December 12, 2024
    Inventors: William James Dally, Carl Thomas Gray, Stephen W. Keckler, James Michael O'Connor
  • Publication number: 20240390796
    Abstract: A system and a method for managing an avatar across multiple three-dimensional (3D) rendering platforms are disclosed. The system includes a content database, a content delivery module, a Software Development Kit (SDK) module, and an Application Programming Interface (API) module. The content database stores a 3D avatar model and associated first assets. The SDK module integrates with an in-engine avatar customization module of each 3D rendering platform and allows users to use these assets to customize their avatar during runtime. The API module receives requests for avatar and asset data. The content delivery module fetches requested avatar data and assets in response to requests received at the API module and delivers the requests to the SDK module. Additionally, the content delivery module fetches, stores, and manages second assets used for avatar customization from the SDK module.
    Type: Application
    Filed: May 25, 2023
    Publication date: November 28, 2024
    Inventors: Kelvin Troy, John Michael O'Connor, Kenan Cain Chabuk, Kamuran Chabuk
  • Patent number: 12141229
    Abstract: One embodiment sets forth a technique for performing one or more matrix multiplication operations based on a first matrix and a second matrix. The technique includes receiving data associated with the first matrix from a first traversal engine that accesses nonzero elements included in the first matrix via a first tree structure. The technique also includes performing one or more computations on the data associated with the first matrix and the data associated with the second matrix to produce a plurality of partial results. The technique further includes combining the plurality of partial results into one or more intermediate results and storing the one or more intermediate results in a first buffer memory.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: November 12, 2024
    Assignee: NVIDIA Corporation
    Inventors: Hanrui Wang, James Michael O'Connor, Donghyuk Lee
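    The multiplication flow above (nonzeros of the first matrix supplied by a traversal engine, partial products combined into intermediate results in a buffer) can be sketched as follows; the function name and the flat (row, col, value) feed are illustrative assumptions, standing in for the tree-based traversal engine:

    ```python
    # Illustrative sketch: multiply using only the nonzero elements of the
    # first matrix, as a traversal engine might supply them, accumulating
    # partial products into a buffer of intermediate results.
    def sparse_times_dense(nonzeros, dense, n_rows, n_cols):
        """nonzeros: iterable of (row, col, value) from the first matrix;
        dense: the second matrix as a list of rows."""
        buffer = [[0.0] * n_cols for _ in range(n_rows)]
        for r, k, v in nonzeros:
            for c in range(n_cols):
                # one partial result per nonzero, combined in the buffer
                buffer[r][c] += v * dense[k][c]
        return buffer
    ```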
  • Patent number: 12141451
    Abstract: Embodiments of the present disclosure relate to memory page access instrumentation for generating a memory access profile. The memory access profile may be used to co-locate data near the processing unit that accesses the data, reducing memory access energy by minimizing distances to access data that is co-located with a different processing unit (i.e., remote data). Execution thread arrays and memory pages for execution of a program are partitioned across multiple processing units. The partitions are then each mapped to a specific processing unit to minimize inter-partition traffic given the processing unit physical topology.
    Type: Grant
    Filed: February 1, 2023
    Date of Patent: November 12, 2024
    Assignee: NVIDIA Corporation
    Inventors: Niladrish Chatterjee, Zachary Joseph Susskind, Donghyuk Lee, James Michael O'Connor
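    The co-location step described above — using an access profile to place data near the processing unit that touches it most — can be sketched with a simple greedy placement (names and the event-log input format are illustrative assumptions, not the patent's method):

    ```python
    # Illustrative sketch: given a per-page access profile, place each page
    # with the processing unit that accesses it most often, so most accesses
    # become local rather than remote.
    from collections import Counter, defaultdict

    def place_pages(access_log):
        """access_log: iterable of (processing_unit, page) access events."""
        counts = defaultdict(Counter)
        for unit, page in access_log:
            counts[page][unit] += 1
        # co-locate each page with its most frequent accessor
        return {page: c.most_common(1)[0][0] for page, c in counts.items()}
    ```

    A real system would also weigh the physical topology of the processing units when mapping partitions, as the abstract notes.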
  • Patent number: 12099453
    Abstract: Embodiments of the present disclosure relate to application partitioning for locality in a stacked memory system. In an embodiment, one or more memory dies are stacked on the processor die. The processor die includes multiple processing tiles and each memory die includes multiple memory tiles. Vertically aligned memory tiles are directly coupled to and comprise the local memory block for a corresponding processing tile. An application program that operates on dense multi-dimensional arrays (matrices) may partition the dense arrays into sub-arrays associated with program tiles. Each program tile is executed by a processing tile using the processing tile's local memory block to process the associated sub-array. Data associated with each sub-array is stored in a local memory block and the processing tile corresponding to the local memory block executes the program tile to process the sub-array data.
    Type: Grant
    Filed: March 30, 2022
    Date of Patent: September 24, 2024
    Assignee: NVIDIA Corporation
    Inventors: William James Dally, Carl Thomas Gray, Stephen W. Keckler, James Michael O'Connor
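    The partitioning of dense arrays into sub-arrays associated with program tiles, as described above, can be sketched like this (tile shapes and the dict keyed by tile coordinates are illustrative assumptions):

    ```python
    # Illustrative sketch: split a dense 2D array into sub-arrays, one per
    # program tile, so each processing tile can work from its local memory block.
    def partition_into_subarrays(array, tile_rows, tile_cols):
        """Return {(tile_row, tile_col): sub_array} covering the whole array."""
        tiles = {}
        for r0 in range(0, len(array), tile_rows):
            for c0 in range(0, len(array[0]), tile_cols):
                tiles[(r0 // tile_rows, c0 // tile_cols)] = [
                    row[c0:c0 + tile_cols] for row in array[r0:r0 + tile_rows]
                ]
        return tiles
    ```

    Each sub-array would then be stored in the local memory block of the processing tile that executes the corresponding program tile.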
  • Publication number: 20240281300
    Abstract: An initiating processing tile generates an offload request that may include a processing tile ID, source data needed for the computation, program counter, and destination location where the computation result is stored. The offload processing tile may execute the offloaded computation. Alternatively, the offload processing tile may deny the offload request based on congestion criteria. The congestion criteria may include a processing workload measure, whether a resource needed to perform the computation is available, and an offload request buffer fullness. In an embodiment, the denial message that is returned to the initiating processing tile may include the data needed to perform the computation (read from the local memory of the offload processing tile). Returning the data with the denial message results in the same inter-processing tile traffic that would occur if no attempt to offload the computation were initiated.
    Type: Application
    Filed: December 4, 2023
    Publication date: August 22, 2024
    Inventors: Donghyuk Lee, Leul Wuletaw Belayneh, Niladrish Chatterjee, James Michael O'Connor
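    The accept/deny decision above, including the detail that a denial carries the source data back so total traffic matches the no-offload case, can be sketched as follows (thresholds, field names, and the dict-based messages are all illustrative assumptions):

    ```python
    # Hypothetical sketch of an offload processing tile's accept/deny logic.
    def handle_offload(request, workload, resource_free, buffer_fullness,
                       local_memory, max_workload=0.9, max_fullness=0.8):
        """Accept the offloaded computation, or deny it under congestion."""
        congested = (workload > max_workload
                     or not resource_free
                     or buffer_fullness > max_fullness)
        if congested:
            # Deny, but return the source data read from local memory so the
            # initiating tile can compute itself -- the same inter-tile
            # traffic as if no offload had been attempted.
            data = [local_memory[addr] for addr in request["source_addrs"]]
            return {"status": "denied", "data": data}
        return {"status": "accepted"}
    ```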
  • Publication number: 20240256153
    Abstract: Embodiments of the present disclosure relate to memory page access instrumentation for generating a memory access profile. The memory access profile may be used to co-locate data near the processing unit that accesses the data, reducing memory access energy by minimizing distances to access data that is co-located with a different processing unit (i.e., remote data). Execution thread arrays and memory pages for execution of a program are partitioned across multiple processing units. The partitions are then each mapped to a specific processing unit to minimize inter-partition traffic given the processing unit physical topology.
    Type: Application
    Filed: February 1, 2023
    Publication date: August 1, 2024
    Inventors: Niladrish Chatterjee, Zachary Joseph Susskind, Donghyuk Lee, James Michael O'Connor
  • Patent number: 12046794
    Abstract: A balun is enhanced with design features that extend the operational bandwidth of the balun allowing the balun to operate at lower frequencies. The design enhancements also suppress resonances that otherwise cause sudden power drops at a resonance frequency while a load is connected between the balun's differential outputs.
    Type: Grant
    Filed: February 17, 2022
    Date of Patent: July 23, 2024
    Assignee: MACOM Technology Solutions Holdings, Inc.
    Inventors: Michael O'Connor, Jean-Marc Mourant
  • Publication number: 20240231753
    Abstract: Intelligent voice response systems and methods may include one or more machine readable instructions stored in a memory that cause a processor to receive an automated input including at least two of the following: a vehicle metric of the vehicle, a driving score of a user of the vehicle, a driving time during a trip of the vehicle, a geographical location of the vehicle, an adverse weather event within a predetermined distance of the vehicle, a historical driving route of the vehicle, a predicted driving route of the vehicle within a first predetermined period of time, or a sound within a predetermined distance of the vehicle. An action may be generated and implemented, and receipt of an affirmative response from the user may be determined. An alert may then be generated for the user based on the affirmative response.
    Type: Application
    Filed: October 20, 2023
    Publication date: July 11, 2024
    Inventors: Michael Steven Watson, Tai-Yip Kwok, Michael O'Connor
  • Publication number: 20240211166
    Abstract: A hierarchical network enables access for a stacked memory system including one or more memory dies that each include multiple memory tiles. The processor die includes multiple processing tiles that are stacked with the one or more memory dies. The memory tiles that are vertically aligned with a processing tile are directly coupled to the processing tile and comprise the local memory block for the processing tile. The hierarchical network provides access paths for each processing tile to access the processing tile's local memory block, the local memory block coupled to a different processing tile within the same processing die, memory tiles in a different die stack, and memory tiles in a different device. The ratio of memory bandwidth (bytes) to floating-point operations (B:F) may improve 50× for accessing the local memory block compared with conventional memory. Additionally, the energy consumed to transfer each bit may be reduced by 10×.
    Type: Application
    Filed: February 9, 2024
    Publication date: June 27, 2024
    Inventors: William James Dally, Carl Thomas Gray, Stephen W. Keckler, James Michael O'Connor
  • Patent number: 12001725
    Abstract: A combined on-package and off-package memory system uses a custom base-layer within which are fabricated one or more dedicated interfaces to off-package memories. An on-package processor and on-package memories are also directly coupled to the custom base-layer. The custom base-layer includes memory management logic between the processor and memories (both off and on package) to steer requests. The memories are exposed as a combined memory space having greater bandwidth and capacity compared with either the off-package memories or the on-package memories alone. The memory management logic services requests while maintaining quality of service (QoS) to satisfy bandwidth requirements for each allocation. An allocation may include any combination of the on and/or off package memories. The memory management logic also manages data migration between the on and off package memories.
    Type: Grant
    Filed: August 23, 2023
    Date of Patent: June 4, 2024
    Assignee: NVIDIA Corporation
    Inventors: Niladrish Chatterjee, James Michael O'Connor, Donghyuk Lee, Gaurav Uttreja, Wishwesh Anil Gandhi
  • Publication number: 20240163227
    Abstract: Aspects of the subject disclosure may include, for example, a database being maintained that has information indicating states of network resources, which can be determined based on physical activities, logical activities and hybrid activities performed on or by the network resources; obtaining activity information for a particular network resource, where the activity information is a physical activity, a logical activity and/or a hybrid activity; and determining whether a state change for the particular network resource should be made such as to whether the activity information corresponds to and warrants change to at least one of an inventory state, an operational state, or a detailed state. Other embodiments are disclosed.
    Type: Application
    Filed: March 6, 2023
    Publication date: May 16, 2024
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Ernest Bayha, Aaron Harris, Brian Horen, Nathan Skinner, Enhsing Lin, Theresa Michael, David Whitney, Jeff Johnson, Laurie Mitsanas, Michael O'Connor
  • Patent number: 11977766
    Abstract: A hierarchical network enables access for a stacked memory system including one or more memory dies that each include multiple memory tiles. The processor die includes multiple processing tiles that are stacked with the one or more memory dies. The memory tiles that are vertically aligned with a processing tile are directly coupled to the processing tile and comprise the local memory block for the processing tile. The hierarchical network provides access paths for each processing tile to access the processing tile's local memory block, the local memory block coupled to a different processing tile within the same processing die, memory tiles in a different die stack, and memory tiles in a different device. The ratio of memory bandwidth (bytes) to floating-point operations (B:F) may improve 50× for accessing the local memory block compared with conventional memory. Additionally, the energy consumed to transfer each bit may be reduced by 10×.
    Type: Grant
    Filed: February 28, 2022
    Date of Patent: May 7, 2024
    Assignee: NVIDIA Corporation
    Inventors: William James Dally, Carl Thomas Gray, Stephen W. Keckler, James Michael O'Connor
  • Patent number: D1031820
    Type: Grant
    Filed: November 28, 2023
    Date of Patent: June 18, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Thomas Burns, Jonathan Howard Biddle, Michael O'Connor