Patents by Inventor Eric T. Anderson
Eric T. Anderson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11524498
Abstract: In some examples, a fluid dispensing device includes a plurality of fluidic actuators and a decoder to detect that a first fluidic actuator is to be activated, and detect that a sense measurement is to be performed. In response to detecting that the first fluidic actuator is to be activated and the sense measurement is to be performed, the decoder is to suppress activation of the first fluidic actuator at a first time, and activate the first fluidic actuator at a second time corresponding to a sense measurement interval to perform the sense measurement of the first fluidic actuator.
Type: Grant
Filed: April 6, 2018
Date of Patent: December 13, 2022
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Daryl E. Anderson, Eric T. Martin
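The abstract describes a timing decision: when a fire command and a sense request coincide, the decoder defers the actuation into the sense-measurement window. Below is a minimal behavioral sketch of that ordering rule in Python; the names (FireDecoder, schedule) and the software framing are assumptions, since the patent describes hardware logic rather than code.

```python
# Minimal behavioral sketch of the deferred-activation scheme described in the
# abstract. All names here are hypothetical; the patent covers hardware, not
# this software model.

class FireDecoder:
    def __init__(self):
        self.deferred = set()  # actuators whose normal firing was suppressed

    def schedule(self, actuator_id, fire_requested, sense_requested, now, sense_window):
        """Return the time at which the actuator should actually fire."""
        if fire_requested and sense_requested:
            # Suppress the normal firing at the first time...
            self.deferred.add(actuator_id)
            # ...and fire inside the sense-measurement interval instead, so the
            # measurement can observe the actuation.
            return sense_window
        if fire_requested:
            return now
        return None

decoder = FireDecoder()
print(decoder.schedule(actuator_id=3, fire_requested=True,
                       sense_requested=True, now=0, sense_window=5))  # -> 5
```

The example only captures the scheduling rule from the abstract: suppress at the first time, fire inside the sense window.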
-
Patent number: 11510446
Abstract: A flame resistant shirt includes an outer layer that has a main body portion and a pair of sleeves. The main body portion terminates in a bottom edge. The flame resistant shirt also includes an inner curtain that is coupled to an inner surface of the main body portion and has a top edge and an opposing bottom edge. The top edge of the inner curtain is located at a waist region of the main body portion, the inner curtain extends downwardly therefrom, and the bottom edge of the inner curtain is disposed above the bottom edge of the main body portion. The inner curtain is intended to be tucked into a bottom garment, while the outer layer is intended to be worn untucked.
Type: Grant
Filed: July 2, 2020
Date of Patent: November 29, 2022
Assignee: Saudi Arabian Oil Company
Inventors: Aniela Zarzar Torano, Eric T. Anderson, Fatimah M. Barnawi, Mohammed S. AbaHareth
-
Publication number: 20220256257
Abstract: In some examples, a system includes an article of personal protective equipment (PPE) having at least one sensor configured to generate a stream of usage data; and an analytical stream processing component comprising: a communication component that receives the stream of usage data; a memory configured to store at least a portion of the stream of usage data and at least one model for detecting a safety event signature, wherein the at least one model is trained based at least in part on a set of usage data generated by one or more other articles of PPE of a same type as the article of PPE; and one or more computer processors configured to: detect the safety event signature in the stream of usage data based on processing the stream of usage data with the model, and generate an output in response to detecting the safety event signature.
Type: Application
Filed: April 26, 2022
Publication date: August 11, 2022
Inventors: Steven T. Awiszus, Eric C. Lobner, Michael G. Wurm, Kiran S. Kanukurthy, Jia Hu, Matthew J. Blackford, Keith G. Mattson, Ronald D. Jesme, Nathan J. Anderson
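The architecture above pairs a sensor stream with a pre-trained model that watches for a safety event signature and emits an output on detection. The sketch below shows one way such a streaming detector could be wired up; the sliding window, the threshold "model", and every name are illustrative assumptions, not the implementation the publication claims.

```python
# Illustrative sketch of the stream-processing flow in the abstract: usage data
# from a PPE sensor is fed through a model that flags a safety-event signature.
# The threshold model and all names here are hypothetical stand-ins.

from collections import deque

class SafetyEventDetector:
    def __init__(self, model, window_size=50):
        self.model = model            # trained on usage data from similar PPE articles
        self.window = deque(maxlen=window_size)

    def on_sample(self, sample):
        """Feed one usage-data sample; return an alert when a signature is detected."""
        self.window.append(sample)
        if len(self.window) == self.window.maxlen and self.model(list(self.window)):
            return {"event": "safety_event_signature", "window": list(self.window)}
        return None

# Toy "model": flag the signature when the windowed average exceeds a limit.
detector = SafetyEventDetector(model=lambda w: sum(w) / len(w) > 8.0, window_size=5)
for reading in [1.0, 2.0, 9.5, 9.8, 9.9, 10.0]:
    alert = detector.on_sample(reading)
    if alert:
        print("alert:", alert["event"])
```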
-
Publication number: 20220000201
Abstract: A flame resistant shirt includes an outer layer that has a main body portion and a pair of sleeves. The main body portion terminates in a bottom edge. The flame resistant shirt also includes an inner curtain that is coupled to an inner surface of the main body portion and has a top edge and an opposing bottom edge. The top edge of the inner curtain is located at a waist region of the main body portion, the inner curtain extends downwardly therefrom, and the bottom edge of the inner curtain is disposed above the bottom edge of the main body portion. The inner curtain is intended to be tucked into a bottom garment, while the outer layer is intended to be worn untucked.
Type: Application
Filed: July 2, 2020
Publication date: January 6, 2022
Inventors: Aniela Zarzar Torano, Eric T. Anderson, Fatimah M. Barnawi, Mohammed S. AbaHareth
-
Publication number: 20210042212
Abstract: A protocol designer for a test and measurement instrument comprises an input to receive a signal, a memory configured to store the signal, an author configured to generate protocol definitions based on a user input, a debugger configured to output textual and visual decode results based on the protocol definitions and the signal, and a deployer configured to output a compiled protocol definition file to the test and measurement instrument.
Type: Application
Filed: March 13, 2019
Publication date: February 11, 2021
Applicant: Tektronix, Inc.
Inventors: Mark Anderson Smith, Michael Scott Silliman, Andrew Loofburrow, Eric T. Anderson
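The protocol designer is described as three cooperating parts: an author that builds protocol definitions from user input, a debugger that decodes a stored signal against them, and a deployer that compiles the definition for the instrument. The sketch below mirrors that split in Python; the class names and the dictionary-based definition format are assumptions made for illustration, not the Tektronix design.

```python
# Architectural sketch of the three roles named in the abstract (author,
# debugger, deployer). The classes and the definition format are invented.

class ProtocolAuthor:
    def generate_definition(self, user_input):
        # Turn user input into a protocol definition (field names and bit widths).
        return {"name": user_input["name"], "fields": user_input["fields"]}

class ProtocolDebugger:
    def decode(self, definition, signal_bits):
        # Walk the stored signal against the definition and emit textual results.
        out, pos = {}, 0
        for field, width in definition["fields"]:
            out[field] = signal_bits[pos:pos + width]
            pos += width
        return out

class ProtocolDeployer:
    def compile(self, definition):
        # Produce a compiled definition artifact to push to the instrument.
        return repr(definition).encode()

author = ProtocolAuthor()
definition = author.generate_definition(
    {"name": "demo", "fields": [("header", 4), ("payload", 8)]})
print(ProtocolDebugger().decode(definition, "110010101010"))
```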
-
Patent number: 10628284
Abstract: Disclosed herein are systems and methods for converting physical input signals into bitstreams using syntax trees regardless of the physical input signal's protocol. Using declarative language definitions within a protocol declaration, a test and measurement system can compile a syntax tree that automatically translates the input data into a proper bitstream output. The declarative language definitions within the protocol declaration allow custom or standard protocol rules to be written for multiple or arbitrary input protocols without writing unsafe functions, having to access memory, or debugging more complex language codes.
Type: Grant
Filed: April 9, 2018
Date of Patent: April 21, 2020
Assignee: Tektronix, Inc.
Inventors: Mark Anderson Smith, Michael Scott Silliman, Andrew Loofburrow, Eric T. Anderson
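The key idea in the abstract is that a declarative protocol definition is compiled into a syntax tree which a generic evaluator then applies to raw bits, so new protocols need no hand-written decoder code. A toy version of that compile-then-evaluate flow, with an invented declaration format and a flat tree of field nodes, might look like this:

```python
# Toy illustration of the idea in the abstract: a declarative protocol
# definition is compiled into a tree of decoder nodes, and one generic
# evaluator turns raw bits into decoded fields for any protocol.
# The declaration syntax is invented for the example.

DECLARATION = [          # declarative definition: field name, bit width
    ("sync", 2),
    ("address", 4),
    ("data", 8),
]

def compile_tree(declaration):
    """Build a flat 'syntax tree' of leaf decoder nodes."""
    return [(name, width) for name, width in declaration]

def decode(tree, bits):
    """Evaluate the compiled tree against physical-layer bits."""
    pos, fields = 0, {}
    for name, width in tree:
        fields[name] = int(bits[pos:pos + width], 2)
        pos += width
    return fields

tree = compile_tree(DECLARATION)
print(decode(tree, "10" + "0011" + "10100101"))  # {'sync': 2, 'address': 3, 'data': 165}
```

Real protocol declarations would also carry conditionals, repetition, and checksums; the sketch only shows the compile-then-evaluate structure.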
-
Publication number: 20180307584
Abstract: Disclosed herein are systems and methods for converting physical input signals into bitstreams using syntax trees regardless of the physical input signal's protocol. Using declarative language definitions within a protocol declaration, a test and measurement system can compile a syntax tree that automatically translates the input data into a proper bitstream output. The declarative language definitions within the protocol declaration allow custom or standard protocol rules to be written for multiple or arbitrary input protocols without writing unsafe functions, having to access memory, or debugging more complex language codes.
Type: Application
Filed: April 9, 2018
Publication date: October 25, 2018
Applicant: Tektronix, Inc.
Inventors: Mark Anderson Smith, Michael Scott Silliman, Andrew Loofburrow, Eric T. Anderson
-
Patent number: 10043230
Abstract: Computer and graphics processing elements, connected generally in series, form a pipeline. Circuit elements known as di/dt throttles are inserted within the pipeline at strategic locations where the potential exists for data flow to transition from an idle state to a maximum data processing rate. The di/dt throttles gently ramp the rate of data flow from idle to a typical level. Disproportionate current draw and the consequent voltage droop are thus avoided, allowing an increased frequency of operation to be realized.
Type: Grant
Filed: September 20, 2013
Date of Patent: August 7, 2018
Assignee: NVIDIA CORPORATION
Inventors: Philip Payman Shirvani, Peter Sommers, Eric T. Anderson
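A di/dt throttle limits how fast current draw can rise by ramping the pipeline's accepted work rate after an idle period instead of jumping straight to the maximum. The generator below models that ramp numerically; the ramp length and step size are illustrative parameters, not values from the patent.

```python
# Behavioral sketch of a di/dt throttle as described in the abstract: when the
# pipeline wakes from idle, the throttle steps the accepted work rate up over
# several cycles rather than jumping to the maximum. Parameters are illustrative.

def throttle_ramp(idle_cycles_before, max_rate=1.0, ramp_cycles=8):
    """Yield the fraction of requests accepted on each cycle after wakeup."""
    rate = 0.0 if idle_cycles_before > 0 else max_rate
    while True:
        yield rate
        # Ramp toward the full rate in small steps to bound di/dt (current slew).
        rate = min(max_rate, rate + max_rate / ramp_cycles)

ramp = throttle_ramp(idle_cycles_before=100)
print([round(next(ramp), 2) for _ in range(10)])
# [0.0, 0.12, 0.25, 0.38, 0.5, 0.62, 0.75, 0.88, 1.0, 1.0]
```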
-
Patent number: 10032246
Abstract: A texture processing pipeline is configured to store decoded texture data within a cache unit in order to expedite the processing of texture requests. When a texture request is processed, the texture processing pipeline queries the cache unit to determine whether the requested data is resident in the cache. If the data is not resident in the cache unit, a cache miss occurs. The texture processing pipeline then reads encoded texture data from global memory, decodes that data, and writes different portions of the decoded memory into the cache unit at specific locations according to a caching map. If the data is, in fact, resident in the cache unit, a cache hit occurs, and the texture processing pipeline then reads decoded portions of the requested texture data from the cache unit and combines those portions according to the caching map.
Type: Grant
Filed: October 9, 2013
Date of Patent: July 24, 2018
Assignee: NVIDIA CORPORATION
Inventors: Eric T. Anderson, Poornachandra Rao
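The caching map is the interesting piece here: on a miss, decoded texture data is scattered into cache slots according to the map, and on a hit it is gathered back through the same map. A simplified software model of that behavior, with invented names and a stand-in "decode" step, is sketched below; the hardware pipeline is far more involved.

```python
# Simplified model of the caching behavior in the abstract: miss -> fetch
# encoded data, decode, scatter portions per the caching map; hit -> gather
# the cached portions and recombine them per the same map. Names are invented.

class TextureCache:
    def __init__(self, caching_map):
        self.caching_map = caching_map   # portion index -> cache slot
        self.slots = {}

    def fetch(self, texel_id, global_memory):
        key = lambda i: (texel_id, self.caching_map[i])
        if all(key(i) in self.slots for i in self.caching_map):
            # Cache hit: read decoded portions and combine them per the map.
            return b"".join(self.slots[key(i)] for i in sorted(self.caching_map))
        # Cache miss: read encoded data, decode it, scatter portions per the map.
        decoded = bytes(b ^ 0xFF for b in global_memory[texel_id])   # stand-in decode
        portion = len(decoded) // len(self.caching_map)
        for i in sorted(self.caching_map):
            self.slots[key(i)] = decoded[i * portion:(i + 1) * portion]
        return decoded

cache = TextureCache(caching_map={0: "slotA", 1: "slotB"})
memory = {7: bytes([0x0F, 0xF0, 0x55, 0xAA])}
print(cache.fetch(7, memory))   # miss: decodes and fills the cache
print(cache.fetch(7, memory))   # hit: recombined from cached portions
```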
-
Patent number: 9720858
Abstract: A texture processing pipeline can be configured to service memory access requests that represent texture data access operations or generic data access operations. When the texture processing pipeline receives a memory access request that represents a texture data access operation, the texture processing pipeline may retrieve texture data based on texture coordinates. When the memory access request represents a generic data access operation, the texture pipeline extracts a virtual address from the memory access request and then retrieves data based on the virtual address. The texture processing pipeline is also configured to cache generic data retrieved on behalf of a group of threads and to then invalidate that generic data when the group of threads exits.
Type: Grant
Filed: December 19, 2012
Date of Patent: August 1, 2017
Assignee: NVIDIA CORPORATION
Inventors: Brian Fahs, Eric T. Anderson, Nick Barrow-Williams, Shirish Gadre, Joel James McCormack, Bryon S. Nordquist, Nirmal Raj Saxena, Lacky V. Shah
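The pipeline described here serves two kinds of requests through one datapath: texture reads resolved from coordinates and generic loads resolved from a virtual address, with generic data cached per thread group and dropped when the group exits. The Python analogy below captures only that control flow; it is not the hardware design, and every name in it is invented.

```python
# Sketch of the dual-use pipeline in the abstract: texture requests resolved
# from coordinates, generic requests from a virtual address, and generic data
# cached per thread group and invalidated when the group exits.

class TexturePipeline:
    def __init__(self, texture_lookup, memory):
        self.texture_lookup = texture_lookup     # (u, v) -> texel
        self.memory = memory                     # virtual address -> data
        self.generic_cache = {}                  # group_id -> {addr: data}

    def access(self, request, group_id):
        if request["kind"] == "texture":
            return self.texture_lookup[request["coords"]]
        # Generic access: extract the virtual address and cache per thread group.
        addr = request["vaddr"]
        cache = self.generic_cache.setdefault(group_id, {})
        if addr not in cache:
            cache[addr] = self.memory[addr]
        return cache[addr]

    def on_group_exit(self, group_id):
        # Invalidate generic data cached on behalf of the exiting thread group.
        self.generic_cache.pop(group_id, None)

pipe = TexturePipeline({(0, 0): "texel00"}, {0x1000: 42})
print(pipe.access({"kind": "texture", "coords": (0, 0)}, group_id=1))
print(pipe.access({"kind": "generic", "vaddr": 0x1000}, group_id=1))
pipe.on_group_exit(1)
```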
-
Patent number: 9697006
Abstract: A texture processing pipeline can be configured to service memory access requests that represent texture data access operations or generic data access operations. When the texture processing pipeline receives a memory access request that represents a texture data access operation, the texture processing pipeline may retrieve texture data based on texture coordinates. When the memory access request represents a generic data access operation, the texture pipeline extracts a virtual address from the memory access request and then retrieves data based on the virtual address. The texture processing pipeline is also configured to cache generic data retrieved on behalf of a group of threads and to then invalidate that generic data when the group of threads exits.
Type: Grant
Filed: December 19, 2012
Date of Patent: July 4, 2017
Assignee: NVIDIA Corporation
Inventors: Brian Fahs, Eric T. Anderson, Nick Barrow-Williams, Shirish Gadre, Joel James McCormack, Bryon S. Nordquist, Nirmal Raj Saxena, Lacky V. Shah
-
Patent number: 9595075
Abstract: Approaches are disclosed for performing memory access operations in a texture processing pipeline having a first portion configured to process texture memory access operations and a second portion configured to process non-texture memory access operations. A texture unit receives a memory access request. The texture unit determines whether the memory access request includes a texture memory access operation. If the memory access request includes a texture memory access operation, then the texture unit processes the memory access request via at least the first portion of the texture processing pipeline; otherwise, the texture unit processes the memory access request via at least the second portion of the texture processing pipeline. One advantage of the disclosed approach is that the same processing and cache memory may be used for both texture operations and load/store operations to various other address spaces, leading to reduced surface area and power consumption.
Type: Grant
Filed: September 26, 2013
Date of Patent: March 14, 2017
Assignee: NVIDIA Corporation
Inventors: Steven J. Heinrich, Eric T. Anderson, Jeffrey A. Bolz, Jonathan Dunaisky, Ramesh Jandhyala, Joel McCormack, Alexander L. Minkin, Bryon S. Nordquist, Poornachandra Rao
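This abstract emphasizes the routing decision: a single texture unit inspects each memory access request and steers it through either the texture portion or the non-texture portion of the same pipeline, so both paths share one cache. A minimal sketch of that dispatch, under invented names, follows.

```python
# Minimal routing sketch for the abstract above: one texture unit dispatches
# each request to the matching pipeline portion, and both portions use the
# same shared cache. Names are illustrative, not from the patent.

def route(request, texture_portion, generic_portion, shared_cache):
    """Dispatch a request to the matching pipeline portion; both use one cache."""
    if request.get("is_texture_op"):
        return texture_portion(request, shared_cache)
    return generic_portion(request, shared_cache)

shared_cache = {}
texture_portion = lambda req, cache: cache.setdefault(("tex", req["coords"]), "texel")
generic_portion = lambda req, cache: cache.setdefault(("mem", req["vaddr"]), "data")

print(route({"is_texture_op": True, "coords": (1, 2)},
            texture_portion, generic_portion, shared_cache))
print(route({"is_texture_op": False, "vaddr": 0x40},
            texture_portion, generic_portion, shared_cache))
```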
-
Patent number: 9348762
Abstract: A tag unit configured to manage a cache unit includes a coalescer that implements a set hashing function. The set hashing function maps a virtual address to a particular content-addressable memory unit (CAM). The coalescer implements the set hashing function by splitting the virtual address into upper, middle, and lower portions. The upper portion is further divided into even-indexed bits and odd-indexed bits. The even-indexed bits are reduced to a single bit using an XOR tree, and the odd-indexed bits are reduced in like fashion. Those single bits are combined with the middle portion of the virtual address to provide a CAM number that identifies a particular CAM. The identified CAM is queried to determine the presence of a tag portion of the virtual address, indicating a cache hit or cache miss.
Type: Grant
Filed: December 19, 2012
Date of Patent: May 24, 2016
Assignee: NVIDIA Corporation
Inventors: Brian Fahs, Eric T. Anderson, Nick Barrow-Williams, Shirish Gadre, Joel James McCormack, Bryon S. Nordquist, Nirmal Raj Saxena, Lacky V. Shah
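Of all the entries here, this one reads most directly as an algorithm: split the virtual address, XOR-reduce the even-indexed and odd-indexed upper bits separately, and combine the two result bits with the middle portion to select a CAM. The function below follows that description literally; the field widths (and therefore the number of CAMs) are assumptions, since the abstract does not fix them.

```python
# Direct reading of the set-hash description in the abstract, written as a toy
# Python function. Bit-field widths (lower / middle / upper) are assumptions.

def xor_reduce(bits):
    """Reduce a list of bits to a single bit, as an XOR tree would."""
    out = 0
    for b in bits:
        out ^= b
    return out

def cam_number(vaddr, lower_bits=7, middle_bits=2, upper_bits=16):
    # Split the virtual address into lower, middle, and upper portions.
    middle = (vaddr >> lower_bits) & ((1 << middle_bits) - 1)
    upper = (vaddr >> (lower_bits + middle_bits)) & ((1 << upper_bits) - 1)
    upper_bit_list = [(upper >> i) & 1 for i in range(upper_bits)]
    # Reduce the even-indexed and odd-indexed upper bits separately.
    even = xor_reduce(upper_bit_list[0::2])
    odd = xor_reduce(upper_bit_list[1::2])
    # Combine the two reduced bits with the middle portion to pick a CAM.
    return (even << (middle_bits + 1)) | (odd << middle_bits) | middle

print(cam_number(0x12345678))  # identifies which CAM to query for the tag
```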
-
Patent number: 9292295
Abstract: A system, method, and computer program product for generating flow-control signals for a processing pipeline are disclosed. The method includes the steps of generating, by a first pipeline stage, a delayed ready signal based on a downstream ready signal received from a second pipeline stage and a throttle disable signal. A downstream valid signal is generated by the first pipeline stage based on an upstream valid signal and the delayed ready signal. An upstream ready signal is generated by the first pipeline stage based on the delayed ready signal and the downstream valid signal.
Type: Grant
Filed: June 10, 2013
Date of Patent: March 22, 2016
Assignee: NVIDIA Corporation
Inventors: Philip Payman Shirvani, Peter Benjamin Sommers, Eric T. Anderson
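The three signals named in the abstract form a ready/valid handshake in which the stage works from a registered ("delayed") copy of the downstream ready. The toy model below shows one plausible gating; the exact boolean combination and the role of the throttle disable signal are simplifying assumptions, not the patent's logic.

```python
# Toy cycle-level model of the handshake in the abstract: the stage gates its
# valid and ready outputs with a registered ("delayed") copy of the downstream
# ready. The gating chosen here is an assumption for illustration only.

def pipeline_stage(upstream_valid, downstream_ready, throttle_disable, prev_downstream_ready):
    """Return (delayed_ready, downstream_valid, upstream_ready) for one cycle."""
    # Delayed ready: last cycle's downstream ready, bypassed when throttling is off.
    delayed_ready = downstream_ready if throttle_disable else prev_downstream_ready
    # Downstream valid: present data only when upstream offers it and we may send.
    downstream_valid = upstream_valid and delayed_ready
    # Upstream ready: accept new data whenever this stage can drain what it holds.
    upstream_ready = delayed_ready
    return delayed_ready, downstream_valid, upstream_ready

prev = False
for cycle, ready_in in enumerate([True, True, False, True]):
    _, valid_out, ready_out = pipeline_stage(
        upstream_valid=True, downstream_ready=ready_in,
        throttle_disable=False, prev_downstream_ready=prev)
    prev = ready_in
    print(f"cycle {cycle}: downstream_valid={valid_out} upstream_ready={ready_out}")
```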
-
Publication number: 20150097851
Abstract: A texture processing pipeline is configured to store decoded texture data within a cache unit in order to expedite the processing of texture requests. When a texture request is processed, the texture processing pipeline queries the cache unit to determine whether the requested data is resident in the cache. If the data is not resident in the cache unit, a cache miss occurs. The texture processing pipeline then reads encoded texture data from global memory, decodes that data, and writes different portions of the decoded memory into the cache unit at specific locations according to a caching map. If the data is, in fact, resident in the cache unit, a cache hit occurs, and the texture processing pipeline then reads decoded portions of the requested texture data from the cache unit and combines those portions according to the caching map.
Type: Application
Filed: October 9, 2013
Publication date: April 9, 2015
Applicant: NVIDIA CORPORATION
Inventors: Eric T. Anderson, Poornachandra Rao
-
Publication number: 20150089284
Abstract: Computer and graphics processing elements, connected generally in series, form a pipeline. Circuit elements known as di/dt throttles are inserted within the pipeline at strategic locations where the potential exists for data flow to transition from an idle state to a maximum data processing rate. The di/dt throttles gently ramp the rate of data flow from idle to a typical level. Disproportionate current draw and the consequent voltage droop are thus avoided, allowing an increased frequency of operation to be realized.
Type: Application
Filed: September 20, 2013
Publication date: March 26, 2015
Applicant: NVIDIA CORPORATION
Inventors: Philip Payman Shirvani, Peter Sommers, Eric T. Anderson
-
Publication number: 20150084975
Abstract: Approaches are disclosed for performing memory access operations in a texture processing pipeline having a first portion configured to process texture memory access operations and a second portion configured to process non-texture memory access operations. A texture unit receives a memory access request. The texture unit determines whether the memory access request includes a texture memory access operation. If the memory access request includes a texture memory access operation, then the texture unit processes the memory access request via at least the first portion of the texture processing pipeline; otherwise, the texture unit processes the memory access request via at least the second portion of the texture processing pipeline. One advantage of the disclosed approach is that the same processing and cache memory may be used for both texture operations and load/store operations to various other address spaces, leading to reduced surface area and power consumption.
Type: Application
Filed: September 26, 2013
Publication date: March 26, 2015
Applicant: NVIDIA CORPORATION
Inventors: Steven J. Heinrich, Eric T. Anderson, Jeffrey A. Bolz, Jonathan Dunaisky, Ramesh Jandhyala, Joel McCormack, Alexander L. Minkin, Bryon S. Nordquist, Poornachandra Rao
-
Publication number: 20140365750
Abstract: A system, method, and computer program product for generating flow-control signals for a processing pipeline are disclosed. The method includes the steps of generating, by a first pipeline stage, a delayed ready signal based on a downstream ready signal received from a second pipeline stage and a throttle disable signal. A downstream valid signal is generated by the first pipeline stage based on an upstream valid signal and the delayed ready signal. An upstream ready signal is generated by the first pipeline stage based on the delayed ready signal and the downstream valid signal.
Type: Application
Filed: June 10, 2013
Publication date: December 11, 2014
Inventors: Philip Payman Shirvani, Peter Benjamin Sommers, Eric T. Anderson
-
Publication number: 20140173258
Abstract: A texture processing pipeline can be configured to service memory access requests that represent texture data access operations or generic data access operations. When the texture processing pipeline receives a memory access request that represents a texture data access operation, the texture processing pipeline may retrieve texture data based on texture coordinates. When the memory access request represents a generic data access operation, the texture pipeline extracts a virtual address from the memory access request and then retrieves data based on the virtual address. The texture processing pipeline is also configured to cache generic data retrieved on behalf of a group of threads and to then invalidate that generic data when the group of threads exits.
Type: Application
Filed: December 19, 2012
Publication date: June 19, 2014
Applicant: NVIDIA CORPORATION
Inventors: Brian Fahs, Eric T. Anderson, Nick Barrow-Williams, Shirish Gadre, Joel James McCormack, Bryon S. Nordquist, Nirmal Raj Saxena, Lacky V. Shah
-
Patent number: D977094
Type: Grant
Filed: April 23, 2021
Date of Patent: January 31, 2023
Assignee: Pacira CryoTech, Inc.
Inventors: Erika Danielle Anderson-Bolden, Eric T. Johansson, Jeffrey N. Gamelsky