Patents by Inventor John Grant Bennett
John Grant Bennett has filed for patents to protect the following inventions. This listing includes both patent applications that are pending and patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Patent number: 11775442
Abstract: Systems and methods related to a memory system with a predictable read latency from media with a long write latency are described. An example memory system includes an array of tiles configured to store data corresponding to a cache line associated with a host. The memory system further includes control logic configured to, in response to a write command from the host, initiate writing of a first cache line to a first tile in a first row of the tiles, a second cache line to a second tile in a second row of the tiles, a third cache line to a third tile in a third row of the tiles, and a fourth cache line to a fourth tile in a fourth row of the tiles. The control logic is further configured to, in response to a read command from the host, initiate reading of the data stored in an entire row of tiles.
Type: Grant
Filed: January 25, 2022
Date of Patent: October 3, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Monish Shantilal Shah, John Grant Bennett
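The striping scheme described in the abstract can be illustrated with a toy model. Everything here (the class, the round-robin column pointer, the row count) is an assumption for illustration, not the patented controller design: a burst of four cache lines is written one tile per row, so a later read can return an entire row of tiles at once while any slow in-flight write occupies only one tile of that row.

```python
# Sketch (assumed data layout): writes stripe cache lines across rows,
# one tile per row; reads return every tile in one row.

class TileArray:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.tiles = [[None] * cols for _ in range(rows)]
        self.next_col = 0  # hypothetical round-robin column pointer

    def write_burst(self, cache_lines):
        """Write one cache line per row, all in the same column."""
        assert len(cache_lines) == self.rows
        col = self.next_col
        for row, line in enumerate(cache_lines):
            self.tiles[row][col] = line
        self.next_col = (col + 1) % self.cols
        return col  # caller records which column holds this burst

    def read_row(self, row):
        """A read returns the data stored in an entire row of tiles."""
        return list(self.tiles[row])

array = TileArray(rows=4, cols=8)
col = array.write_burst(["line0", "line1", "line2", "line3"])
assert array.read_row(0)[col] == "line0"
```

Because each burst touches a different row with each of its cache lines, a read of any single row waits on at most one pending long write, which is what makes the read latency predictable in this sketch.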
Patent number: 11664999
Abstract: Systems and methods related to ensuring the integrity of data stored in a memory by using a watermark are described. An example method in a system including a processor and a memory may include receiving data for storage at an address in the memory. The method may further include, after encoding the data with an error correction code to generate intermediate data having a first number of bits, reversibly altering the intermediate data with a watermark to generate watermarked data for storage in the memory, where the watermark is generated by applying a cryptographic function to a user key and the address, and where the watermarked data has a second number of bits equal to the first number of bits.
Type: Grant
Filed: October 16, 2020
Date of Patent: May 30, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: John Grant Bennett, Greg Zaverucha
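A minimal sketch of the watermarking idea, assuming HMAC-SHA256 as the cryptographic function (the abstract does not name one): XOR-ing the ECC-encoded data with a keyed, address-dependent mask keeps the bit count unchanged, and applying the same mask again restores the original, so the alteration is reversible exactly as the abstract requires.

```python
import hmac
import hashlib

def watermark(data: bytes, user_key: bytes, address: int) -> bytes:
    """Reversibly alter data by XOR with a keyed mask derived from the
    address. XOR preserves the bit count, and applying the same mask a
    second time restores the original. HMAC-SHA256 stands in for the
    unspecified cryptographic function in the abstract."""
    mask = hmac.new(user_key, address.to_bytes(8, "little"),
                    hashlib.sha256).digest()
    # repeat/truncate the mask to the data length
    mask = (mask * (len(data) // len(mask) + 1))[:len(data)]
    return bytes(d ^ m for d, m in zip(data, mask))

encoded = b"ECC-encoded cache line"          # data after error correction coding
stored = watermark(encoded, b"user key", 0x1000)
assert watermark(stored, b"user key", 0x1000) == encoded   # reversible
assert watermark(stored, b"user key", 0x2000) != encoded   # wrong address fails
```

Binding the mask to the address means data silently relocated or replayed from another address will fail to de-watermark cleanly, which the ECC layer can then flag.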
Publication number: 20230112352
Abstract: A memory device comprises a memory array, a counter unit, and a service unit. The memory array comprises cells arranged in rows and columns, wherein a subset of the cells in each of the rows holds a row activation count for that row. The counter unit, in response to an activation of a row caused by a read operation on at least a portion of the row, increments the row activation count for at least one of the rows prior to completion of the read operation, and writes back the row activation count, in an incremented state, to the subset of the cells in the row that held the count prior to the activation. The service unit is coupled to the counter unit and performs a service with respect to one or more other rows, offset from the row, in response to the row activation count associated with the row satisfying service criteria.
Type: Application
Filed: December 12, 2022
Publication date: April 13, 2023
Inventors: John Grant Bennett, Stefan Saroiu
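The counting scheme can be sketched as follows. The threshold, the neighbor offsets of ±1, and the class names are illustrative assumptions (real devices use far larger thresholds and may service other offsets): each row stores its own activation count, every read increments it, and when the count satisfies the service criteria the rows offset from the hot row are serviced, e.g. by a targeted refresh.

```python
# Sketch of per-row activation counting with neighbor service
# (threshold and offsets are invented for illustration).

SERVICE_THRESHOLD = 4  # hypothetical; real devices use far larger values

class CountingMemory:
    def __init__(self, num_rows):
        self.data = [b""] * num_rows
        self.counts = [0] * num_rows      # per-row count held in spare cells
        self.serviced = []                # rows refreshed by the service unit

    def read(self, row):
        # counter unit: increment and write back before the read completes
        self.counts[row] += 1
        if self.counts[row] >= SERVICE_THRESHOLD:
            self._service_neighbors(row)
            self.counts[row] = 0
        return self.data[row]

    def _service_neighbors(self, row):
        for offset in (-1, 1):            # rows offset from the hot row
            neighbor = row + offset
            if 0 <= neighbor < len(self.data):
                self.serviced.append(neighbor)  # e.g. targeted refresh

mem = CountingMemory(num_rows=8)
for _ in range(SERVICE_THRESHOLD):
    mem.read(3)
assert mem.serviced == [2, 4]
```

Keeping the count in the row itself means the counter travels with the activation that endangers the neighbors, so no separate tracking table is needed in this sketch.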
Patent number: 11527280
Abstract: A memory device comprises a memory array, a counter unit, and a service unit. The memory array comprises cells arranged in rows and columns, wherein a subset of the cells in each of the rows holds a row activation count for that row. The counter unit, in response to an activation of a row caused by a read operation on at least a portion of the row, increments the row activation count for at least one of the rows prior to completion of the read operation, and writes back the row activation count, in an incremented state, to the subset of the cells in the row that held the count prior to the activation. The service unit is coupled to the counter unit and performs a service with respect to one or more other rows, offset from the row, in response to the row activation count associated with the row satisfying service criteria.
Type: Grant
Filed: January 22, 2021
Date of Patent: December 13, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: John Grant Bennett, Stefan Saroiu
Publication number: 20220147461
Abstract: Systems and methods related to a memory system with a predictable read latency from media with a long write latency are described. An example memory system includes an array of tiles configured to store data corresponding to a cache line associated with a host. The memory system further includes control logic configured to, in response to a write command from the host, initiate writing of a first cache line to a first tile in a first row of the tiles, a second cache line to a second tile in a second row of the tiles, a third cache line to a third tile in a third row of the tiles, and a fourth cache line to a fourth tile in a fourth row of the tiles. The control logic is further configured to, in response to a read command from the host, initiate reading of the data stored in an entire row of tiles.
Type: Application
Filed: January 25, 2022
Publication date: May 12, 2022
Inventors: Monish Shantilal Shah, John Grant Bennett
Publication number: 20220123940
Abstract: Systems and methods related to ensuring the integrity of data stored in a memory by using a watermark are described. An example method in a system including a processor and a memory may include receiving data for storage at an address in the memory. The method may further include, after encoding the data with an error correction code to generate intermediate data having a first number of bits, reversibly altering the intermediate data with a watermark to generate watermarked data for storage in the memory, where the watermark is generated by applying a cryptographic function to a user key and the address, and where the watermarked data has a second number of bits equal to the first number of bits.
Type: Application
Filed: October 16, 2020
Publication date: April 21, 2022
Inventors: John Grant Bennett, Greg Zaverucha
Patent number: 11269779
Abstract: Systems and methods related to a memory system with a predictable read latency from media with a long write latency are described. An example memory system includes an array of tiles configured to store data corresponding to a cache line associated with a host. The memory system further includes control logic configured to, in response to a write command from the host, initiate writing of a first cache line to a first tile in a first row of the tiles, a second cache line to a second tile in a second row of the tiles, a third cache line to a third tile in a third row of the tiles, and a fourth cache line to a fourth tile in a fourth row of the tiles. The control logic is further configured to, in response to a read command from the host, initiate reading of the data stored in an entire row of tiles.
Type: Grant
Filed: May 27, 2020
Date of Patent: March 8, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Monish Shantilal Shah, John Grant Bennett
Publication number: 20220068348
Abstract: A memory device comprises a memory array, a counter unit, and a service unit. The memory array comprises cells arranged in rows and columns, wherein a subset of the cells in each of the rows holds a row activation count for that row. The counter unit, in response to an activation of a row caused by a read operation on at least a portion of the row, increments the row activation count for at least one of the rows prior to completion of the read operation, and writes back the row activation count, in an incremented state, to the subset of the cells in the row that held the count prior to the activation. The service unit is coupled to the counter unit and performs a service with respect to one or more other rows, offset from the row, in response to the row activation count associated with the row satisfying service criteria.
Type: Application
Filed: January 22, 2021
Publication date: March 3, 2022
Inventors: John Grant Bennett, Stefan Saroiu
Publication number: 20210374066
Abstract: Systems and methods related to a memory system with a predictable read latency from media with a long write latency are described. An example memory system includes an array of tiles configured to store data corresponding to a cache line associated with a host. The memory system further includes control logic configured to, in response to a write command from the host, initiate writing of a first cache line to a first tile in a first row of the tiles, a second cache line to a second tile in a second row of the tiles, a third cache line to a third tile in a third row of the tiles, and a fourth cache line to a fourth tile in a fourth row of the tiles. The control logic is further configured to, in response to a read command from the host, initiate reading of the data stored in an entire row of tiles.
Type: Application
Filed: May 27, 2020
Publication date: December 2, 2021
Inventors: Monish Shantilal Shah, John Grant Bennett
Patent number: 10735025
Abstract: A data compression system includes a memory to store a plurality of predetermined prefixes corresponding to a plurality of classes of data. A classifying module is configured to receive data, receive a class of the data, and select a prefix to compress the data from the plurality of predetermined prefixes based on the data and the class of the data. A compressing module is configured to compress the data using the prefix. A header generating module is configured to generate a header including an indication of the prefix used to compress the data, and to output the header and the compressed data for storage or transmission. Using a prefix from the predetermined prefixes to compress the data eliminates the overhead of fetching a prefix from outside the data compression system.
Type: Grant
Filed: March 2, 2018
Date of Patent: August 4, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: John Grant Bennett, Susan Elizabeth Carrie, Ravi Shankar Reddy Kolli
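zlib's preset-dictionary support offers a convenient way to sketch the idea of class-specific predetermined prefixes. The dictionaries, class names, and header format below are invented for illustration; the compressor is seeded with the prefix for the data's class, and the header records which prefix was used so the decompressor can select the same one.

```python
import zlib

# Hypothetical per-class dictionaries; in the abstract these are the
# "predetermined prefixes" stored inside the compression system.
PREFIXES = {
    "json": b'{"name": "value", "id": ',
    "log":  b"ERROR WARN INFO timestamp=",
}

def compress(data: bytes, data_class: str) -> tuple:
    prefix = PREFIXES[data_class]                  # classifying module
    comp = zlib.compressobj(zdict=prefix)          # seed with the prefix
    payload = comp.compress(data) + comp.flush()
    return data_class, payload                     # header names the prefix

def decompress(header: str, payload: bytes) -> bytes:
    decomp = zlib.decompressobj(zdict=PREFIXES[header])
    return decomp.decompress(payload) + decomp.flush()

header, payload = compress(b'{"name": "alice", "id": 7}', "json")
assert decompress(header, payload) == b'{"name": "alice", "id": 7}'
```

Because both ends hold the prefix table, only a short indication of the prefix travels with the compressed data, which is the overhead saving the abstract describes.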
Patent number: 10713085
Abstract: The described technology provides a system and method for sequential execution of one or more operation segments in an asynchronous event driven architecture. One or more operation segments may be associated and grouped into an activity of operation segments. The operation segments of an activity may be sequentially executed based on a queue structure of references to operation segments stored in a context memory associated with the activity. Any initiated operation segment may be placed on the queue structure upon completion of an associated I/O action.
Type: Grant
Filed: August 21, 2018
Date of Patent: July 14, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Chen Fu, John Grant Bennett
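The per-activity sequencing can be sketched as follows (the class and method names are illustrative assumptions): segments belonging to one activity are queued as their I/O actions complete and are then run strictly one at a time, even though the surrounding architecture is event driven and delivers completions in any order.

```python
from collections import deque

# Sketch: an activity holds a queue of references to operation segments
# (the "context memory") and drains it sequentially.

class Activity:
    def __init__(self):
        self.queue = deque()   # references to operation segments
        self.running = False

    def submit(self, segment):
        """Called when a segment's associated I/O action completes."""
        self.queue.append(segment)
        if not self.running:
            self._drain()

    def _drain(self):
        self.running = True    # guard: segments of one activity never overlap
        while self.queue:
            segment = self.queue.popleft()
            segment()
        self.running = False

log = []
activity = Activity()
activity.submit(lambda: log.append("open"))
activity.submit(lambda: log.append("read"))
activity.submit(lambda: log.append("close"))
assert log == ["open", "read", "close"]
```

The `running` flag is what makes execution within an activity sequential: a segment that submits further segments while running only extends the queue rather than starting a nested drain.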
Patent number: 10649925
Abstract: A portion of the memory space supported by memory chips controlled by memory controller logic can be set aside, and read requests directed to memory addresses within that portion can be redirected by the memory controller logic to other memory addresses storing data equivalent to the internal data that the memory controller logic seeks to return, thereby enabling the memory controller logic to indirectly return data to processes executing on the host computing device. Additionally, requests to write data to specific memory addresses, including addresses within the set-aside portion, can be interpreted by the memory controller logic as commands that it is to perform and which impact its own internal data, including commands to reset values, to start or end data collection, or other like commands.
Type: Grant
Filed: May 16, 2018
Date of Patent: May 12, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventor: John Grant Bennett
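The set-aside window can be sketched in a few lines. The window addresses, the commands, and the internal counters below are invented for illustration: reads inside the window are redirected to the controller's internal data, and writes to the window are interpreted as commands rather than stores.

```python
# Sketch (addresses and commands are hypothetical): a reserved address
# window lets software query and command the controller through
# ordinary loads and stores.

WINDOW_BASE, WINDOW_SIZE = 0xF000, 0x100   # assumed set-aside range

class ControllerLogic:
    def __init__(self):
        self.backing = {}                    # ordinary memory cells
        self.internal = {"reads": 0, "collecting": False}

    def read(self, addr):
        if WINDOW_BASE <= addr < WINDOW_BASE + WINDOW_SIZE:
            # redirect: return internal data instead of cell contents
            key = ["reads", "collecting"][addr - WINDOW_BASE]
            return self.internal[key]
        self.internal["reads"] += 1
        return self.backing.get(addr, 0)

    def write(self, addr, value):
        if WINDOW_BASE <= addr < WINDOW_BASE + WINDOW_SIZE:
            # interpret the write as a command to the controller itself
            if addr == WINDOW_BASE:
                self.internal["reads"] = 0                 # reset values
            elif addr == WINDOW_BASE + 1:
                self.internal["collecting"] = bool(value)  # start/end collection
            return
        self.backing[addr] = value

ctrl = ControllerLogic()
ctrl.write(0x10, 42)
assert ctrl.read(0x10) == 42
assert ctrl.read(WINDOW_BASE) == 1     # one ordinary read counted so far
ctrl.write(WINDOW_BASE, 0)             # command: reset the counter
assert ctrl.read(WINDOW_BASE) == 0
```

The appeal of this design is that no new bus transaction types are needed; existing load/store paths carry both the queries and the commands.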
Publication number: 20190354492
Abstract: A portion of the memory space supported by memory chips controlled by memory controller logic can be set aside, and read requests directed to memory addresses within that portion can be redirected by the memory controller logic to other memory addresses storing data equivalent to the internal data that the memory controller logic seeks to return, thereby enabling the memory controller logic to indirectly return data to processes executing on the host computing device. Additionally, requests to write data to specific memory addresses, including addresses within the set-aside portion, can be interpreted by the memory controller logic as commands that it is to perform and which impact its own internal data, including commands to reset values, to start or end data collection, or other like commands.
Type: Application
Filed: May 16, 2018
Publication date: November 21, 2019
Inventor: John Grant Bennett
Publication number: 20190273508
Abstract: A data compression system includes a memory to store a plurality of predetermined prefixes corresponding to a plurality of classes of data. A classifying module is configured to receive data, receive a class of the data, and select a prefix to compress the data from the plurality of predetermined prefixes based on the data and the class of the data. A compressing module is configured to compress the data using the prefix. A header generating module is configured to generate a header including an indication of the prefix used to compress the data, and to output the header and the compressed data for storage or transmission. Using a prefix from the predetermined prefixes to compress the data eliminates the overhead of fetching a prefix from outside the data compression system.
Type: Application
Filed: March 2, 2018
Publication date: September 5, 2019
Inventors: John Grant Bennett, Susan Elizabeth Carrie, Ravi Shankar Reddy Kolli
Patent number: 10198397
Abstract: Two computing devices utilizing remote direct memory access establish a send ring buffer on a sending computer and a receive ring buffer on a receiving computer that mirror one another. A message is copied into the ring buffer on the sending computer, and a write edge pointer is updated to identify its end. The message is then copied, by the sending computer, from its ring buffer into the ring buffer on the receiving computer. A process executing on the receiving computer periodically checks at its write edge pointer and, upon detecting the new message's header, updates the location identified by that pointer. Once the new message is copied out of the ring buffer at the receiving computer, a trailing edge pointer is updated; a process executing at the sending computer monitors the trailing edge pointer of the receiving computer and updates its own trailing edge pointer accordingly.
Type: Grant
Filed: November 18, 2016
Date of Patent: February 5, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Chen Fu, John Grant Bennett
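The pointer discipline can be sketched with a single in-process ring (the buffer size and length-prefix header format are assumptions; a real deployment replaces the direct copy with an RDMA write, and keeps separate write-edge and trailing-edge pointers so header detection and message consumption can proceed independently, which this simplified sketch merges):

```python
# Sketch: a length-prefixed ring buffer where the sender advances the
# write edge and the receiver advances the trailing edge to free space.

class RingBuffer:
    def __init__(self, size):
        self.buf = [None] * size
        self.write_edge = 0     # end of the last message written
        self.trailing_edge = 0  # start of the oldest unconsumed message

    def send(self, message):
        for item in [len(message)] + list(message):  # header: length
            self.buf[self.write_edge % len(self.buf)] = item
            self.write_edge += 1

    def receive(self):
        # poll where the next message's header would appear
        header = self.buf[self.trailing_edge % len(self.buf)]
        if header is None:
            return None
        start = self.trailing_edge + 1
        message = [self.buf[(start + i) % len(self.buf)] for i in range(header)]
        # advancing the trailing edge signals the sender the space is free
        for i in range(header + 1):
            self.buf[(self.trailing_edge + i) % len(self.buf)] = None
        self.trailing_edge += header + 1
        return message

ring = RingBuffer(size=16)
ring.send("hi")
assert ring.receive() == ["h", "i"]
assert ring.receive() is None
```

The key property, as in the abstract, is that neither side needs an explicit acknowledgment message: the receiver learns of new data by polling for a header at its edge, and the sender learns of freed space by watching the receiver's trailing edge.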
Publication number: 20180357095
Abstract: The described technology provides a system and method for sequential execution of one or more operation segments in an asynchronous event driven architecture. One or more operation segments may be associated and grouped into an activity of operation segments. The operation segments of an activity may be sequentially executed based on a queue structure of references to operation segments stored in a context memory associated with the activity. Any initiated operation segment may be placed on the queue structure upon completion of an associated I/O action.
Type: Application
Filed: August 21, 2018
Publication date: December 13, 2018
Inventors: Chen Fu, John Grant Bennett
Patent number: 10067786
Abstract: The described technology provides a system and method for sequential execution of one or more operation segments in an asynchronous event driven architecture. One or more operation segments may be associated and grouped into an activity of operation segments. The operation segments of an activity may be sequentially executed based on a queue structure of references to operation segments stored in a context memory associated with the activity. Any initiated operation segment may be placed on the queue structure upon completion of an associated I/O action.
Type: Grant
Filed: June 2, 2016
Date of Patent: September 4, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Chen Fu, John Grant Bennett
Publication number: 20180143939
Abstract: Two computing devices utilizing remote direct memory access establish a send ring buffer on a sending computer and a receive ring buffer on a receiving computer that mirror one another. A message is copied into the ring buffer on the sending computer, and a write edge pointer is updated to identify its end. The message is then copied, by the sending computer, from its ring buffer into the ring buffer on the receiving computer. A process executing on the receiving computer periodically checks at its write edge pointer and, upon detecting the new message's header, updates the location identified by that pointer. Once the new message is copied out of the ring buffer at the receiving computer, a trailing edge pointer is updated; a process executing at the sending computer monitors the trailing edge pointer of the receiving computer and updates its own trailing edge pointer accordingly.
Type: Application
Filed: November 18, 2016
Publication date: May 24, 2018
Inventors: Chen Fu, John Grant Bennett
Publication number: 20170351540
Abstract: The described technology provides a system and method for sequential execution of one or more operation segments in an asynchronous event driven architecture. One or more operation segments may be associated and grouped into an activity of operation segments. The operation segments of an activity may be sequentially executed based on a queue structure of references to operation segments stored in a context memory associated with the activity. Any initiated operation segment may be placed on the queue structure upon completion of an associated I/O action.
Type: Application
Filed: June 2, 2016
Publication date: December 7, 2017
Inventors: Chen Fu, John Grant Bennett
Publication number: 20150003796
Abstract: The technology provides embodiments for a waveguide including gaps which turn the direction of light. Each of a plurality of planes located within the waveguide includes a group of gaps, so that each gapped plane partially reflects light received within a first angle range out of the waveguide and transmits light received within a second angle range down the waveguide. In some examples, the waveguide is formed by joining optically transparent sections, and each group of gaps is formed in a surface of each optically transparent section which becomes a joining surface when bonded with the abutting flat surface of an adjacent section. The waveguide may be used in displays, and in particular in near-eye displays (NEDs).
Type: Application
Filed: June 27, 2013
Publication date: January 1, 2015
Inventor: John Grant Bennett
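The angle-selective behavior of a gap can be grounded in total internal reflection at a glass-air interface (the refractive indices here are assumed for illustration and are not from the publication): rays striking the gap more steeply than the critical angle are reflected and stay in or are turned by the guide, while shallower rays cross the gap and continue down the waveguide, which is one way a plane of gaps can sort light into the two angle ranges the abstract describes.

```python
import math

n_glass, n_air = 1.5, 1.0   # assumed indices for illustration
# critical angle for total internal reflection at a glass-air gap
critical = math.degrees(math.asin(n_air / n_glass))   # about 41.8 degrees

def at_gap(angle_from_normal_deg):
    """Classify a ray striking a glass-air gap inside the guide."""
    if angle_from_normal_deg > critical:
        return "reflected"      # totally internally reflected at the gap
    return "transmitted"        # crosses the gap, continues down the guide

assert at_gap(60) == "reflected"
assert at_gap(20) == "transmitted"
```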