Patents by Inventor Zhongying Zhang

Zhongying Zhang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11956585
    Abstract: A Bluetooth earphone includes an antenna and a circuit board. The circuit board includes a first grounding branch and a second grounding branch. The first grounding branch is connected in series to a first switch, and the second grounding branch is connected in series to a second switch. When the first switch is on, the first grounding branch serves as a current return path of the antenna. When the second switch is on, the second grounding branch serves as a current return path of the antenna. By controlling the on/off states of the first switch and the second switch, the Bluetooth earphone can switch the antenna's ground structure and select different current return paths, thereby switching the antenna's radiation pattern.
    Type: Grant
    Filed: October 14, 2020
    Date of Patent: April 9, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Junhong Zhang, Yi Fan, Zhongying Zhang
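    The switching behavior described in this abstract can be summarized with a tiny behavioral model. This is a hypothetical sketch only; the class, method, and pattern names below are invented for illustration and are not the implementation in the patent.

    ```python
    class AntennaGroundSwitch:
        """Behavioral model of the two-switch ground structure described above.

        Exactly one grounding branch is closed at a time, so toggling the
        switches selects the antenna's current return path and, with it, the
        radiation pattern.
        """

        PATTERNS = {1: "pattern_A", 2: "pattern_B"}  # hypothetical pattern labels

        def __init__(self):
            self.first_switch_on = True    # first grounding branch closed by default
            self.second_switch_on = False

        def select_branch(self, branch: int) -> str:
            """Close the requested grounding branch and open the other one."""
            if branch not in (1, 2):
                raise ValueError("branch must be 1 or 2")
            self.first_switch_on = branch == 1
            self.second_switch_on = branch == 2
            return self.PATTERNS[branch]


    model = AntennaGroundSwitch()
    print(model.select_branch(2))  # switches the return path -> "pattern_B"
    ```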
  • Patent number: 10860319
    Abstract: An apparatus and method for early page address prediction. For example, one embodiment of a processor comprises: an instruction fetch circuit to fetch a load instruction; a decoder to decode the load instruction; execution circuitry to execute the load instruction to perform a load operation, the execution circuitry including an address generation unit (AGU) to generate an effective address to be used for the load operation; and early page prediction (EPP) circuitry to use one or more attributes associated with the load instruction to predict a physical page address for the load instruction simultaneously with the AGU generating the effective address and/or prior to generation of the effective address.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: December 8, 2020
    Assignee: Intel Corporation
    Inventors: Mark Dechene, Manjunath Shevgoor, Faruk Guvenilir, Zhongying Zhang, Jonathan Perry
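    The early page prediction (EPP) mechanism above lends itself to a small software model: a table indexed by an attribute of the load (here its instruction pointer) remembers the physical page the load last touched, so a prediction is available before, or in parallel with, effective-address generation and is verified once the real translation is known. This is a minimal sketch under those assumptions; the table organization, indexing, and names are illustrative, not the patented design.

    ```python
    class EarlyPagePredictor:
        """Toy model of early page prediction (EPP) for load instructions.

        A direct-mapped table, indexed by a hash of the load's instruction
        pointer, remembers the last physical page that load accessed. The
        prediction can be consulted before or alongside address generation;
        it must be verified once the real translation is available.
        """

        def __init__(self, entries: int = 256):
            self.entries = entries
            self.table = [None] * entries  # predicted physical page numbers

        def _index(self, load_ip: int) -> int:
            return load_ip % self.entries

        def predict(self, load_ip: int):
            """Return a predicted physical page number, or None if no history."""
            return self.table[self._index(load_ip)]

        def verify_and_train(self, load_ip: int, actual_phys_page: int) -> bool:
            """Check a prediction against the real translation and retrain."""
            idx = self._index(load_ip)
            correct = self.table[idx] == actual_phys_page
            self.table[idx] = actual_phys_page
            return correct


    epp = EarlyPagePredictor()
    epp.verify_and_train(load_ip=0x401000, actual_phys_page=0x7F3A2)
    assert epp.predict(0x401000) == 0x7F3A2  # the next instance of this load predicts early
    ```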
  • Patent number: 10853078
    Abstract: A processor includes a store buffer to store store instructions to be processed to store data in main memory, a load buffer to store load instructions to be processed to load data from main memory, and a loop invariant code motion (LICM) protection table (LPT) coupled to the store buffer and the load buffer. The LPT tracks information used to compare the address of a store or snoop microoperation against LPT entries and to re-load the load microoperation of a matching entry.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: December 1, 2020
    Assignee: Intel Corporation
    Inventors: Vineeth Mekkat, Mark Dechene, Zhongying Zhang, John Faistl, Janghaeng Lee, Hou-Jen Ko, Sebastian Winkel, Oleg Margulis
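    The LPT above can be modeled in a few lines: each entry guards a load that was hoisted out of a loop, and a later store or snoop whose address matches forces that load to be re-executed. This is an illustrative sketch; the entry fields, matching granularity, and names are assumptions, not the patented structure.

    ```python
    class LICMProtectionTable:
        """Toy model of the LICM protection table (LPT) described above.

        Each entry guards a load that binary translation hoisted out of a
        loop as loop-invariant. If a store or snoop later touches the guarded
        address, the hoisted value is no longer safe and the load must be
        re-issued (re-loaded).
        """

        def __init__(self):
            self.entries = {}  # guarded address -> destination register of the hoisted load

        def protect(self, address: int, dest_reg: str) -> None:
            """Record a hoisted (loop-invariant) load so it can be checked later."""
            self.entries[address] = dest_reg

        def observe_store_or_snoop(self, address: int):
            """Compare an incoming store/snoop address against LPT entries.

            Returns the destination register whose load must be re-loaded, or
            None if no protected load was disturbed.
            """
            return self.entries.pop(address, None)


    lpt = LICMProtectionTable()
    lpt.protect(address=0x1000, dest_reg="r5")        # load hoisted above the loop
    reload_reg = lpt.observe_store_or_snoop(0x1000)   # a store aliases the guarded address
    print(reload_reg)  # -> "r5": the hoisted load must be re-loaded
    ```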
  • Publication number: 20200310798
    Abstract: An integrated circuit with support for memory atomicity comprises a processor core. The processor core comprises a data cache unit (DCU), a store buffer (SB), a retirement unit, and memory atomicity facilities. The memory atomicity facilities are configured, when engaged, to (a) add an SB entry to the SB, in response to the processor core executing a store instruction that is part of an atomic region of code; (b) cause the SB entry to become senior, in response to the retirement unit retiring the store instruction; and (c) cause the SB entry to become walk enabled, in response to the retirement unit committing a transaction associated with the atomic region. Other embodiments are described and claimed.
    Type: Application
    Filed: March 28, 2019
    Publication date: October 1, 2020
    Inventors: Manjunath Shevgoor, Mark Joseph Dechene, Vineeth Mekkat, Jason Michael Agron, Zhongying Zhang
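    Steps (a) through (c) of the abstract describe a small life cycle for a store-buffer entry inside an atomic region. The sketch below models that ordering (allocate, then senior on retirement, then walk-enabled on transaction commit) with invented state and method names; it illustrates the sequencing, not the actual hardware interfaces.

    ```python
    from enum import Enum, auto

    class SBState(Enum):
        SPECULATIVE = auto()   # store executed inside the atomic region
        SENIOR = auto()        # the store instruction has retired
        WALK_ENABLED = auto()  # the transaction committed; the store may drain to memory

    class StoreBufferEntry:
        """Toy model of a store-buffer entry in an atomic region, per (a)-(c) above."""

        def __init__(self, address: int, data: int):
            self.address, self.data = address, data
            self.state = SBState.SPECULATIVE   # (a) entry added when the store executes

        def on_retire(self):
            self.state = SBState.SENIOR        # (b) retirement makes the entry senior

        def on_transaction_commit(self):
            self.state = SBState.WALK_ENABLED  # (c) commit allows the store-buffer walk to drain it


    entry = StoreBufferEntry(address=0x2000, data=42)
    entry.on_retire()
    entry.on_transaction_commit()
    assert entry.state is SBState.WALK_ENABLED  # only now may the store become globally visible
    ```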
  • Publication number: 20200201645
    Abstract: A processor includes a store buffer to store store instructions to be processed to store data in main memory, a load buffer to store load instructions to be processed to load data from main memory, and a loop invariant code motion (LICM) protection table (LPT) coupled to the store buffer and the load buffer. The LPT tracks information used to compare the address of a store or snoop microoperation against LPT entries and to re-load the load microoperation of a matching entry.
    Type: Application
    Filed: December 21, 2018
    Publication date: June 25, 2020
    Inventors: Vineeth Mekkat, Mark Dechene, Zhongying Zhang, John Faistl, Janghaeng Lee, Hou-Jen Ko, Sebastian Winkel, Oleg Margulis
  • Publication number: 20190303150
    Abstract: An apparatus and method for early page address prediction. For example, one embodiment of a processor comprises: an instruction fetch circuit to fetch a load instruction; a decoder to decode the load instruction; execution circuitry to execute the load instruction to perform a load operation, the execution circuitry including an address generation unit (AGU) to generate an effective address to be used for the load operation; and early page prediction (EPP) circuitry to use one or more attributes associated with the load instruction to predict a physical page address for the load instruction simultaneously with the AGU generating the effective address and/or prior to generation of the effective address.
    Type: Application
    Filed: March 30, 2018
    Publication date: October 3, 2019
    Inventors: Mark Dechene, Manjunath Shevgoor, Faruk Guvenilir, Zhongying Zhang, Jonathan Perry
  • Patent number: 10228956
    Abstract: In one implementation, a processing device is provided that includes a memory to store instructions and a processor core to execute the instructions. The processor core is to receive a sequence of instructions reordered by a binary translator for execution. A first load of the sequence of instructions is identified. The first load references a memory location that stores a data item to be loaded. An occurrence of a second load is detected. The second load is to access the memory location subsequent to execution of the first load. A protection field in the first load is enabled based on the detected occurrence of the second load. The enabled protection field indicates that the first load is to be checked for aliasing associated with the memory location with respect to a subsequent store instruction. The second load is eliminated based on the enabling of the protection field.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: March 12, 2019
    Assignee: Intel Corporation
    Inventors: Vineeth Mekkat, Mark J. Dechene, Zhongying Zhang, Jason Agron, Sebastian Winkel
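    The flow in this abstract reduces to three events: the translator protects the first load, eliminates the redundant second load, and the hardware then checks later stores against the protected address. The sketch below is a minimal software analogue under those assumptions; the structure, granularity, and names are illustrative, not the claimed implementation.

    ```python
    class ProtectedLoadTracker:
        """Toy model of redundant-load elimination guarded by a protection field.

        The binary translator keeps the first load to an address, sets its
        protection field, and removes the later duplicate load. At run time,
        a store to a protected address is an aliasing violation: the value the
        eliminated load would have read may have changed, so recovery is needed.
        """

        def __init__(self):
            self.protected_addresses = set()

        def protect_first_load(self, address: int) -> None:
            """A later load will reuse this load's value, so protect it."""
            self.protected_addresses.add(address)

        def eliminate_second_load(self, address: int) -> bool:
            """The duplicate load is removed only if the first load is protected."""
            return address in self.protected_addresses

        def check_store(self, address: int) -> bool:
            """Return True if the store aliases a protected load (violation detected)."""
            return address in self.protected_addresses


    tracker = ProtectedLoadTracker()
    tracker.protect_first_load(0x3000)
    assert tracker.eliminate_second_load(0x3000)  # second load can be dropped
    assert tracker.check_store(0x3000)            # a later store to 0x3000 -> aliasing detected
    ```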
  • Publication number: 20180095765
    Abstract: In one implementation, a processing device is provided that includes a memory to store instructions and a processor core to execute the instructions. The processor core is to receive a sequence of instructions reordered by a binary translator for execution. A first load of the sequence of instructions is identified. The first load references a memory location that stores a data item to be loaded. An occurrence of a second load is detected. The second load is to access the memory location subsequent to execution of the first load. A protection field in the first load is enabled based on the detected occurrence of the second load. The enabled protection field indicates that the first load is to be checked for aliasing associated with the memory location with respect to a subsequent store instruction. The second load is eliminated based on the enabling of the protection field.
    Type: Application
    Filed: September 30, 2016
    Publication date: April 5, 2018
    Inventors: Vineeth Mekkat, Mark J. Dechene, Zhongying Zhang, Jason Agron, Sebastian Winkel
  • Patent number: 9336156
    Abstract: A processing device and method for cache control including tracking updates to the line state of a cache superline are described. In response to a request pertaining to a superline, a cache controller of the processing device can perform one or more read-modify-write (RMW) operations to (a) a line state vector of a line state array and (b) a counter of the line state array. Based on a determination that one or more requests to the superline have completed, the line state vector from the line state array can be written to a tag array. The cache controller can track pending line state updates to a superline outside of the tag array, and a line state update can occur in the cache controller, rather than awaiting completion of all outstanding operations on a superline. Updates to multiple line states can be maintained simultaneously, and up-to-date ECCs computed.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: May 10, 2016
    Assignee: Intel Corporation
    Inventors: Zhongying Zhang, Erik G. Hallnor, Stanley S. Kulick, Jeffrey L. Miller
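    The tracking scheme above behaves like a read-modify-write side structure: a per-superline line-state vector plus a counter of outstanding requests, with the vector published to the tag array only when the counter drains. The sketch below is a hypothetical software analogue of that behavior; the state encoding and sizes are arbitrary choices, not the patented design.

    ```python
    class SuperlineStateTracker:
        """Toy model of pending line-state updates for a cache superline.

        While requests to a superline are outstanding, its line states are
        updated with read-modify-write operations on a side array (a state
        vector plus an outstanding-request counter). Only when every request
        has completed is the vector written back to the tag array, so several
        line-state updates can be merged without waiting for each other.
        """

        def __init__(self, lines_per_superline: int = 4):
            self.state_vector = ["I"] * lines_per_superline  # MESI-style states
            self.pending = 0
            self.tag_array_vector = list(self.state_vector)

        def begin_request(self) -> None:
            self.pending += 1

        def update_line_state(self, line: int, new_state: str) -> None:
            """Read-modify-write on the line-state vector while requests are in flight."""
            self.state_vector[line] = new_state

        def complete_request(self) -> None:
            """Decrement the counter; the last completion publishes the vector to the tag array."""
            self.pending -= 1
            if self.pending == 0:
                self.tag_array_vector = list(self.state_vector)


    tracker = SuperlineStateTracker()
    tracker.begin_request(); tracker.begin_request()
    tracker.update_line_state(0, "M"); tracker.update_line_state(2, "S")
    tracker.complete_request()
    tracker.complete_request()        # last completion writes the vector to the tag array
    print(tracker.tag_array_vector)   # ['M', 'I', 'S', 'I']
    ```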
  • Patent number: 9311241
    Abstract: A method is described that includes performing the following for a transactional operation in response to a request from a processing unit that is directed to a cache identifying a cache line: reading the cache line; if the cache line is in a Modified cache coherency protocol state, forwarding the cache line to circuitry that will cause the cache line to be written to deeper storage; and changing another instance of the cache line that is available to the processing unit for the transactional operation to an Exclusive cache coherency state.
    Type: Grant
    Filed: December 29, 2012
    Date of Patent: April 12, 2016
    Assignee: Intel Corporation
    Inventors: Ravi Rajwar, Robert Chappell, Zhongying Zhang, Jason Bessette
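    The abstract describes a single coherence transition; the sketch below spells it out procedurally with invented helper names. It is one illustrative reading of the text, not the patented logic.

    ```python
    def transactional_read(cache_line, write_back_to_deeper_storage):
        """Toy model of the transactional-read handling described above.

        If the requested line is Modified, its dirty data is forwarded so it
        can be written to deeper storage (for example, a lower cache level),
        and the copy used by the processing unit for the transaction becomes
        Exclusive. The pre-transaction value thus survives outside the
        speculative domain if the transaction later aborts.
        """
        if cache_line["state"] == "M":
            write_back_to_deeper_storage(cache_line["data"])
            cache_line["state"] = "E"
        return cache_line["data"]


    line = {"state": "M", "data": 0xDEAD}
    deeper_storage = []
    transactional_read(line, deeper_storage.append)
    print(line["state"], deeper_storage)  # 'E' [57005]: dirty data preserved in deeper storage
    ```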
  • Patent number: 9298632
    Abstract: In one embodiment, a cache memory can store a plurality of cache lines, each including a write-set field to store a write-set indicator to indicate whether data has been speculatively written during a transaction of a transactional memory, and a read-set field to store a plurality of read-set indicators each to indicate whether a corresponding thread has read the data before the transaction has committed. A compression filter associated with the cache memory includes a first filter storage to store a representation of a cache line address of a cache line read by a first thread of threads before the transaction has committed. Other embodiments are described and claimed.
    Type: Grant
    Filed: June 28, 2012
    Date of Patent: March 29, 2016
    Assignee: Intel Corporation
    Inventors: Robert S. Chappell, Ravi Rajwar, Zhongying Zhang, Jason A. Bessette
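    A minimal software analogue of the per-line metadata and the read-set compression filter in this abstract is sketched below. The filter here is a simple hashed bit vector; the hash, sizes, and names are arbitrary illustrative choices rather than the patented parameters.

    ```python
    class TransactionalLine:
        """Toy model of a cache line with transactional read-set/write-set metadata."""

        def __init__(self, num_threads: int = 4):
            self.write_set = False                 # speculatively written during the transaction?
            self.read_set = [False] * num_threads  # which threads read the line before commit?

    class ReadSetFilter:
        """Toy compression filter: a hashed bit vector of line addresses read by one thread."""

        def __init__(self, bits: int = 64):
            self.bits = bits
            self.vector = 0

        def record(self, line_address: int) -> None:
            self.vector |= 1 << (hash(line_address) % self.bits)

        def may_contain(self, line_address: int) -> bool:
            """False means definitely not read; True means possibly read (conservative)."""
            return bool(self.vector & (1 << (hash(line_address) % self.bits)))


    line = TransactionalLine()
    line.read_set[0] = True     # thread 0 read this line before the transaction committed
    flt = ReadSetFilter()
    flt.record(0x4000)
    print(flt.may_contain(0x4000), flt.may_contain(0x9999))
    # True False: a hashed filter can give false positives, never false negatives
    ```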
  • Publication number: 20140281251
    Abstract: Technologies for tracking updates to the line state of a cache superline are described. In response to a request pertaining to a superline, one or more read-modify-write (RMW) operations to (a) a line state vector of a line state array and (b) a counter of the line state array can be performed. Based on a determination that one or more requests to the superline have completed, the line state vector from the line state array can be written to a tag array.
    Type: Application
    Filed: March 14, 2013
    Publication date: September 18, 2014
    Inventors: Zhongying Zhang, Erik G. Hallnor, Stanley S. Kulick, Jeffrey L. Miller
  • Publication number: 20140189241
    Abstract: A method is described that includes performing the following for a transactional operation in response to a request from a processing unit that is directed to a cache identifying a cache line: reading the cache line; if the cache line is in a Modified cache coherency protocol state, forwarding the cache line to circuitry that will cause the cache line to be written to deeper storage; and changing another instance of the cache line that is available to the processing unit for the transactional operation to an Exclusive cache coherency state.
    Type: Application
    Filed: December 29, 2012
    Publication date: July 3, 2014
    Inventors: Ravi Rajwar, Robert Chappell, Zhongying Zhang, Jason Bessette
  • Publication number: 20140006698
    Abstract: In one embodiment, a cache memory can store a plurality of cache lines, each including a write-set field to store a write-set indicator to indicate whether data has been speculatively written during a transaction of a transactional memory, and a read-set field to store a plurality of read-set indicators each to indicate whether a corresponding thread has read the data before the transaction has committed. A compression filter associated with the cache memory includes a first filter storage to store a representation of a cache line address of a cache line read by a first thread of threads before the transaction has committed. Other embodiments are described and claimed.
    Type: Application
    Filed: June 28, 2012
    Publication date: January 2, 2014
    Inventors: Robert S. Chappell, Ravi Rajwar, Zhongying Zhang, Jason A. Bessette
  • Patent number: 7603527
    Abstract: Methods and apparatus for resolving false dependencies associated with speculatively executing load instructions in a processor core are described. In one embodiment, physical addresses of a load operation and a store operation are compared in response to a determination that the load operation may be potentially dependent on the store operation. Other embodiments are also described.
    Type: Grant
    Filed: September 29, 2006
    Date of Patent: October 13, 2009
    Assignee: Intel Corporation
    Inventors: Sebastien Hily, Zhongying Zhang, Per Hammarlund
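    The central check in this abstract is a physical-address comparison performed only when a load has been flagged as possibly dependent on an older store. The function below is a hypothetical sketch of that disambiguation step; the cause of the initial flag and the return strings are assumptions for illustration.

    ```python
    def resolve_predicted_dependency(load_phys_addr: int, store_phys_addr: int) -> str:
        """Toy model of false-dependency resolution for a speculatively executed load.

        The load was flagged as potentially dependent on an older store.
        Comparing full physical addresses disambiguates the case: if they
        differ, the dependency was false and the load need not wait for, or
        forward from, the store.
        """
        if load_phys_addr == store_phys_addr:
            return "true dependency: order the load after the store / forward its data"
        return "false dependency: the load may proceed independently"


    print(resolve_predicted_dependency(0x7F3A2010, 0x7F3A2010))  # true dependency
    print(resolve_predicted_dependency(0x7F3A2010, 0x12345010))  # same offset, different page -> false
    ```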
  • Publication number: 20080082765
    Abstract: Methods and apparatus for resolving false dependencies associated with speculatively executing load instructions in a processor core are described. In one embodiment, physical addresses of a load operation and a store operation are compared in response to a determination that the load operation may be potentially dependent on the store operation. Other embodiments are also described.
    Type: Application
    Filed: September 29, 2006
    Publication date: April 3, 2008
    Inventors: Sebastien Hily, Zhongying Zhang, Per Hammarlund
  • Publication number: 20080072019
    Abstract: A technique to filter bogus instructions from a processor pipeline. At least one embodiment of the invention detects a bogus event and removes only the instructions in the processor that correspond to the bogus event, without affecting instructions that do not correspond to the bogus event.
    Type: Application
    Filed: September 19, 2006
    Publication date: March 20, 2008
    Inventors: Avinash Sodani, Ranjani Iyer, Sean Mirkes, Sebastien Hily, David Koufaty, Stephan Jourdan, Zhongying Zhang
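    One way to read this abstract is selective flushing: each in-flight instruction carries a tag for the event it was fetched under, and when an event is declared bogus only the matching instructions are removed. The tags and structures below are invented for illustration and are not the patented mechanism.

    ```python
    def flush_bogus(pipeline, bogus_event_id):
        """Toy model of filtering bogus instructions from a processor pipeline.

        Each in-flight instruction carries an optional event tag. When that
        event is declared bogus, only the tagged instructions are removed;
        unrelated instructions keep flowing through the pipeline.
        """
        return [insn for insn in pipeline if insn.get("event") != bogus_event_id]


    pipeline = [
        {"op": "add", "event": None},
        {"op": "load", "event": 7},   # fetched under event 7
        {"op": "mul", "event": None},
    ]
    pipeline = flush_bogus(pipeline, bogus_event_id=7)
    print([insn["op"] for insn in pipeline])  # ['add', 'mul']: only the bogus-event load is removed
    ```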
  • Publication number: 20080059753
    Abstract: Methods and apparatus to redispatch an operation for execution in a processor are described. In one embodiment, a virtual address corresponding to a store instruction may be reselected for translation into a physical address in response to remaining unselected during a previous selection process. Other embodiments are also described.
    Type: Application
    Filed: August 30, 2006
    Publication date: March 6, 2008
    Inventors: Sebastien Hily, Zhongying Zhang, Ranjani Iyer, Stephan Jourdan, Per Hammarlund
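    The redispatch idea amounts to arbitration with retry: when several store virtual addresses compete for translation and one is not selected, it is fed back and competes again in a later cycle. The sketch below assumes a simple FIFO arbiter and a fixed number of translation ports per cycle, both of which are illustrative assumptions rather than details from the publication.

    ```python
    from collections import deque

    def translate_with_redispatch(pending_store_addresses, ports_per_cycle: int = 1):
        """Toy model of reselecting store virtual addresses for translation.

        Only ports_per_cycle addresses can be translated each cycle. Addresses
        that remain unselected are not dropped; they are redispatched and
        compete again the following cycle, so every store is eventually
        translated into a physical address.
        """
        queue = deque(pending_store_addresses)
        schedule = []  # (cycle, virtual_address) translation events
        cycle = 0
        while queue:
            for _ in range(min(ports_per_cycle, len(queue))):
                schedule.append((cycle, queue.popleft()))
            cycle += 1
        return schedule


    print(translate_with_redispatch([0x10, 0x20, 0x30]))
    # [(0, 16), (1, 32), (2, 48)]: unselected addresses are reselected in later cycles
    ```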
  • Publication number: 20050216673
    Abstract: A method and device for determining an attribute associated with a locked load instruction and selecting a lock protocol based upon the attribute of the locked load instruction. Also disclosed is a method for concurrently executing the respective lock sequences associated with multiple threads of a processing device.
    Type: Application
    Filed: May 17, 2005
    Publication date: September 29, 2005
    Inventors: Harish Kumar, Aravindh Baktha, Mike Upton, KS Venkatraman, Herbert Hum, Zhongying Zhang
  • Patent number: 6922745
    Abstract: A method and device for determining an attribute associated with a locked load instruction and selecting a lock protocol based upon the attribute of the locked load instruction. Also disclosed is a method for concurrently executing the respective lock sequences associated with multiple threads of a processing device.
    Type: Grant
    Filed: May 2, 2002
    Date of Patent: July 26, 2005
    Assignee: Intel Corporation
    Inventors: Harish Kumar, Aravindh Baktha, Mike D. Upton, KS Venkatraman, Herbert H. Hum, Zhongying Zhang
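    The selection step shared by this patent and the related publication above can be shown as a simple dispatch on the locked load's attributes. The particular attributes and protocol names below are assumptions chosen for illustration, not the claimed set.

    ```python
    def select_lock_protocol(memory_type: str, splits_cache_line: bool) -> str:
        """Toy model of choosing a lock protocol from a locked load's attributes.

        A lightweight, cache-internal protocol can service a lock that hits
        cacheable memory within a single line; locks that are uncacheable or
        split across cache lines fall back to a heavier global bus lock.
        (Attribute names and protocol choices here are illustrative only.)
        """
        if memory_type == "writeback" and not splits_cache_line:
            return "cache lock"   # serviced locally within the cache
        return "bus lock"         # conservative global locking


    print(select_lock_protocol("writeback", splits_cache_line=False))    # cache lock
    print(select_lock_protocol("uncacheable", splits_cache_line=False))  # bus lock
    ```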