Patents by Inventor Victor W. Lee
Victor W. Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240079778
Abstract: An electronic device may be provided with an antenna having a resonating element formed from a segment of peripheral conductive housing structures. A speaker may be aligned with first openings in the segment. A vent may be aligned with second openings in the segment. A connector may protrude through the segment. A trace combiner for the antenna may be patterned onto the speaker and may be coupled to the segment. Tuners for the antenna may be disposed on first and second flexible printed circuits that extend along opposing sides of the connector. The tuners may be controlled through the speaker. The second flexible printed circuit may extend along the vent. The vent may have a vent cowling with a cut-out region next to the tuner on the second flexible printed circuit.
Type: Application
Filed: August 30, 2023
Publication date: March 7, 2024
Inventors: Enrique Ayala Vazquez, Ming-Ju Tsai, Yiren Wang, Yuan Tao, Hao Xu, Sidharath Jain, Haozhan Tian, Yuancheng Xu, Eric W. Bates, Peter A. Dvorak, Harlan S. Dannenberg, Rees S. Parker, Obinna O. Onyemepu, Victor C. Lee, Han Wang, Hongfei Hu
-
Publication number: 20240079777
Abstract: An electronic device may be provided with an antenna having a resonating element formed from a segment of peripheral conductive housing structures. A speaker may be aligned with first openings in the segment. A vent may be aligned with second openings in the segment. A connector may protrude through the segment. A trace combiner for the antenna may be patterned onto the speaker and may be coupled to the segment. Tuners for the antenna may be disposed on first and second flexible printed circuits that extend along opposing sides of the connector. The tuners may be controlled through the speaker. The second flexible printed circuit may extend along the vent. The vent may have a vent cowling with a cut-out region next to the tuner on the second flexible printed circuit.
Type: Application
Filed: August 30, 2023
Publication date: March 7, 2024
Inventors: Yiren Wang, Yuan Tao, Hao Xu, Hongfei Hu, Enrique Ayala Vazquez, Ming-Ju Tsai, Sidharath Jain, Haozhan Tian, Yuancheng Xu, Harlan S. Dannenberg, Eric W. Bates, Peter A. Dvorak, Nicole E. Cazares, Obinna O. Onyemepu, Victor C. Lee, Han Wang
-
Publication number: 20240079785
Abstract: An electronic device may be provided with an antenna having a resonating element formed from a segment of peripheral conductive housing structures. A speaker may be aligned with first openings in the segment. A vent may be aligned with second openings in the segment. A connector may protrude through the segment. A trace combiner for the antenna may be patterned onto the speaker and may be coupled to the segment. Tuners for the antenna may be disposed on first and second flexible printed circuits that extend along opposing sides of the connector. The tuners may be controlled through the speaker. The second flexible printed circuit may extend along the vent. The vent may have a vent cowling with a cut-out region next to the tuner on the second flexible printed circuit.
Type: Application
Filed: August 30, 2023
Publication date: March 7, 2024
Inventors: Yiren Wang, Yuan Tao, Hao Xu, Yuancheng Xu, Enrique Ayala Vazquez, Nikolaj P. Kammersgaard, Eric W. Bates, Peter A. Dvorak, Victor C. Lee, Han Wang
-
Patent number: 10884957
Abstract: Techniques and mechanisms for performing in-memory computations with circuitry having a pipeline architecture. In an embodiment, various stages of a pipeline each include a respective input interface and a respective output interface, distinct from said input interface, to couple to different respective circuitry. These stages each further include a respective array of memory cells and circuitry to perform operations based on data stored by said array. A result of one such in-memory computation may be communicated from one pipeline stage to a respective next pipeline stage for use in further in-memory computations. Control circuitry, interconnect circuitry, configuration circuitry or other logic of the pipeline precludes operation of the pipeline as a monolithic, general-purpose memory device. In other embodiments, stages of the pipeline each provide a different respective layer of a neural network.
Type: Grant
Filed: October 15, 2018
Date of Patent: January 5, 2021
Assignee: Intel Corporation
Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor W. Lee, Abhishek Sharma, Huseyin E. Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young
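The staged in-memory pipeline described in this abstract can be illustrated in software. Below is a minimal Python sketch, with NumPy arrays standing in for each stage's memory-cell array and a matrix-vector product plus ReLU standing in for the per-stage compute circuitry; these particular operations and class names are illustrative choices, not details taken from the patent.

```python
import numpy as np

class PipelineStage:
    """Models one pipeline stage: a memory-cell array holding stored
    data plus compute circuitry that operates on that data."""
    def __init__(self, weights):
        self.weights = np.asarray(weights, dtype=float)  # data held in the stage's array

    def compute(self, x):
        # In-memory computation: combine the input with the stored
        # array (here a matrix-vector product followed by a ReLU).
        return np.maximum(self.weights @ x, 0.0)

def run_pipeline(stages, x):
    """Each stage's result is forwarded to the next stage, so the
    chain behaves like successive layers of a neural network."""
    for stage in stages:
        x = stage.compute(x)
    return x

stages = [PipelineStage([[1.0, -1.0], [0.5, 0.5]]),
          PipelineStage([[2.0, 0.0]])]
out = run_pipeline(stages, np.array([3.0, 1.0]))
print(out)
```

Each stage reads only from its own stored array, mirroring the claim that the stages couple to different respective circuitry rather than acting as one monolithic memory.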
-
Patent number: 10775873
Abstract: In an embodiment, a processor includes: a plurality of first cores to independently execute instructions, each of the plurality of first cores including a plurality of counters to store performance information; at least one second core to perform memory operations; and a power controller to receive performance information from at least some of the plurality of counters, determine a workload type executed on the processor based at least in part on the performance information, and based on the workload type dynamically migrate one or more threads from one or more of the plurality of first cores to the at least one second core for execution during a next operation interval. Other embodiments are described and claimed.
Type: Grant
Filed: February 28, 2019
Date of Patent: September 15, 2020
Assignee: Intel Corporation
Inventors: Victor W. Lee, Edward T. Grochowski, Daehyun Kim, Yuxin Bai, Sheng Li, Naveen K. Mellempudi, Dhiraj D. Kalamkar
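The counter-driven thread migration in this abstract can be sketched as follows. The classification rule (a cache-miss-rate threshold) and the counter names are illustrative assumptions; the patent only specifies that a workload type is determined from performance information and that threads migrate based on it.

```python
class Core:
    def __init__(self, name):
        self.name = name
        self.threads = []
        # per-core performance counters (names are hypothetical)
        self.counters = {"instructions": 0, "cache_misses": 0}

def classify_workload(counters, miss_threshold=0.1):
    """Determine a workload type from counter values: a high miss
    rate marks the workload as memory-bound (threshold is illustrative)."""
    insts = counters["instructions"]
    if insts == 0:
        return "idle"
    miss_rate = counters["cache_misses"] / insts
    return "memory-bound" if miss_rate > miss_threshold else "compute-bound"

def migrate_if_needed(compute_core, memory_core):
    """Power-controller policy: move threads to the second (memory)
    core for the next operation interval when memory-bound."""
    if classify_workload(compute_core.counters) == "memory-bound":
        memory_core.threads.extend(compute_core.threads)
        compute_core.threads.clear()

big = Core("first-core")
mem = Core("second-core")
big.threads = ["t0", "t1"]
big.counters = {"instructions": 1000, "cache_misses": 250}
migrate_if_needed(big, mem)
print(mem.threads)
```

With a 25% miss rate the workload classifies as memory-bound, so both threads move to the memory core for the next interval.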
-
Patent number: 10579378
Abstract: An apparatus and method are described for executing instructions using a predicate register. For example, one embodiment of a processor comprises: a register set including a predicate register to store a set of predicate condition bits, the predicate condition bits specifying whether results of a particular predicated instruction sequence are to be retained or discarded; and predicate execution logic to execute a first predicate instruction to indicate a start of a new predicated instruction sequence by copying a condition value from a processor control register in the register set to the predicate register. In a further embodiment, the predicate condition bits in the predicate register are to be shifted in response to the first predicate instruction to free space within the predicate register for the new condition value associated with the new predicated instruction sequence.
Type: Grant
Filed: March 27, 2014
Date of Patent: March 3, 2020
Assignee: Intel Corporation
Inventors: Edward T. Grochowski, Victor W. Lee, Sergey A. Rozhkov, Boris A. Babayan
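The shift-and-insert behavior of the first predicate instruction can be modeled with a bitfield. The register width and the convention that the low bit governs the current sequence are assumptions for illustration.

```python
PRED_WIDTH = 8  # hypothetical predicate-register width in bits

def start_predicated_sequence(pred_reg, condition_bit):
    """Model of the first predicate instruction: shift the existing
    condition bits to free space in the predicate register, then copy
    the new condition value (from a control register) into it."""
    shifted = (pred_reg << 1) & ((1 << PRED_WIDTH) - 1)
    return shifted | (condition_bit & 1)

def results_retained(pred_reg):
    """The low predicate bit says whether the current predicated
    sequence's results are retained (1) or discarded (0)."""
    return bool(pred_reg & 1)

reg = 0b0000_0001                        # one active sequence, results retained
reg = start_predicated_sequence(reg, 0)  # nested sequence, condition false
print(bin(reg), results_retained(reg))
```

After the shift, the outer sequence's condition bit survives one position up, so ending the inner sequence could restore it with the inverse shift.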
-
Publication number: 20190265777
Abstract: In an embodiment, a processor includes: a plurality of first cores to independently execute instructions, each of the plurality of first cores including a plurality of counters to store performance information; at least one second core to perform memory operations; and a power controller to receive performance information from at least some of the plurality of counters, determine a workload type executed on the processor based at least in part on the performance information, and based on the workload type dynamically migrate one or more threads from one or more of the plurality of first cores to the at least one second core for execution during a next operation interval. Other embodiments are described and claimed.
Type: Application
Filed: February 28, 2019
Publication date: August 29, 2019
Inventors: Victor W. Lee, Edward T. Grochowski, Daehyun Kim, Yuxin Bai, Sheng Li, Naveen K. Mellempudi, Dhiraj D. Kalamkar
-
Patent number: 10372450
Abstract: Embodiments of systems, apparatuses, and methods for performing in a computer processor generation of a predicate mask based on vector comparison in response to a single instruction are described.
Type: Grant
Filed: July 11, 2017
Date of Patent: August 6, 2019
Assignee: Intel Corporation
Inventors: Victor W. Lee, Daehyun Kim, Tin-Fook Ngai, Jayashankar Bharadwaj, Albert Hartono, Sara Baghsorkhi, Nalini Vasudevan
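Generating a predicate mask from a vector comparison can be sketched as a single operation that packs per-element comparison outcomes into a bitmask. The supported comparison set here is a minimal illustration, not the instruction's actual encoding.

```python
def vcmp_mask(a, b, op):
    """Single-instruction model: compare two vectors elementwise and
    pack the outcomes into a predicate mask, with bit i set when the
    comparison holds for element i."""
    ops = {"lt": lambda x, y: x < y, "eq": lambda x, y: x == y}
    mask = 0
    for i, (x, y) in enumerate(zip(a, b)):
        if ops[op](x, y):
            mask |= 1 << i
    return mask

m = vcmp_mask([1, 5, 3, 7], [4, 4, 4, 4], "lt")
print(format(m, "04b"))  # elements 0 and 2 satisfy a < b
```

The resulting mask can then predicate later vector operations, enabling branch-free handling of data-dependent control flow.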
-
Patent number: 10234930
Abstract: In an embodiment, a processor includes: a plurality of first cores to independently execute instructions, each of the plurality of first cores including a plurality of counters to store performance information; at least one second core to perform memory operations; and a power controller to receive performance information from at least some of the plurality of counters, determine a workload type executed on the processor based at least in part on the performance information, and based on the workload type dynamically migrate one or more threads from one or more of the plurality of first cores to the at least one second core for execution during a next operation interval. Other embodiments are described and claimed.
Type: Grant
Filed: February 13, 2015
Date of Patent: March 19, 2019
Assignee: Intel Corporation
Inventors: Victor W. Lee, Edward T. Grochowski, Daehyun Kim, Yuxin Bai, Sheng Li, Naveen K. Mellempudi, Dhiraj D. Kalamkar
-
Publication number: 20190057050
Abstract: Techniques and mechanisms for performing in-memory computations with circuitry having a pipeline architecture. In an embodiment, various stages of a pipeline each include a respective input interface and a respective output interface, distinct from said input interface, to couple to different respective circuitry. These stages each further include a respective array of memory cells and circuitry to perform operations based on data stored by said array. A result of one such in-memory computation may be communicated from one pipeline stage to a respective next pipeline stage for use in further in-memory computations. Control circuitry, interconnect circuitry, configuration circuitry or other logic of the pipeline precludes operation of the pipeline as a monolithic, general-purpose memory device. In other embodiments, stages of the pipeline each provide a different respective layer of a neural network.
Type: Application
Filed: October 15, 2018
Publication date: February 21, 2019
Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor W. Lee, Abhishek Sharma, Huseyin E. Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young
-
Publication number: 20190057727
Abstract: Techniques and mechanisms for configuring a memory device to perform a sequence of in-memory computations. In an embodiment, a memory device includes a memory array and circuitry, coupled thereto, to perform data computations based on the data stored at the memory array. Based on instructions received at the memory device, control circuitry is configured to enable an automatic performance of a sequence of operations. In another embodiment, the memory device is coupled in an in-series arrangement of other memory devices to provide a pipeline circuit architecture. The memory devices each function as a respective stage of the pipeline circuit architecture, where the stages each perform respective in-memory computations. Some or all such stages each provide a different respective layer of a neural network.
Type: Application
Filed: October 15, 2018
Publication date: February 21, 2019
Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor W. Lee, Abhishek Sharma, Huseyin E. Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young
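What distinguishes this abstract from the pipeline above is the configuration step: instructions received at the device set up a sequence that the control circuitry then performs automatically. A minimal Python sketch of that idea, with the operation list and class names as illustrative assumptions:

```python
import numpy as np

class ComputeMemory:
    """A memory array plus control circuitry that, once configured
    with an instruction sequence, performs the operations automatically."""
    def __init__(self, data):
        self.array = np.asarray(data, dtype=float)
        self.program = []  # configured sequence of in-memory operations

    def configure(self, ops):
        # Instructions received at the device set up the sequence.
        self.program = list(ops)

    def run(self, x):
        # Automatic performance of the configured sequence against
        # the data stored at the memory array.
        for op in self.program:
            x = op(self.array, x)
        return x

dev = ComputeMemory([[1.0, 2.0], [3.0, 4.0]])
dev.configure([lambda a, v: a @ v,              # combine input with stored data
               lambda a, v: np.maximum(v, 0)])  # activation step
result = dev.run(np.array([1.0, -1.0]))
print(result)
```

Chaining several such devices in series would recover the pipeline arrangement the abstract mentions, with each device as one stage.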
-
Patent number: 10152325
Abstract: Instructions and logic provide pushing buffer copy and store functionality. Some embodiments include a first hardware thread or processing core, and a second hardware thread or processing core, a cache to store cache coherent data in a cache line for a shared memory address accessible by the second hardware thread or processing core. Responsive to decoding an instruction specifying a source data operand, said shared memory address as a destination operand, and one or more owner of said shared memory address, one or more execution units copy data from the source data operand to the cache coherent data in the cache line for said shared memory address accessible by said second hardware thread or processing core in the cache when said one or more owner includes said second hardware thread or processing core.
Type: Grant
Filed: February 7, 2017
Date of Patent: December 11, 2018
Assignee: Intel Corporation
Inventors: Christopher J. Hughes, Changkyu Kim, Daehyun Kim, Victor W. Lee, Jong Soo Park
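The pushing-store semantics can be sketched with a toy coherent cache: the copy into the line happens only when the designated consumer is among the line's owners. The class and method names are hypothetical; the patent describes hardware behavior, not an API.

```python
class SharedCache:
    """Toy coherent cache: lines keyed by shared memory address,
    each with an owner set and data."""
    def __init__(self):
        self.lines = {}  # addr -> {"owners": set, "data": value}

    def install(self, addr, owners, data=None):
        self.lines[addr] = {"owners": set(owners), "data": data}

    def push_store(self, addr, src_data, target_core):
        """Model of the pushing copy/store: the source data is copied
        into the cache line for `addr` only when `target_core` is
        among the line's owners, so the consumer finds it locally."""
        line = self.lines[addr]
        if target_core in line["owners"]:
            line["data"] = src_data
            return True
        return False

cache = SharedCache()
cache.install(0x1000, owners={"core1"})
ok = cache.push_store(0x1000, src_data=42, target_core="core1")
print(ok, cache.lines[0x1000]["data"])
```

Pushing data toward its consumer avoids the miss the second thread would otherwise take when it first reads the shared address.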
-
Patent number: 10146286
Abstract: In one embodiment, a processor includes a first logic to receive performance monitoring information from at least some of a plurality of cores and determine, according to a power management model, a performance state for one or more of the plurality of cores based on the performance monitoring information, and a second logic to receive the performance monitoring information and dynamically update the power management model according to a reinforcement learning process. Other embodiments are described and claimed.
Type: Grant
Filed: January 14, 2016
Date of Patent: December 4, 2018
Assignee: Intel Corporation
Inventors: Victor W. Lee, Yuxin Bai
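The two logics in this abstract map naturally onto a choose-then-learn loop. Below, a simple value-update rule stands in for the reinforcement learning process; the specific rule, reward values, and P-state names are assumptions for illustration.

```python
class RLPowerModel:
    """First logic picks a performance state from the model; second
    logic updates the model from observed performance monitoring
    information (a basic incremental value update, as an illustration)."""
    def __init__(self, p_states, alpha=0.5):
        self.q = {p: 0.0 for p in p_states}  # learned value per performance state
        self.alpha = alpha                   # learning rate

    def choose_state(self):
        # Greedy choice from the current power management model.
        return max(self.q, key=self.q.get)

    def update(self, p_state, reward):
        # Reinforcement-learning-style update from monitored performance.
        self.q[p_state] += self.alpha * (reward - self.q[p_state])

model = RLPowerModel(["P0", "P1", "P2"])
model.update("P1", reward=1.0)   # P1 delivered good performance per watt
model.update("P0", reward=0.2)
print(model.choose_state())
```

Because the model keeps updating online, the controller can track workload phase changes instead of relying on a fixed policy.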
-
Patent number: 9921832
Abstract: A vector reduction instruction with non-unit strided access pattern is received and executed by the execution circuitry of a processor. In response to the instruction, the execution circuitry performs an associative reduction operation on data elements of a first vector register. Based on values of the mask register and a current element position being processed, the execution circuitry sequentially sets one or more data elements of the first vector register to a result, which is generated by the associative reduction operation applied to both a previous data element of the first vector register and a data element of a third vector register. The previous data element is located more than one element position away from the current element position.
Type: Grant
Filed: December 28, 2012
Date of Patent: March 20, 2018
Assignee: Intel Corporation
Inventors: Albert Hartono, Jayashankar Bharadwaj, Nalini Vasudevan, Sara S. Baghsorkhi, Victor W. Lee, Daehyun Kim
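The strided reduction can be sketched with lists: each masked-on position combines the element `stride` positions back (already updated, since processing is sequential) with the corresponding element of a third vector. Addition is used as the associative operation; the function name and mask representation are illustrative.

```python
def strided_reduce_add(v1, v3, mask, stride):
    """Sketch of the vector reduction with a non-unit stride: for each
    masked-on position i, v1[i] becomes v1[i - stride] + v3[i], where
    the previous element is more than one position away. Elements are
    set sequentially, so earlier results feed later ones."""
    out = list(v1)
    for i in range(len(out)):
        if mask[i] and i >= stride:
            out[i] = out[i - stride] + v3[i]
    return out

v1 = [1, 2, 3, 4, 5, 6]
v3 = [10, 10, 10, 10, 10, 10]
mask = [1, 1, 1, 1, 1, 1]
print(strided_reduce_add(v1, v3, mask, stride=2))
```

With stride 2 the even and odd positions form two independent accumulation chains, which is the pattern this instruction accelerates.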
-
Publication number: 20180067743
Abstract: Embodiments of systems, apparatuses, and methods for performing in a computer processor generation of a predicate mask based on vector comparison in response to a single instruction are described.
Type: Application
Filed: July 11, 2017
Publication date: March 8, 2018
Applicant: Intel Corporation
Inventors: Victor W. Lee, Daehyun Kim, Tin-Fook Ngai, Jayashankar Bharadwaj, Albert Hartono, Sara Baghsorkhi, Nalini Vasudevan
-
Patent number: 9910481
Abstract: In an embodiment, a processor includes a plurality of cores to independently execute instructions, the cores including a plurality of counters to store performance information, and a power controller coupled to the plurality of cores, the power controller having a logic to receive performance information from at least some of the plurality of counters, determine a number of cores to be active and a performance state for the number of cores for a next operation interval, based at least in part on the performance information and model information, and cause the number of cores to be active during the next operation interval, the performance information associated with execution of a workload on one or more of the plurality of cores. Other embodiments are described and claimed.
Type: Grant
Filed: February 13, 2015
Date of Patent: March 6, 2018
Assignee: Intel Corporation
Inventors: Victor W. Lee, Daehyun Kim, Yuxin Bai, Shihao Ji, Sheng Li, Dhiraj D. Kalamkar, Naveen K. Mellempudi
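The controller's decision (how many cores to keep active, and at what performance state) can be sketched from utilization counters. The target-utilization and P-state thresholds below are illustrative model parameters, not values from the patent.

```python
def plan_next_interval(counters, total_cores, util_per_core=0.75):
    """Power-controller sketch: from per-core utilization counters,
    pick how many cores stay active next interval and a shared
    performance state (thresholds are illustrative)."""
    demand = sum(counters) / util_per_core        # work in core-equivalents
    active = min(total_cores, max(1, round(demand)))
    avg_util = sum(counters) / max(active, 1)
    p_state = "P0" if avg_util > 0.6 else "P1"    # higher state when cores run hot
    return active, p_state

counters = [0.9, 0.8, 0.1, 0.0]  # observed utilization of 4 cores
print(plan_next_interval(counters, total_cores=4))
```

Consolidating the same work onto fewer, faster cores lets the remaining cores be power-gated for the interval.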
-
Patent number: 9898266
Abstract: Loop vectorization methods and apparatus are disclosed. An example method includes: prior to executing an original loop having iterations, analyzing, via a processor, the iterations of the original loop; identifying a dependency between a first one of the iterations of the original loop and a second one of the iterations of the original loop; after identifying the dependency, vectorizing a first group of the iterations of the original loop based on the identified dependency to form a vectorization loop; and setting a dynamic adjustment value of the vectorization loop based on the identified dependency.
Type: Grant
Filed: January 25, 2016
Date of Patent: February 20, 2018
Assignee: Intel Corporation
Inventors: Nalini Vasudevan, Jayashankar Bharadwaj, Christopher J. Hughes, Milind B. Girkar, Mark J. Charney, Robert Valentine, Victor W. Lee, Daehyun Kim, Albert Hartono, Sara S. Baghsorkhi
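The analyze-then-adjust flow above can be sketched as: find the smallest distance between a writing iteration and a dependent reading iteration, then cap each vector chunk at that distance. Representing dependencies as (iteration, address) pairs is an assumption made for this sketch.

```python
def min_dependency_distance(reads, writes):
    """Analyze the loop's iterations: the smallest gap between an
    iteration that writes a location and a later iteration that
    reads it bounds how many iterations can run together."""
    dist = float("inf")
    for w_iter, waddr in writes:
        for r_iter, raddr in reads:
            if raddr == waddr and r_iter > w_iter:
                dist = min(dist, r_iter - w_iter)
    return dist

def vectorized_chunks(n_iters, vector_len, dep_distance):
    """The dynamic adjustment value caps each vector chunk at the
    dependency distance, so dependent iterations stay ordered."""
    step = int(min(vector_len, dep_distance))
    chunks = [(i, min(i + step, n_iters)) for i in range(0, n_iters, step)]
    return step, chunks

# Loop a[i] = a[i - 3] + 1: the value written at iteration i is read
# at iteration i + 3, so the dependency distance is 3.
writes = [(i, i) for i in range(8)]
reads = [(i, i - 3) for i in range(3, 8)]
dep = min_dependency_distance(reads, writes)
step, chunks = vectorized_chunks(8, vector_len=8, dep_distance=dep)
print(step, chunks)
```

Even though the hardware vector length is 8, the adjustment value shrinks each chunk to 3 iterations, which is the most that can be executed as one vector without violating the dependency.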
-
Publication number: 20180004517
Abstract: An apparatus and method for propagating conditionally evaluated values are disclosed. For example, a method according to one embodiment comprises: reading each value contained in an input mask register, each value being a true value or a false value and having a bit position associated therewith; for each true value read from the input mask register, generating a first result containing the bit position of the true value; for each false value read from the input mask register following the first true value, adding the vector length of the input mask register to a bit position of the last true value read from the input mask register to generate a second result; and storing each of the first results and second results in bit positions of an output register corresponding to the bit positions read from the input mask register.
Type: Application
Filed: September 18, 2017
Publication date: January 4, 2018
Inventors: Jayashankar Bharadwaj, Nalini Vasudevan, Victor W. Lee, Daehyun Kim, Albert Hartono, Sara S. Baghsorkhi
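The mask-walking rule in this abstract translates directly into a loop: a true bit yields its own position, and a false bit after the first true yields the vector length plus the position of the last true bit seen. The abstract does not say what happens for false bits before any true bit, so this sketch leaves those positions unset.

```python
def propagate_positions(mask):
    """Model of the described method: for a true bit, the result is
    its bit position; for a false bit after the first true, the result
    is the vector length plus the position of the last true bit."""
    vl = len(mask)
    out = [None] * vl          # None = unspecified (before the first true)
    last_true = None
    for pos, bit in enumerate(mask):
        if bit:
            last_true = pos
            out[pos] = pos     # first result: position of the true value
        elif last_true is not None:
            out[pos] = vl + last_true  # second result: vl + last true position
    return out

print(propagate_positions([0, 1, 0, 0, 1, 0]))
```

Because false positions encode the last true position (offset by the vector length to distinguish the two cases), a later pass can propagate each conditionally evaluated value forward to the elements that should reuse it.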
-
Patent number: 9798541
Abstract: An apparatus and method for propagating conditionally evaluated values are disclosed. For example, a method according to one embodiment comprises: reading each value contained in an input mask register, each value being a true value or a false value and having a bit position associated therewith; for each true value read from the input mask register, generating a first result containing the bit position of the true value; for each false value read from the input mask register following the first true value, adding the vector length of the input mask register to a bit position of the last true value read from the input mask register to generate a second result; and storing each of the first results and second results in bit positions of an output register corresponding to the bit positions read from the input mask register.
Type: Grant
Filed: December 23, 2011
Date of Patent: October 24, 2017
Assignee: Intel Corporation
Inventors: Jayashankar Bharadwaj, Nalini Vasudevan, Victor W. Lee, Daehyun Kim, Albert Hartono, Sara S. Baghsorkhi
-
Publication number: 20170242700
Abstract: Instructions and logic provide pushing buffer copy and store functionality. Some embodiments include a first hardware thread or processing core, and a second hardware thread or processing core, a cache to store cache coherent data in a cache line for a shared memory address accessible by the second hardware thread or processing core. Responsive to decoding an instruction specifying a source data operand, said shared memory address as a destination operand, and one or more owner of said shared memory address, one or more execution units copy data from the source data operand to the cache coherent data in the cache line for said shared memory address accessible by said second hardware thread or processing core in the cache when said one or more owner includes said second hardware thread or processing core.
Type: Application
Filed: February 7, 2017
Publication date: August 24, 2017
Inventors: Christopher J. Hughes, Changkyu Kim, Daehyun Kim, Victor W. Lee, Jong Soo Park