Patents by Inventor Gokce Keskin

Gokce Keskin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240070926
    Abstract: Embodiments are generally directed to compression in machine learning and deep learning processing. An embodiment of an apparatus for compression of untyped data includes a graphical processing unit (GPU) including a data compression pipeline, the data compression pipeline including a data port coupled with one or more shader cores, wherein the data port is to allow transfer of untyped data without format conversion, and a 3D compression/decompression unit to provide for compression of untyped data to be stored to a memory subsystem and decompression of untyped data from the memory subsystem. (An illustrative software sketch of untyped-data compression appears after this listing.)
    Type: Application
    Filed: September 13, 2023
    Publication date: February 29, 2024
    Applicant: Intel Corporation
    Inventors: Joydeep Ray, Ben Ashbaugh, Prasoonkumar Surti, Pradeep Ramani, Rama Harihara, Jerin C. Justin, Jing Huang, Xiaoming Cui, Timothy B. Costa, Ting Gong, Elmoustapha Ould-ahmed-vall, Kumar Balasubramanian, Anil Thomas, Oguz H. Elibol, Jayaram Bobba, Guozhong Zhuang, Bhavani Subramanian, Gokce Keskin, Chandrasekaran Sakthivel, Rajesh Poornachandran
  • Patent number: 11798198
    Abstract: Embodiments are generally directed to compression in machine learning and deep learning processing. An embodiment of an apparatus for compression of untyped data includes a graphical processing unit (GPU) including a data compression pipeline, the data compression pipeline including a data port coupled with one or more shader cores, wherein the data port is to allow transfer of untyped data without format conversion, and a 3D compression/decompression unit to provide for compression of untyped data to be stored to a memory subsystem and decompression of untyped data from the memory subsystem.
    Type: Grant
    Filed: January 10, 2023
    Date of Patent: October 24, 2023
    Assignee: Intel Corporation
    Inventors: Joydeep Ray, Ben Ashbaugh, Prasoonkumar Surti, Pradeep Ramani, Rama Harihara, Jerin C. Justin, Jing Huang, Xiaoming Cui, Timothy B. Costa, Ting Gong, Elmoustapha Ould-ahmed-vall, Kumar Balasubramanian, Anil Thomas, Oguz H. Elibol, Jayaram Bobba, Guozhong Zhuang, Bhavani Subramanian, Gokce Keskin, Chandrasekaran Sakthivel, Rajesh Poornachandran
  • Publication number: 20230230289
    Abstract: Embodiments are generally directed to compression in machine learning and deep learning processing. An embodiment of an apparatus for compression of untyped data includes a graphical processing unit (GPU) including a data compression pipeline, the data compression pipeline including a data port coupled with one or more shader cores, wherein the data port is to allow transfer of untyped data without format conversion, and a 3D compression/decompression unit to provide for compression of untyped data to be stored to a memory subsystem and decompression of untyped data from the memory subsystem.
    Type: Application
    Filed: January 10, 2023
    Publication date: July 20, 2023
    Applicant: Intel Corporation
    Inventors: Joydeep Ray, Ben Ashbaugh, Prasoonkumar Surti, Pradeep Ramani, Rama Harihara, Jerin C. Justin, Jing Huang, Xiaoming Cui, Timothy B. Costa, Ting Gong, Elmoustapha Ould-ahmed-vall, Kumar Balasubramanian, Anil Thomas, Oguz H. Elibol, Jayaram Bobba, Guozhong Zhuang, Bhavani Subramanian, Gokce Keskin, Chandrasekaran Sakthivel, Rajesh Poornachandran
  • Patent number: 11636319
    Abstract: An embodiment of a semiconductor package apparatus may include technology to process one or more vectors with a sum of squares operation with a layer of a multi-layer neural network, and determine a fixed-point approximation for the sum of squares operation. Other embodiments are disclosed and claimed. (An illustrative fixed-point sketch appears after this listing.)
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: April 25, 2023
    Assignee: Intel Corporation
    Inventors: Gokce Keskin, Anil Thomas, Oguz Elibol
  • Patent number: 11557064
    Abstract: Embodiments are generally directed to compression in machine learning and deep learning processing. An embodiment of an apparatus for compression of untyped data includes a graphical processing unit (GPU) including a data compression pipeline, the data compression pipeline including a data port coupled with one or more shader cores, wherein the data port is to allow transfer of untyped data without format conversion, and a 3D compression/decompression unit to provide for compression of untyped data to be stored to a memory subsystem and decompression of untyped data from the memory subsystem.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: January 17, 2023
    Inventors: Joydeep Ray, Ben Ashbaugh, Prasoonkumar Surti, Pradeep Ramani, Rama Harihara, Jerin C. Justin, Jing Huang, Xiaoming Cui, Timothy B. Costa, Ting Gong, Elmoustapha Ould-ahmed-vall, Kumar Balasubramanian, Anil Thomas, Oguz H. Elibol, Jayaram Bobba, Guozhong Zhuang, Bhavani Subramanian, Gokce Keskin, Chandrasekaran Sakthivel, Rajesh Poornachandran
  • Publication number: 20200258263
    Abstract: Embodiments are generally directed to compression in machine learning and deep learning processing. An embodiment of an apparatus for compression of untyped data includes a graphical processing unit (GPU) including a data compression pipeline, the data compression pipeline including a data port coupled with one or more shader cores, wherein the data port is to allow transfer of untyped data without format conversion, and a 3D compression/decompression unit to provide for compression of untyped data to be stored to a memory subsystem and decompression of untyped data from the memory subsystem.
    Type: Application
    Filed: January 23, 2020
    Publication date: August 13, 2020
    Applicant: Intel Corporation
    Inventors: Joydeep Ray, Ben Ashbaugh, Prasoonkumar Surti, Pradeep Ramani, Rama Harihara, Jerin C. Justin, Jing Huang, Xiaoming Cui, Timothy B. Costa, Ting Gong, Elmoustapha Ould-ahmed-vall, Kumar Balasubramanian, Anil Thomas, Oguz H. Elibol, Jayaram Bobba, Guozhong Zhuang, Bhavani Subramanian, Gokce Keskin, Chandrasekaran Sakthivel, Rajesh Poornachandran
  • Patent number: 10546393
    Abstract: Embodiments are generally directed to compression in machine learning and deep learning processing. An embodiment of an apparatus for compression of untyped data includes a graphical processing unit (GPU) including a data compression pipeline, the data compression pipeline including a data port coupled with one or more shader cores, wherein the data port is to allow transfer of untyped data without format conversion, and a 3D compression/decompression unit to provide for compression of untyped data to be stored to a memory subsystem and decompression of untyped data from the memory subsystem.
    Type: Grant
    Filed: December 30, 2017
    Date of Patent: January 28, 2020
    Assignee: Intel Corporation
    Inventors: Joydeep Ray, Ben Ashbaugh, Prasoonkumar Surti, Pradeep Ramani, Rama Harihara, Jerin C. Justin, Jing Huang, Xiaoming Cui, Timothy B. Costa, Ting Gong, Elmoustapha Ould-Ahmed-Vall, Kumar Balasubramanian, Anil Thomas, Oguz H. Elibol, Jayaram Bobba, Guozhong Zhuang, Bhavani Subramanian, Gokce Keskin, Chandrasekaran Sakthivel, Rajesh Poornachandran
  • Patent number: 10534613
    Abstract: Implementations of the disclosure provide a processing device comprising a branch predictor circuit to obtain a branch history for an application. The branch history comprises references to branching instructions associated with the application and an outcome of executing each branch. Using the branch history, a neural network is trained to produce a weighted value for each branch of the branching instructions. Features of the branching instructions are identified based on the weighted values. Each feature identifies predictive information regarding the outcome of at least one branch of correlated branches having corresponding outcomes. A feature vector is determined based on the features. The feature vector comprises a plurality of data fields that identify an occurrence of a corresponding feature of the correlated branches with respect to the branch history. Using the feature vector, a data model is produced to determine a predicted outcome associated with the correlated branches. (An illustrative sketch of this feature-selection flow appears after this listing.)
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: January 14, 2020
    Assignee: Intel Corporation
    Inventors: Gokce Keskin, Stephen J. Tarsa, Gautham N. Chinya, Tsung-Han Lin, Perry H. Wang, Hong Wang
  • Patent number: 10387797
    Abstract: A processor includes a front end to decode an instruction, an allocator to pass the instruction to a nearest neighbor logic unit (NNLU) to execute the instruction, and a retirement unit to retire the instruction. The NNLU includes logic to determine input of the instruction for which nearest neighbors will be calculated, transform the input, retrieve candidate atoms for which the nearest neighbors will be calculated, compute distance between the candidate atoms and the input, and determine the nearest neighbors for the input based upon the computed distance. (An illustrative nearest-neighbor sketch appears after this listing.)
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: August 20, 2019
    Assignee: Intel Corporation
    Inventors: Tsung-Han Lin, Gokce Keskin, Hsiang-Tsung Kung, She-Hwa Yen, Hong Wang
  • Publication number: 20190206090
    Abstract: Embodiments are generally directed to compression in machine learning and deep learning processing. An embodiment of an apparatus for compression of untyped data includes a graphical processing unit (GPU) including a data compression pipeline, the data compression pipeline including a data port coupled with one or more shader cores, wherein the data port is to allow transfer of untyped data without format conversion, and a 3D compression/decompression unit to provide for compression of untyped data to be stored to a memory subsystem and decompression of untyped data from the memory subsystem.
    Type: Application
    Filed: December 30, 2017
    Publication date: July 4, 2019
    Applicant: Intel Corporation
    Inventors: Joydeep Ray, Ben Ashbaugh, Prasoonkumar Surti, Pradeep Ramani, Rama Harihara, Jerin C. Justin, Jing Huang, Xiaoming Cui, Timothy B. Costa, Ting Gong, Elmoustapha Ould-Ahmed-Vall, Kumar Balasubramanian, Anil Thomas, Oguz H. Elibol, Jayaram Bobba, Guozhong Zhuang, Bhavani Subramanian, Gokce Keskin, Chandrasekaran Sakthivel, Rajesh Poornachandran
  • Publication number: 20190205736
    Abstract: An apparatus to facilitate compute optimization is disclosed. The apparatus includes at least one processor to perform operations to implement a neural network and compute logic to accelerate neural network computations.
    Type: Application
    Filed: December 29, 2017
    Publication date: July 4, 2019
    Applicant: Intel Corporation
    Inventors: Amit Bleiweiss, Abhishek Venkatesh, Gokce Keskin, John Gierach, Oguz Elibol, Tomer Bar-On, Huma Abidi, Devan Burke, Jaikrishnan Menon, Eriko Nurvitadhi, Pruthvi Gowda Thorehosur Appajigowda, Travis T. Schluessler, Dhawal Srivastava, Nishant Patel, Anil Thomas
  • Publication number: 20190042927
    Abstract: An embodiment of a semiconductor package apparatus may include technology to process one or more vectors with a sum of squares operation with a layer of a multi-layer neural network, and determine a fixed-point approximation for the sum of squares operation. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: August 22, 2018
    Publication date: February 7, 2019
    Applicant: Intel Corporation
    Inventors: Gokce Keskin, Anil Thomas, Oguz Elibol
  • Publication number: 20190004802
    Abstract: A processor, including: an execution unit including branching circuitry; a branch predictor, including a hard-to-predict (HTP) branch filter to identify an HTP branch; and a special branch predictor to receive identification of an HTP branch from the HTP branch filter, the special branch predictor including a convolutional neural network (CNN) branch predictor to predict a branching action for the HTP branch. (An illustrative sketch of this predictor appears after this listing.)
    Type: Application
    Filed: June 29, 2017
    Publication date: January 3, 2019
    Inventors: Stephen J. Tarsa, Gokce Keskin, Gautham N. Chinya, Hong Wang
  • Publication number: 20180314524
    Abstract: Implementations of the disclosure provide a processing device comprising a branch predictor circuit to obtain a branch history for an application. The branch history comprises references to branching instructions associated with the application and an outcome of executing each branch. Using the branch history, a neural network is trained to produce a weighted value for each branch of the branching instructions. Features of the branching instructions are identified based on the weighted values. Each feature identifies predictive information regarding the outcome of at least one branch of correlated branches having corresponding outcomes. A feature vector is determined based on the features. The feature vector comprises a plurality of data fields that identify an occurrence of a corresponding feature of the correlated branches with respect to the branch history. Using the feature vector, a data model is produced to determine a predicted outcome associated with the correlated branches.
    Type: Application
    Filed: April 28, 2017
    Publication date: November 1, 2018
    Inventors: Gokce Keskin, Stephen J. Tarsa, Gautham N. Chinya, Tsung-Han Lin, Perry H. Wang, Hong Wang
  • Publication number: 20180246762
    Abstract: In one embodiment, a processor comprises a processor optimization unit. The processor optimization unit is to collect runtime information associated with a computing device, wherein the runtime information comprises information indicating a performance of the computing device during program execution. The processor optimization unit is further to receive runtime optimization information for the computing device, wherein the runtime optimization information comprises information associated with one or more runtime optimizations for the computing device, and wherein the runtime optimization information is determined based on an analysis of the collected runtime information. The processor optimization unit is further to perform the one or more runtime optimizations for the computing device based on the runtime optimization information.
    Type: Application
    Filed: February 28, 2017
    Publication date: August 30, 2018
    Applicant: Intel Corporation
    Inventors: Stephen J. Tarsa, Gautham N. Chinya, Gokce Keskin, Hong Wang, Karthik Sankaranarayanan
  • Patent number: 9794089
    Abstract: Some embodiments include apparatus and methods having an input to receive an input signal, additional inputs to receive clock signals having different phases to sample the input signal, and a decision feedback equalizer (DFE) having DFE slices. The DFE slices include a number of data comparators to provide data information based on the sampling of the input signal, and a number of phase error comparators to provide phase error information associated with the sampling of the input signal. The number of phase error comparators of the DFE slices is not greater than the number of data comparators of the DFE slices. (An illustrative behavioral sketch of such a DFE appears after this listing.)
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: October 17, 2017
    Assignee: Intel Corporation
    Inventors: Tawfiq Musah, Gokce Keskin, Ganesh Balamurugan, James E. Jaussi, Bryan K. Casper
  • Publication number: 20170091655
    Abstract: A processor includes a front end to decode an instruction, an allocator to pass the instruction to a nearest neighbor logic unit (NNLU) to execute the instruction, and a retirement unit to retire the instruction. The NNLU includes logic to determine input of the instruction for which nearest neighbors will be calculated, transform the input, retrieve candidate atoms for which the nearest neighbors will be calculated, compute distance between the candidate atoms and the input, and determine the nearest neighbors for the input based upon the computed distance.
    Type: Application
    Filed: September 25, 2015
    Publication date: March 30, 2017
    Inventors: Tsung-Han Lin, Gokce Keskin, Hsiang-Tsung Kung, She-Hwa Yen, Hong Wang
  • Publication number: 20160301548
    Abstract: Some embodiments include apparatus and methods having an input to receive an input signal, additional inputs to receive clock signals having different phases to sample the input signal, and a decision feedback equalizer (DFE) having DFE slices. The DFE slices include a number of data comparators to provide data information based on the sampling of the input signal, and a number of phase error comparators to provide phase error information associated with the sampling of the input signal. The number of phase error comparators of the DFE slices is not greater than the number of data comparators of the DFE slices.
    Type: Application
    Filed: June 20, 2016
    Publication date: October 13, 2016
    Inventors: Tawfiq Musah, Gokce Keskin, Ganesh Balamurugan, James E. Jaussi, Bryan K. Casper
  • Publication number: 20160182259
    Abstract: Some embodiments include apparatus and methods having an input to receive an input signal, additional inputs to receive clock signals having different phases to sample the input signal, and a decision feedback equalizer (DFE) having DFE slices. The DFE slices include a number of data comparators to provide data information based on the sampling of the input signal, and a number of phase error comparators to provide phase error information associated with the sampling of the input signal. The number of phase error comparators of the DFE slices is not greater than the number of data comparators of the DFE slices.
    Type: Application
    Filed: December 17, 2014
    Publication date: June 23, 2016
    Inventors: Tawfiq Musah, Gokce Keskin, Ganesh Balamurugan, James E. Jaussi, Bryan K. Casper
  • Patent number: 9374250
    Abstract: Some embodiments include apparatus and methods having an input to receive an input signal, additional inputs to receive clock signals having different phases to sample the input signal, and a decision feedback equalizer (DFE) having DFE slices. The DFE slices include a number of data comparators to provide data information based on the sampling of the input signal, and a number of phase error comparators to provide phase error information associated with the sampling of the input signal. The number of phase error comparators of the DFE slices is not greater than the number of data comparators of the DFE slices.
    Type: Grant
    Filed: December 17, 2014
    Date of Patent: June 21, 2016
    Assignee: Intel Corporation
    Inventors: Tawfiq Musah, Gokce Keskin, Ganesh Balamurugan, James E. Jaussi, Bryan K. Casper
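
The sketches below are informal illustrations of a few of the inventions listed above; they are not taken from the patents, and all function and class names are invented for the examples.

The untyped-data compression entries (patent 10546393 and its continuations) describe a hardware GPU pipeline in which a data port passes untyped data to a 3D compression/decompression unit without any format conversion. The following is only a rough software analogy, assuming a generic lossless codec (zlib) standing in for the hardware compression unit: data is treated purely as bytes on its way to and from memory.

```python
import zlib

def store_untyped(block: bytes) -> bytes:
    """Compress a raw byte block before it is written out to memory.
    No format conversion: the block is treated purely as untyped bytes."""
    return zlib.compress(block)

def load_untyped(stored: bytes) -> bytes:
    """Decompress a block read back from memory, recovering the original bytes."""
    return zlib.decompress(stored)

if __name__ == "__main__":
    raw = bytes(range(256)) * 64            # stand-in for an untyped GPU buffer
    stored = store_untyped(raw)
    assert load_untyped(stored) == raw
    print(f"{len(raw)} bytes compressed to {len(stored)} bytes")
```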
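Patent 11636319 (publication 20190042927) covers a fixed-point approximation of a sum-of-squares operation inside a neural-network layer. A minimal sketch of the general idea, assuming a simple scale-and-round quantization; the patent's exact approximation scheme is not reproduced here:

```python
import numpy as np

def sum_of_squares_fixed_point(vec, frac_bits=8):
    """Approximate sum(x*x) using integer arithmetic only.

    Each element is quantized to a fixed-point value with `frac_bits`
    fractional bits; squares are accumulated as integers and rescaled once.
    """
    scale = 1 << frac_bits
    q = np.round(np.asarray(vec) * scale).astype(np.int64)   # quantize
    acc = int(np.sum(q * q))                                  # integer accumulation
    return acc / (scale * scale)                              # rescale to real units

if __name__ == "__main__":
    x = np.random.randn(1024).astype(np.float32)
    exact = float(np.sum(x * x))
    approx = sum_of_squares_fixed_point(x, frac_bits=8)
    print(f"exact={exact:.4f}  fixed-point approx={approx:.4f}")
```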
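Patent 10534613 (publication 20180314524) trains a neural network over the branch history, uses the resulting per-branch weights to identify predictive features among correlated branches, and builds feature vectors that feed a data model. The toy sketch below follows that flow but substitutes a simple correlation score for the trained network weights and a lookup table for the data model; the data, thresholds, and names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_BRANCHES, SAMPLES = 16, 2000

# Toy data: for each sample, the most recent outcome (taken = 1) of every
# static branch in the history, plus the outcome of the branch to predict.
history = rng.integers(0, 2, size=(SAMPLES, NUM_BRANCHES))
noise = rng.random(SAMPLES) < 0.1
clean = (history[:, 3] | history[:, 7]).astype(int)      # correlated branches 3 and 7
target = np.where(noise, 1 - clean, clean)

# 1) Stand-in for the trained network weights: score each branch by how
#    strongly its outcome correlates with the target branch outcome.
weights = np.abs([np.corrcoef(history[:, b], target)[0, 1]
                  for b in range(NUM_BRANCHES)])

# 2) Feature identification: keep the highest-weighted branches.
features = np.argsort(weights)[-2:]

# 3) Feature vectors over those branches feed a simple table-based data model.
table = {}
for h, t in zip(history, target):
    key = tuple(h[features])
    taken, total = table.get(key, (0, 0))
    table[key] = (taken + t, total + 1)

def predict(h):
    taken, total = table.get(tuple(h[features]), (0, 0))
    return int(total and 2 * taken >= total)

acc = np.mean([predict(h) == t for h, t in zip(history, target)])
print(f"feature branches: {sorted(features.tolist())}, in-sample accuracy: {acc:.2f}")
```

The accuracy is measured on the training data only; the point is the flow (weights, feature selection, feature vectors, model), not the predictor quality.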
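Patent 10387797 (publication 20170091655) adds a nearest neighbor logic unit that transforms an instruction's input, computes distances to candidate atoms, and returns the nearest neighbors. A short functional sketch of those steps, assuming Euclidean distance and a normalization transform; the hardware unit's actual transform and distance metric are not specified here.

```python
import numpy as np

def nearest_neighbors(x, atoms, k=3):
    """Return indices of the k candidate atoms closest to the input.

    Mirrors the steps in the abstract: transform the input, compute the
    distance to each candidate atom, then rank by distance.
    """
    x = np.asarray(x, dtype=np.float64)
    x = x / (np.linalg.norm(x) + 1e-12)        # input transform (normalization)
    d = np.linalg.norm(atoms - x, axis=1)      # distance to every candidate atom
    return np.argsort(d)[:k]                   # k nearest by distance

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    atoms = rng.normal(size=(100, 8))          # candidate atoms (e.g. dictionary rows)
    query = rng.normal(size=8)
    print(nearest_neighbors(query, atoms, k=3))
```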
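Publication 20190004802 pairs a hard-to-predict (HTP) branch filter with a convolutional-neural-network branch predictor. The sketch below models the two pieces behaviorally: an HTP filter that flags branches the base predictor keeps mispredicting, and a tiny 1-D CNN over a branch's taken/not-taken history. The weights are random placeholders, training is omitted, and the threshold values are invented for the example.

```python
import numpy as np

HIST_LEN, NUM_FILTERS, KERNEL = 32, 4, 3
rng = np.random.default_rng(0)

# Placeholder weights; in the publication these would come from training.
conv_w = rng.normal(scale=0.5, size=(NUM_FILTERS, KERNEL))
fc_w = rng.normal(scale=0.5, size=NUM_FILTERS)

def cnn_predict(history):
    """Tiny 1-D CNN over one branch's taken/not-taken history."""
    h = np.asarray(history, dtype=np.float64) * 2 - 1          # {0,1} -> {-1,+1}
    windows = np.lib.stride_tricks.sliding_window_view(h, KERNEL)
    feat = np.maximum(windows @ conv_w.T, 0).max(axis=0)       # conv + ReLU + max-pool
    return int(feat @ fc_w > 0)                                 # 1 = predict taken

class HTPFilter:
    """Flag branches that the base predictor keeps getting wrong."""
    def __init__(self, threshold=0.7, warmup=64):
        self.hits, self.total = {}, {}
        self.threshold, self.warmup = threshold, warmup

    def update(self, pc, base_correct):
        self.hits[pc] = self.hits.get(pc, 0) + int(base_correct)
        self.total[pc] = self.total.get(pc, 0) + 1

    def is_htp(self, pc):
        t = self.total.get(pc, 0)
        return t >= self.warmup and self.hits[pc] / t < self.threshold

if __name__ == "__main__":
    htp = HTPFilter()
    for _ in range(100):                       # base predictor right ~50% of the time
        htp.update(pc=0x4004F0, base_correct=rng.random() < 0.5)
    hist = rng.integers(0, 2, size=HIST_LEN)
    if htp.is_htp(0x4004F0):
        print("HTP branch, CNN predicts taken:", bool(cnn_predict(hist)))
```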
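The decision feedback equalizer patents (9374250 and 9794089) describe DFE slices whose data comparators slice the incoming signal and whose phase-error comparators, fewer in number than the data comparators, support timing recovery. The behavioral sketch below models only the data path of a one-tap DFE slice; the clock phases and phase-error comparators are omitted, and the channel model and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N_BITS, TAP, NOISE = 200, 0.6, 0.2

bits = rng.integers(0, 2, size=N_BITS)
tx = 2.0 * bits - 1.0                                     # NRZ symbols
# Channel: one symbol of post-cursor ISI plus additive noise.
rx = tx + TAP * np.concatenate(([0.0], tx[:-1])) + rng.normal(0, NOISE, N_BITS)

def dfe_slice(samples, tap=TAP):
    """One-tap DFE data path: subtract the ISI of the previous decision
    before the data comparator slices the sample."""
    decisions, prev = [], 0.0
    for s in samples:
        d = 1.0 if s - tap * prev > 0 else -1.0           # data comparator
        decisions.append(d)
        prev = d
    return np.array(decisions)

raw = (rx > 0).astype(int)                                # slicing without equalization
eq = (dfe_slice(rx) > 0).astype(int)
print("bit errors without DFE:", int(np.sum(raw != bits)))
print("bit errors with DFE:   ", int(np.sum(eq != bits)))
```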