Patents by Inventor Kevin Fu

Kevin Fu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240111826
    Abstract: An apparatus to facilitate hardware enhancements for double-precision systolic support is disclosed. The apparatus includes matrix acceleration hardware having double-precision (DP) matrix multiplication circuitry including multiplier circuits to multiply pairs of input source operands in a DP floating-point format; adders to receive multiplier outputs from the multiplier circuits and accumulate them in a high-precision intermediate format; an accumulator circuit to accumulate adder outputs from the adders with a third global source operand on a first pass of the DP matrix multiplication circuitry, or with an intermediate result from the first pass on a second pass, the accumulator circuit generating its output in the high-precision intermediate format; and a down-conversion and rounding circuit to down-convert and round the output of the second pass as the final result in the DP floating-point format.
    Type: Application
    Filed: September 30, 2022
    Publication date: April 4, 2024
    Applicant: Intel Corporation
    Inventors: Jiasheng Chen, Kevin Hurd, Changwon Rhee, Jorge Parra, Fangwen Fu, Theo Drane, William Zorn, Peter Caday, Gregory Henry, Guei-Yuan Lueh, Farzad Chehrazi, Amit Karande, Turbo Majumder, Xinmin Tian, Milind Girkar, Hong Jiang
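The two-pass accumulation the abstract describes — products summed into a high-precision intermediate, with rounding deferred until the final down-conversion — can be illustrated with a toy model. All names are illustrative, and exact rational arithmetic here merely stands in for the hardware's high-precision intermediate format:

```python
from fractions import Fraction

def dp_dot_two_pass(a, b, c=0.0):
    """Toy model of the two-pass DP dot product: pass 1 accumulates the
    first half of the products plus the global source operand c in an
    exact ("high precision") intermediate; pass 2 accumulates the rest
    onto that intermediate; only the final result is rounded to FP64."""
    h = len(a) // 2
    acc = Fraction(c)                      # third global source operand
    for x, y in zip(a[:h], b[:h]):         # first pass
        acc += Fraction(x) * Fraction(y)
    for x, y in zip(a[h:], b[h:]):         # second pass reuses the intermediate
        acc += Fraction(x) * Fraction(y)
    return float(acc)                      # down-convert and round once
```

The payoff of a wide intermediate shows up under cancellation: `dp_dot_two_pass([1e16, 1.0, -1e16], [1.0, 1.0, 1.0])` returns 1.0, whereas a naive FP64 running sum loses the 1.0 and returns 0.0.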
  • Patent number: 11900688
    Abstract: A computer vision processor of a camera generates hyperzooms for persons or vehicles from image frames captured by the camera. The hyperzooms include a first hyperzoom associated with the persons or vehicles. The computer vision processor tracks traffic patterns of the persons or vehicles, while obviating network usage by the camera, by predicting positions of the persons or vehicles from the first hyperzoom using a Kalman filter. The persons or vehicles are then detected in a second hyperzoom, and their positions are updated based on that detection. The first hyperzoom is removed from the camera. Tracks of the persons or vehicles are generated based on the updated positions. The second hyperzoom is removed from the camera. Track metadata is generated from the tracks for storage in a key-value database located on a non-transitory computer-readable storage medium of the camera.
    Type: Grant
    Filed: January 17, 2023
    Date of Patent: February 13, 2024
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
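The tracking loop above hinges on a Kalman filter carrying object positions forward between hyperzooms without network round trips. A minimal constant-velocity sketch for one coordinate of a detection center follows; the state layout, noise values, and class name are illustrative assumptions, not taken from the patent:

```python
class Track1D:
    """Minimal constant-velocity Kalman filter for one coordinate of a
    detection center. predict() propagates the state between hyperzooms;
    update() folds in the next detection."""
    def __init__(self, x0, q=1e-2, r=1.0):
        self.x, self.v = x0, 0.0            # state: position, velocity
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q, self.r = q, r               # process / measurement noise

    def predict(self, dt=1.0):
        # Motion model: x' = x + v*dt; covariance via F P F^T + qI.
        self.x += self.v * dt
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        self.P = [[p00 + dt * (p10 + p01) + dt * dt * p11 + self.q,
                   p01 + dt * p11],
                  [p10 + dt * p11, p11 + self.q]]
        return self.x

    def update(self, z):
        # Fold in a measured position z from the next hyperzoom.
        S = self.P[0][0] + self.r                       # innovation variance
        k0 = self.P[0][0] / S                           # Kalman gain (position)
        k1 = self.P[1][0] / S                           # Kalman gain (velocity)
        y = z - self.x                                  # innovation
        self.x += k0 * y
        self.v += k1 * y
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x
```

Repeated predict/update cycles let a track follow a moving object: feeding in detections at positions 1, 2, 3, … drives the velocity estimate toward one unit per frame.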
  • Publication number: 20240022809
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
    Type: Application
    Filed: August 8, 2023
    Publication date: January 18, 2024
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
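The mechanism reduces to evaluating a user-specified condition against what is apparent in the camera's viewing window and in contextual application data. A hypothetical sketch of that check; the patent does not specify this API, and the names and structure are purely illustrative:

```python
def should_capture(frame_features, app_context, condition):
    """Hypothetical condition check: capture media when every feature
    named by the user's condition is apparent either in the camera's
    viewing window (frame_features) or in contextual application data
    (app_context). Illustrative only, not the patented implementation."""
    apparent = set(frame_features) | set(app_context)
    return all(feature in apparent for feature in condition)
```

For example, a condition like {"smile", "calendar:birthday"} fires only when the frame shows a smile and the calendar context indicates a birthday.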
  • Publication number: 20230367808
    Abstract: A computer vision processor of a camera extracts attributes of persons or vehicles from hyperzooms generated from image frames. The hyperzooms represent traffic patterns. The extracting is performed using a feature extractor of an on-camera convolutional neural network (CNN) including an inverted residual structure. The attributes include at least colors of clothing of the persons or colors of the vehicles. Mobile semantic segmentation models of the CNN are generated using the hyperzooms and the attributes. Attribute analytics are generated by executing the mobile semantic segmentation models while obviating network usage by the camera. The attribute analytics are stored in a key-value database located on a memory card of the camera. A query is received from a server instance specifying one or more of the attributes. The attribute analytics are filtered using the one or more of the attributes to obtain a portion of the traffic patterns.
    Type: Application
    Filed: July 5, 2023
    Publication date: November 16, 2023
    Inventors: Yi XU, Mayank GUPTA, Xia YANG, Yuanyuan CHEN, Zixiao (Shawn) WANG, Qiang (Kevin) FU, Yunchao GONG, Naresh NAGABUSHAN
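The "inverted residual structure" the abstract names is the MobileNetV2-style block common in mobile feature extractors: a 1x1 channel expansion, a depthwise 3x3 convolution, and a linear 1x1 projection with a skip connection. A NumPy sketch of the forward pass, with illustrative shapes and untrained weights (a real on-camera model would be trained and likely quantized):

```python
import numpy as np

def inverted_residual(x, w_expand, w_depthwise, w_project):
    """Toy inverted residual block on an input of shape (H, W, C).
    Expansion widens channels, a depthwise 3x3 filters each channel
    independently, and a linear projection narrows them back to C;
    the input is added through the skip connection."""
    h, w, c = x.shape
    # 1x1 expansion: (H, W, C) @ (C, E) -> (H, W, E), then ReLU6.
    t = np.clip(x @ w_expand, 0.0, 6.0)
    # 3x3 depthwise convolution, zero-padded, one filter per channel.
    p = np.pad(t, ((1, 1), (1, 1), (0, 0)))
    d = np.zeros_like(t)
    for i in range(3):
        for j in range(3):
            d += p[i:i + h, j:j + w, :] * w_depthwise[i, j, :]
    d = np.clip(d, 0.0, 6.0)
    # 1x1 linear (no activation) projection back to C channels + residual.
    return x + d @ w_project
```

The linear projection and the residual connection are what distinguish this block from a classic bottleneck: the narrow ends carry the skip path, while the wide middle does the filtering.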
  • Publication number: 20230298359
    Abstract: A computer vision processor of a camera generates hyperzooms for persons or vehicles from image frames captured by the camera. The hyperzooms include a first hyperzoom associated with the persons or vehicles. The computer vision processor tracks traffic patterns of the persons or vehicles, while obviating network usage by the camera, by predicting positions of the persons or vehicles from the first hyperzoom using a Kalman filter. The persons or vehicles are then detected in a second hyperzoom, and their positions are updated based on that detection. The first hyperzoom is removed from the camera. Tracks of the persons or vehicles are generated based on the updated positions. The second hyperzoom is removed from the camera. Track metadata is generated from the tracks for storage in a key-value database located on a non-transitory computer-readable storage medium of the camera.
    Type: Application
    Filed: January 17, 2023
    Publication date: September 21, 2023
    Inventors: Yi XU, Mayank GUPTA, Xia YANG, Yuanyuan CHEN, Zixiao (Shawn) WANG, Qiang (Kevin) FU, Yunchao GONG, Naresh NAGABUSHAN
  • Patent number: 11765452
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
    Type: Grant
    Filed: January 13, 2023
    Date of Patent: September 19, 2023
    Assignee: GOOGLE LLC
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Patent number: 11734343
    Abstract: A computer vision processor of a camera extracts attributes of persons or vehicles from hyperzooms generated from image frames. The hyperzooms represent traffic patterns. The extracting is performed using a feature extractor of an on-camera convolutional neural network (CNN) including an inverted residual structure. The attributes include at least colors of clothing of the persons or colors of the vehicles. Mobile semantic segmentation models of the CNN are generated using the hyperzooms and the attributes. Attribute analytics are generated by executing the mobile semantic segmentation models while obviating network usage by the camera. The attribute analytics are stored in a key-value database located on a memory card of the camera. A query is received from a server instance specifying one or more of the attributes. The attribute analytics are filtered using the one or more of the attributes to obtain a portion of the traffic patterns.
    Type: Grant
    Filed: January 17, 2023
    Date of Patent: August 22, 2023
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
  • Publication number: 20230156322
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
    Type: Application
    Filed: January 13, 2023
    Publication date: May 18, 2023
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Patent number: 11594043
    Abstract: A computer vision processor of a camera generates hyperzooms for persons or vehicles from image frames captured by the camera. The hyperzooms include a first hyperzoom associated with the persons or vehicles. The computer vision processor tracks traffic patterns of the persons or vehicles, while obviating network usage by the camera, by predicting positions of the persons or vehicles from the first hyperzoom using a Kalman filter. The persons or vehicles are then detected in a second hyperzoom, and their positions are updated based on that detection. The first hyperzoom is removed from the camera. Tracks of the persons or vehicles are generated based on the updated positions. The second hyperzoom is removed from the camera. Track metadata is generated from the tracks for storage in a key-value database located on a non-transitory computer-readable storage medium of the camera.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: February 28, 2023
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
  • Patent number: 11586667
    Abstract: A computer vision processor of a camera extracts attributes of persons or vehicles from hyperzooms generated from image frames. The hyperzooms represent traffic patterns. The extracting is performed using a feature extractor of an on-camera convolutional neural network (CNN) including an inverted residual structure. The attributes include at least colors of clothing of the persons or colors of the vehicles. Mobile semantic segmentation models of the CNN are generated using the hyperzooms and the attributes. Attribute analytics are generated by executing the mobile semantic segmentation models while obviating network usage by the camera. The attribute analytics are stored in a key-value database located on a memory card of the camera. A query is received from a server instance specifying one or more of the attributes. The attribute analytics are filtered using the one or more of the attributes to obtain a portion of the traffic patterns.
    Type: Grant
    Filed: July 20, 2022
    Date of Patent: February 21, 2023
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
  • Patent number: 11558546
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: January 17, 2023
    Assignee: Google LLC
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Patent number: 11429664
    Abstract: A computer vision processor of a camera extracts attributes of persons or vehicles from hyperzooms generated from image frames. The hyperzooms represent traffic patterns. The extracting is performed using a feature extractor of an on-camera convolutional neural network (CNN) including an inverted residual structure. The attributes include at least colors of clothing of the persons or colors of the vehicles. Mobile semantic segmentation models of the CNN are generated using the hyperzooms and the attributes. Attribute analytics are generated by executing the mobile semantic segmentation models while obviating network usage by the camera. The attribute analytics are stored in a key-value database located on a memory card of the camera. A query is received from a server instance specifying one or more of the attributes. The attribute analytics are filtered using the one or more of the attributes to obtain a portion of the traffic patterns.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: August 30, 2022
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
  • Publication number: 20220166919
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
    Type: Application
    Filed: November 24, 2020
    Publication date: May 26, 2022
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Patent number: 11209454
    Abstract: Cyber-physical systems depend on sensors to make automated decisions. Resonant acoustic injection attacks are already known to cause malfunctions by disabling MEMS-based gyroscopes. However, an open question remains on how to move beyond denial-of-service attacks to achieve full adversarial control of sensor outputs. This work investigates how analog acoustic injection attacks can damage the digital integrity of a popular type of sensor: the capacitive MEMS accelerometer. Spoofing such sensors with intentional acoustic interference enables an out-of-spec pathway for attackers to deliver chosen digital values to microprocessors and embedded systems that blindly trust the unvalidated integrity of sensor outputs. Two software-based solutions are presented for mitigating acoustic interference with the output of a MEMS accelerometer and other types of motion sensors.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: December 28, 2021
    Assignee: THE REGENTS OF THE UNIVERSITY OF MICHIGAN
    Inventors: Kevin Fu, Peter Honeyman, Timothy Trippel, Ofir Weisse
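One software mitigation in this line of work is randomized sampling: jittering when the sensor is sampled, so a resonant injected tone no longer aliases to a stable, attacker-chosen offset. A self-contained simulation of the effect, with illustrative frequencies and amplitudes (not the patent's actual parameters):

```python
import math
import random

def sample_sensor(t, attack_freq=1000.0, true_accel=1.0):
    """Simulated accelerometer: the true signal plus a resonant
    acoustic injection tone (values are illustrative)."""
    return true_accel + math.sin(2 * math.pi * attack_freq * t + math.pi / 2)

def mean_regular(n=2000, fs=1000.0):
    # Regular sampling at a divisor of the attack frequency hits the
    # tone at the same phase every time, so it aliases to a constant
    # bias: the attacker's chosen value rides on top of the truth.
    return sum(sample_sensor(k / fs) for k in range(n)) / n

def mean_randomized(n=2000, fs=1000.0, seed=7):
    # Randomized sampling phase decorrelates the tone; averaged over
    # many samples, the injected component tends toward zero.
    rng = random.Random(seed)
    return sum(sample_sensor((k + rng.random()) / fs) for k in range(n)) / n
```

With regular sampling the reported mean is biased to about 2.0 despite a true acceleration of 1.0; with randomized sampling the mean returns to roughly 1.0.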
  • Publication number: 20200300883
    Abstract: Cyber-physical systems depend on sensors to make automated decisions. Resonant acoustic injection attacks are already known to cause malfunctions by disabling MEMS-based gyroscopes. However, an open question remains on how to move beyond denial of service attacks to achieve full adversarial control of sensor outputs. This work investigates how analog acoustic injection attacks can damage the digital integrity of a popular type of sensor: the capacitive MEMS accelerometer. Spoofing such sensors with intentional acoustic interference enables an out-of-spec pathway for attackers to deliver chosen digital values to microprocessors and embedded systems that blindly trust the unvalidated integrity of sensor outputs. Two software-based solutions are presented for mitigating acoustic interference with output of a MEMS accelerometer and other types of motion sensors.
    Type: Application
    Filed: May 19, 2017
    Publication date: September 24, 2020
    Inventors: Kevin FU, Peter HONEYMAN, Timothy TRIPPEL, Ofir WEISSE
  • Patent number: 10038564
    Abstract: A technique is presented for performing a physical unclonable function (PUF) using an array of SRAM cells. The technique can be viewed as an attempt to read multiple cells in a column at the same time, creating contention that is resolved according to process variation. An authentication challenge is issued to the array of SRAM cells by activating two or more wordlines concurrently. The response is simply the value that the SRAM produces from a read operation when the challenge condition is applied. The number of challenges that can be applied to the array of SRAM cells grows exponentially with the number of SRAM rows, and these challenges can be applied at any time without power cycling.
    Type: Grant
    Filed: August 23, 2017
    Date of Patent: July 31, 2018
    Assignee: The Regents of The University of Michigan
    Inventors: Daniel E. Holcomb, Kevin Fu
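The challenge mechanism can be sketched as a simulation in which a per-cell drive strength, standing in for process variation, resolves the contention created by activating several wordlines at once. The model and its parameters are illustrative, not circuit-accurate:

```python
import random

def make_array(rows, cols, seed=1):
    """Toy SRAM model: each cell holds a bit and a drive strength.
    The strengths stand in for process variation, the PUF's entropy
    source; a real array's strengths come from fabrication."""
    rng = random.Random(seed)
    return [[(rng.randint(0, 1), rng.random()) for _ in range(cols)]
            for _ in range(rows)]

def challenge(array, wordlines):
    """Activate several wordlines concurrently; in each column the
    contention resolves in favor of the strongest cell. The response
    is simply the word read out under that challenge condition."""
    cols = len(array[0])
    response = []
    for c in range(cols):
        bit, _strength = max((array[r][c] for r in wordlines),
                             key=lambda cell: cell[1])
        response.append(bit)
    return response
```

With R rows, every subset of two or more wordlines is a distinct challenge, so the challenge space grows exponentially in R, as the abstract notes, and the same challenge yields the same response without any power cycling.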
  • Publication number: 20170373862
    Abstract: A technique is presented for performing a physical unclonable function (PUF) using an array of SRAM cells. The technique can be viewed as an attempt to read multiple cells in a column at the same time, creating contention that is resolved according to process variation. An authentication challenge is issued to the array of SRAM cells by activating two or more wordlines concurrently. The response is simply the value that the SRAM produces from a read operation when the challenge condition is applied. The number of challenges that can be applied to the array of SRAM cells grows exponentially with the number of SRAM rows, and these challenges can be applied at any time without power cycling.
    Type: Application
    Filed: August 23, 2017
    Publication date: December 28, 2017
    Inventors: Daniel E. HOLCOMB, Kevin FU
  • Patent number: 9787481
    Abstract: A technique is presented for performing a physical unclonable function (PUF) using an array of SRAM cells. The technique can be viewed as an attempt to read multiple cells in a column at the same time, creating contention that is resolved according to process variation. An authentication challenge is issued to the array of SRAM cells by activating two or more wordlines concurrently. The response is simply the value that the SRAM produces from a read operation when the challenge condition is applied. The number of challenges that can be applied to the array of SRAM cells grows exponentially with the number of SRAM rows, and these challenges can be applied at any time without power cycling.
    Type: Grant
    Filed: August 28, 2015
    Date of Patent: October 10, 2017
    Assignee: THE REGENTS OF THE UNIVERSITY OF MICHIGAN
    Inventors: Daniel E. Holcomb, Kevin Fu
  • Publication number: 20160065379
    Abstract: A technique is presented for performing a physical unclonable function (PUF) using an array of SRAM cells. The technique can be viewed as an attempt to read multiple cells in a column at the same time, creating contention that is resolved according to process variation. An authentication challenge is issued to the array of SRAM cells by activating two or more wordlines concurrently. The response is simply the value that the SRAM produces from a read operation when the challenge condition is applied. The number of challenges that can be applied to the array of SRAM cells grows exponentially with the number of SRAM rows, and these challenges can be applied at any time without power cycling.
    Type: Application
    Filed: August 28, 2015
    Publication date: March 3, 2016
    Inventors: Daniel E. HOLCOMB, Kevin FU
  • Patent number: 8094810
    Abstract: A method for performing unidirectional proxy re-encryption includes generating a first key pair comprising a public key (pk) and a secret key (sk) and generating a re-encryption key that changes encryptions under a first public key pka into encryptions under a second public key pkb as rkA→B. The method further includes performing one of the group consisting of encrypting a message m under public key pka producing a ciphertext ca, re-encrypting a ciphertext ca using the re-encryption key rkA→B that changes ciphertexts under pka into ciphertexts under pkb to produce a ciphertext cb under pkb, and decrypting a ciphertext ca under pka to recover a message m. The method also includes encrypting a message m under a public key pk producing a first-level ciphertext c1 that cannot be re-encrypted, and decrypting a first-level ciphertext c1 using secret key sk.
    Type: Grant
    Filed: February 3, 2006
    Date of Patent: January 10, 2012
    Assignees: Massachusetts Institute of Technology, The Johns Hopkins University
    Inventors: Susan R. Hohenberger, Kevin Fu, Giuseppe Ateniese, Matthew Green
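The re-encryption-key idea can be sketched with a toy ElGamal-style scheme. To be clear about the hedges: this is the simpler bidirectional (BBS-style) construction, not the unidirectional pairing-based scheme of the patent, and the tiny parameters are wildly insecure. It shows only the core mechanic: a proxy holding rkA→B can transform ciphertexts under pka into ciphertexts under pkb without ever learning the message.

```python
import random

# Illustrative, insecure parameters: P = 2*Q + 1 is a safe prime and
# G generates the order-Q subgroup of squares mod P.
P = 467
Q = 233
G = 4

def keygen(rng):
    sk = rng.randrange(1, Q)
    return sk, pow(G, sk, P)                     # (sk, pk = G^sk)

def encrypt(pk, m, rng):
    k = rng.randrange(1, Q)
    return (m * pow(G, k, P) % P, pow(pk, k, P))  # (m*G^k, G^(sk*k))

def rekey(sk_a, sk_b):
    # rkA->B = sk_b / sk_a mod Q: turns ciphertexts under pka into
    # ciphertexts under pkb. (Bidirectional: rkB->A is its inverse.)
    return sk_b * pow(sk_a, -1, Q) % Q

def reencrypt(rk, ct):
    c1, c2 = ct
    return (c1, pow(c2, rk, P))                  # G^(a*k) -> G^(b*k)

def decrypt(sk, ct):
    c1, c2 = ct
    gk = pow(c2, pow(sk, -1, Q), P)              # recover G^k
    return c1 * pow(gk, P - 2, P) % P            # m = c1 / G^k
```

The proxy only ever sees rkA→B and the ciphertext components, both of which are group elements or exponents that reveal nothing about m; that is the property the patented scheme achieves unidirectionally, so that rkA→B does not also convert B's ciphertexts back to A.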