Patents by Inventor Kevin Fu

Kevin Fu has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250148805
    Abstract: A computer vision processor of a camera generates hyperzooms for persons or vehicles from image frames captured by the camera. The hyperzooms include a first hyperzoom associated with the persons or vehicles. The computer vision processor tracks traffic patterns of the persons or vehicles, while obviating network usage by the camera, by predicting positions of the persons or vehicles from the first hyperzoom using a Kalman filter. The persons or vehicles are then detected in a second hyperzoom. The positions of the persons or vehicles are updated based on detecting the persons or vehicles in the second hyperzoom. The first hyperzoom is removed from the camera. Tracks of the persons or vehicles are generated based on the updated positions. The second hyperzoom is removed from the camera. Track metadata is generated from the tracks for storage in a key-value database located on a non-transitory computer-readable storage medium of the camera.
    Type: Application
    Filed: December 27, 2024
    Publication date: May 8, 2025
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
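
The tracking loop this abstract describes (predict positions with a Kalman filter, then correct them with detections from the next hyperzoom) can be illustrated with a minimal constant-velocity filter. The sketch below is a generic NumPy reconstruction, not Verkada's implementation; the state layout and noise parameters are assumptions.

```python
# Hypothetical sketch of the predict/update tracking loop the abstract
# describes: a constant-velocity Kalman filter predicts each object's
# position, and a detection in the next hyperzoom corrects the estimate.
import numpy as np

class ConstantVelocityKF:
    """Tracks (x, y) position and velocity; state = [x, y, vx, vy]."""

    def __init__(self, x, y, dt=1.0):
        self.state = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4) * 10.0                 # state covariance
        self.F = np.eye(4)                        # transition model
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))                 # measurement model
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * 0.01                 # process noise (assumed)
        self.R = np.eye(2) * 1.0                  # measurement noise (assumed)

    def predict(self):
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]                     # predicted (x, y)

    def update(self, zx, zy):
        z = np.array([zx, zy])
        y = z - self.H @ self.state               # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.state = self.state + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Usage: predict where the person will be without new pixels, then
# correct with the detection found in the second hyperzoom.
track = ConstantVelocityKF(x=120.0, y=80.0)
predicted = track.predict()
track.update(zx=124.0, zy=83.0)
```
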
  • Patent number: 12205380
    Abstract: A computer vision processor of a camera generates hyperzooms for persons or vehicles from image frames captured by the camera. The hyperzooms include a first hyperzoom associated with the persons or vehicles. The computer vision processor tracks traffic patterns of the persons or vehicles, while obviating network usage by the camera, by predicting positions of the persons or vehicles from the first hyperzoom using a Kalman filter. The persons or vehicles are then detected in a second hyperzoom. The positions of the persons or vehicles are updated based on detecting the persons or vehicles in the second hyperzoom. The first hyperzoom is removed from the camera. Tracks of the persons or vehicles are generated based on the updated positions. The second hyperzoom is removed from the camera. Track metadata is generated from the tracks for storage in a key-value database located on a non-transitory computer-readable storage medium of the camera.
    Type: Grant
    Filed: January 3, 2024
    Date of Patent: January 21, 2025
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
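
The final step of the abstract above, persisting track metadata in an on-camera key-value database, can be sketched with Python's standard-library dbm module standing in for whatever embedded store the camera actually uses. The key scheme and metadata fields here are invented for illustration.

```python
# Hedged sketch: store per-track metadata under its track ID in a
# file-backed key-value store. `dbm` is a stand-in, not the actual
# on-camera database; the layout is an assumption.
import dbm
import json

def store_track_metadata(db_path, track_id, metadata):
    """Write one track's metadata under its track ID."""
    with dbm.open(db_path, "c") as db:           # 'c' = create if missing
        db[f"track:{track_id}".encode()] = json.dumps(metadata).encode()

def load_track_metadata(db_path, track_id):
    with dbm.open(db_path, "r") as db:
        return json.loads(db[f"track:{track_id}".encode()].decode())

store_track_metadata(
    "/tmp/tracks.db",                            # hypothetical storage path
    track_id=42,
    metadata={"class": "person", "start_frame": 1031, "end_frame": 1187,
              "positions": [[120, 80], [124, 83]]},
)
print(load_track_metadata("/tmp/tracks.db", 42)["class"])  # -> "person"
```
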
  • Publication number: 20240380970
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environmental feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without having to constantly monitor the camera's viewing window. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data associated with the automated assistant. For instance, a relationship between content in the camera's viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using the camera.
    Type: Application
    Filed: July 25, 2024
    Publication date: November 14, 2024
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
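
The mechanism this abstract describes (capture media only when a user-specified predicate over the camera frame and application context holds) can be sketched as a small evaluation loop. All names below are hypothetical; this is an illustration of the idea, not Google's implementation.

```python
# Toy sketch of condition-driven capture: evaluate user-defined
# predicates against each frame plus application context, and capture
# only when one is satisfied.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class CaptureCondition:
    name: str
    predicate: Callable[[Any, dict], bool]   # (frame, app_context) -> bool

def monitor(frames, app_context, conditions, capture):
    """Capture a frame whenever any user condition becomes true."""
    for frame in frames:
        for cond in conditions:
            if cond.predicate(frame, app_context):
                capture(frame, reason=cond.name)
                break

# Example condition: capture when a detector flags a smile and the
# foreground app is the photo gallery (a contextual signal).
smile_in_gallery = CaptureCondition(
    name="smile_in_gallery",
    predicate=lambda frame, ctx: ctx.get("foreground_app") == "gallery"
    and frame.get("smile_detected", False),
)

frames = [{"smile_detected": False}, {"smile_detected": True}]
monitor(frames, {"foreground_app": "gallery"}, [smile_in_gallery],
        capture=lambda f, reason: print(f"captured ({reason})"))
```
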
  • Patent number: 12056183
    Abstract: A computer vision processor of a camera extracts attributes of persons or vehicles from hyperzooms generated from image frames. The hyperzooms represent traffic patterns. The extraction is performed using a feature extractor of an on-camera convolutional neural network (CNN) that includes an inverted residual structure. The attributes include at least colors of the persons' clothing or colors of the vehicles. Mobile semantic segmentation models of the CNN are generated using the hyperzooms and the attributes. Attribute analytics are generated by executing the mobile semantic segmentation models while obviating network usage by the camera. The attribute analytics are stored in a key-value database located on a memory card of the camera. A query specifying one or more of the attributes is received from a server instance. The attribute analytics are filtered using those attributes to obtain a portion of the traffic patterns.
    Type: Grant
    Filed: July 5, 2023
    Date of Patent: August 6, 2024
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
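
The "inverted residual structure" named in this abstract is the building block popularized by MobileNetV2: a 1x1 expansion convolution, a depthwise 3x3 convolution, and a linear 1x1 projection, with a skip connection when shapes allow. A minimal PyTorch sketch follows; channel counts and sizes are illustrative guesses, not the patented network.

```python
# Minimal inverted residual block in the MobileNetV2 style: expand,
# depthwise-convolve, project, and add a skip when stride and channel
# counts permit.
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1, expand=6):
        super().__init__()
        hidden = in_ch * expand
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),        # 1x1 expand
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1,
                      groups=hidden, bias=False),           # 3x3 depthwise
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),       # 1x1 linear project
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_skip else out

# A 32-channel feature map from a hyperzoom crop passes through one block.
feat = InvertedResidual(32, 32)(torch.randn(1, 32, 96, 96))
print(feat.shape)  # torch.Size([1, 32, 96, 96])
```
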
  • Patent number: 12052492
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environmental feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without having to constantly monitor the camera's viewing window. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data associated with the automated assistant. For instance, a relationship between content in the camera's viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using the camera.
    Type: Grant
    Filed: August 8, 2023
    Date of Patent: July 30, 2024
    Assignee: Google LLC
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Publication number: 20240233397
    Abstract: A computer vision processor of a camera generates hyperzooms for persons or vehicles from image frames captured by the camera. The hyperzooms include a first hyperzoom associated with the persons or vehicles. The computer vision processor tracks traffic patterns of the persons or vehicles, while obviating network usage by the camera, by predicting positions of the persons or vehicles from the first hyperzoom using a Kalman filter. The persons or vehicles are then detected in a second hyperzoom. The positions of the persons or vehicles are updated based on detecting the persons or vehicles in the second hyperzoom. The first hyperzoom is removed from the camera. Tracks of the persons or vehicles are generated based on the updated positions. The second hyperzoom is removed from the camera. Track metadata is generated from the tracks for storage in a key-value database located on a non-transitory computer-readable storage medium of the camera.
    Type: Application
    Filed: January 3, 2024
    Publication date: July 11, 2024
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
  • Publication number: 20240187735
    Abstract: Rolling shutter and movable lens structures widely found in smartphone cameras modulate structure-borne sounds onto camera images, creating a point-of-view optical-acoustic side channel for acoustic eavesdropping. The movement of smartphone camera hardware leaks acoustic information because ambient sound is unwittingly modulated onto images as imperceptible distortions. Experiments have found that the side channel is further amplified by intrinsic behaviors of complementary metal-oxide-semiconductor (CMOS) rolling shutters and movable lenses such as those used for optical image stabilization (OIS) and autofocus (AF). This disclosure characterizes the limits of acoustic information leakage caused by structure-borne sound that perturbs the point of view of smartphone cameras. In contrast with traditional optical-acoustic eavesdropping on vibrating objects, this side channel requires no line of sight and no object within the camera's field of view.
    Type: Application
    Filed: October 17, 2023
    Publication date: June 6, 2024
    Applicants: The Regents of the University of Michigan, University of Florida Research Foundation, Inc.
    Inventors: Yan Long, Kevin Fu, Kevin Butler, Sara Rampazzi, Pirouz Naghavi
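
The core idea, that a rolling shutter exposes image rows sequentially and so turns row-to-row distortion into a time series that can carry acoustic tones, can be caricatured in a few lines of NumPy. The sketch below is a toy reconstruction under strong simplifying assumptions (continuous row readout, a known still reference frame), not the authors' pipeline.

```python
# Toy illustration: recover a per-row distortion signal from a stack of
# rolling-shutter frames and look for a dominant acoustic tone.
import numpy as np

def rowwise_signal(frames, reference):
    """frames: (n_frames, rows, cols) grayscale; reference: (rows, cols).
    Returns one sample per image row, in shutter (row-scan) order."""
    samples = []
    for frame in frames:
        # Mean per-row deviation from a still reference approximates the
        # distortion each row picked up while it was being exposed.
        samples.append((frame.astype(float) - reference).mean(axis=1))
    return np.concatenate(samples)

def dominant_frequency(signal, row_rate_hz):
    """Strongest nonzero bin of the row-rate-sampled signal."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / row_rate_hz)
    return freqs[spectrum[1:].argmax() + 1]

# Synthetic data: 10 frames of 480 rows, a 300 Hz ripple injected at an
# assumed row-readout rate of 14,400 rows/second.
rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, (480, 640))
t = np.arange(10 * 480) / 14_400.0
ripple = 2.0 * np.sin(2 * np.pi * 300 * t)
frames = ref + ripple.reshape(10, 480, 1) + rng.normal(0, 0.5, (10, 480, 640))
print(dominant_frequency(rowwise_signal(frames, ref), 14_400.0))  # ~300.0
```
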
  • Patent number: 11900688
    Abstract: A computer vision processor of a camera generates hyperzooms for persons or vehicles from image frames captured by the camera. The hyperzooms include a first hyperzoom associated with the persons or vehicles. The computer vision processor tracks traffic patterns of the persons or vehicles, while obviating network usage by the camera, by predicting positions of the persons or vehicles from the first hyperzoom using a Kalman filter. The persons or vehicles are then detected in a second hyperzoom. The positions of the persons or vehicles are updated based on detecting the persons or vehicles in the second hyperzoom. The first hyperzoom is removed from the camera. Tracks of the persons or vehicles are generated based on the updated positions. The second hyperzoom is removed from the camera. Track metadata is generated from the tracks for storage in a key-value database located on a non-transitory computer-readable storage medium of the camera.
    Type: Grant
    Filed: January 17, 2023
    Date of Patent: February 13, 2024
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
  • Publication number: 20240022809
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environmental feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without having to constantly monitor the camera's viewing window. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data associated with the automated assistant. For instance, a relationship between content in the camera's viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using the camera.
    Type: Application
    Filed: August 8, 2023
    Publication date: January 18, 2024
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Publication number: 20230367808
    Abstract: A computer vision processor of a camera extracts attributes of persons or vehicles from hyperzooms generated from image frames. The hyperzooms represent traffic patterns. The extraction is performed using a feature extractor of an on-camera convolutional neural network (CNN) that includes an inverted residual structure. The attributes include at least colors of the persons' clothing or colors of the vehicles. Mobile semantic segmentation models of the CNN are generated using the hyperzooms and the attributes. Attribute analytics are generated by executing the mobile semantic segmentation models while obviating network usage by the camera. The attribute analytics are stored in a key-value database located on a memory card of the camera. A query specifying one or more of the attributes is received from a server instance. The attribute analytics are filtered using those attributes to obtain a portion of the traffic patterns.
    Type: Application
    Filed: July 5, 2023
    Publication date: November 16, 2023
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
  • Publication number: 20230298359
    Abstract: A computer vision processor of a camera generates hyperzooms for persons or vehicles from image frames captured by the camera. The hyperzooms include a first hyperzoom associated with the persons or vehicles. The computer vision processor tracks traffic patterns of the persons or vehicles, while obviating network usage by the camera, by predicting positions of the persons or vehicles from the first hyperzoom using a Kalman filter. The persons or vehicles are then detected in a second hyperzoom. The positions of the persons or vehicles are updated based on detecting the persons or vehicles in the second hyperzoom. The first hyperzoom is removed from the camera. Tracks of the persons or vehicles are generated based on the updated positions. The second hyperzoom is removed from the camera. Track metadata is generated from the tracks for storage in a key-value database located on a non-transitory computer-readable storage medium of the camera.
    Type: Application
    Filed: January 17, 2023
    Publication date: September 21, 2023
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
  • Patent number: 11765452
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environmental feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without having to constantly monitor the camera's viewing window. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data associated with the automated assistant. For instance, a relationship between content in the camera's viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using the camera.
    Type: Grant
    Filed: January 13, 2023
    Date of Patent: September 19, 2023
    Assignee: Google LLC
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Patent number: 11734343
    Abstract: A computer vision processor of a camera extracts attributes of persons or vehicles from hyperzooms generated from image frames. The hyperzooms represent traffic patterns. The extraction is performed using a feature extractor of an on-camera convolutional neural network (CNN) that includes an inverted residual structure. The attributes include at least colors of the persons' clothing or colors of the vehicles. Mobile semantic segmentation models of the CNN are generated using the hyperzooms and the attributes. Attribute analytics are generated by executing the mobile semantic segmentation models while obviating network usage by the camera. The attribute analytics are stored in a key-value database located on a memory card of the camera. A query specifying one or more of the attributes is received from a server instance. The attribute analytics are filtered using those attributes to obtain a portion of the traffic patterns.
    Type: Grant
    Filed: January 17, 2023
    Date of Patent: August 22, 2023
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
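
The query step that closes this abstract (a server instance requests attribute values and the camera returns the matching slice of its stored analytics) amounts to filtering records by attribute equality. A minimal sketch follows, with an assumed record layout.

```python
# Illustrative filter over stored attribute analytics; the record
# fields are invented for the example.
def filter_analytics(records, **wanted):
    """Keep records whose attributes match every requested value."""
    return [r for r in records
            if all(r.get(attr) == value for attr, value in wanted.items())]

analytics = [
    {"track_id": 1, "kind": "person",  "clothing_color": "red"},
    {"track_id": 2, "kind": "vehicle", "vehicle_color": "blue"},
    {"track_id": 3, "kind": "person",  "clothing_color": "blue"},
]
# Server query: "persons wearing red"
print(filter_analytics(analytics, kind="person", clothing_color="red"))
# -> [{'track_id': 1, 'kind': 'person', 'clothing_color': 'red'}]
```
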
  • Publication number: 20230156322
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environmental feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without having to constantly monitor the camera's viewing window. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data associated with the automated assistant. For instance, a relationship between content in the camera's viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using the camera.
    Type: Application
    Filed: January 13, 2023
    Publication date: May 18, 2023
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Patent number: 11594043
    Abstract: A computer vision processor of a camera generates hyperzooms for persons or vehicles from image frames captured by the camera. The hyperzooms include a first hyperzoom associated with the persons or vehicles. The computer vision processor tracks traffic patterns of the persons or vehicles, while obviating network usage by the camera, by predicting positions of the persons or vehicles from the first hyperzoom using a Kalman filter. The persons or vehicles are then detected in a second hyperzoom. The positions of the persons or vehicles are updated based on detecting the persons or vehicles in the second hyperzoom. The first hyperzoom is removed from the camera. Tracks of the persons or vehicles are generated based on the updated positions. The second hyperzoom is removed from the camera. Track metadata is generated from the tracks for storage in a key-value database located on a non-transitory computer-readable storage medium of the camera.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: February 28, 2023
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
  • Patent number: 11586667
    Abstract: A computer vision processor of a camera extracts attributes of persons or vehicles from hyperzooms generated from image frames. The hyperzooms represent traffic patterns. The extraction is performed using a feature extractor of an on-camera convolutional neural network (CNN) that includes an inverted residual structure. The attributes include at least colors of the persons' clothing or colors of the vehicles. Mobile semantic segmentation models of the CNN are generated using the hyperzooms and the attributes. Attribute analytics are generated by executing the mobile semantic segmentation models while obviating network usage by the camera. The attribute analytics are stored in a key-value database located on a memory card of the camera. A query specifying one or more of the attributes is received from a server instance. The attribute analytics are filtered using those attributes to obtain a portion of the traffic patterns.
    Type: Grant
    Filed: July 20, 2022
    Date of Patent: February 21, 2023
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
  • Patent number: 11558546
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environmental feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without having to constantly monitor the camera's viewing window. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data associated with the automated assistant. For instance, a relationship between content in the camera's viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using the camera.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: January 17, 2023
    Assignee: Google LLC
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Patent number: 11429664
    Abstract: A computer vision processor of a camera extracts attributes of persons or vehicles from hyperzooms generated from image frames. The hyperzooms represent traffic patterns. The extraction is performed using a feature extractor of an on-camera convolutional neural network (CNN) that includes an inverted residual structure. The attributes include at least colors of the persons' clothing or colors of the vehicles. Mobile semantic segmentation models of the CNN are generated using the hyperzooms and the attributes. Attribute analytics are generated by executing the mobile semantic segmentation models while obviating network usage by the camera. The attribute analytics are stored in a key-value database located on a memory card of the camera. A query specifying one or more of the attributes is received from a server instance. The attribute analytics are filtered using those attributes to obtain a portion of the traffic patterns.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: August 30, 2022
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
  • Publication number: 20220166919
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environmental feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without having to constantly monitor the camera's viewing window. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data associated with the automated assistant. For instance, a relationship between content in the camera's viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using the camera.
    Type: Application
    Filed: November 24, 2020
    Publication date: May 26, 2022
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Patent number: 11209454
    Abstract: Cyber-physical systems depend on sensors to make automated decisions. Resonant acoustic injection attacks are already known to cause malfunctions by disabling MEMS-based gyroscopes. However, an open question remains: how can an attacker move beyond denial-of-service attacks to achieve full adversarial control of sensor outputs? This work investigates how analog acoustic injection attacks can damage the digital integrity of a popular type of sensor: the capacitive MEMS accelerometer. Spoofing such sensors with intentional acoustic interference enables an out-of-spec pathway for attackers to deliver chosen digital values to microprocessors and embedded systems that blindly trust the unvalidated integrity of sensor outputs. Two software-based solutions are presented for mitigating acoustic interference with the output of a MEMS accelerometer and other types of motion sensors.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: December 28, 2021
    Assignee: The Regents of the University of Michigan
    Inventors: Kevin Fu, Peter Honeyman, Timothy Trippel, Ofir Weisse
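
One way to read the abstract's software-based mitigations is as signal-domain filtering: an acoustic attack tone aliases into the accelerometer's passband, and suppressing out-of-band energy (or randomizing sample timing to break the attacker's phase coherence) attenuates the injected component. The sketch below shows a generic low-pass mitigation with invented parameters; it is in the spirit of the abstract, not the patented method.

```python
# Hedged illustration: separate genuine low-frequency motion from a
# higher-frequency tone that an acoustic attack has injected into the
# sampled accelerometer signal.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1_000.0                      # hypothetical accelerometer sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
motion = 0.5 * np.sin(2 * np.pi * 2.0 * t)        # genuine 2 Hz motion
injected = 1.0 * np.sin(2 * np.pi * 150.0 * t)    # tone injected by the attack
raw = motion + injected

# A 4th-order Butterworth low-pass at 20 Hz keeps real motion and
# strongly attenuates the injected tone.
b, a = butter(4, 20.0, btype="low", fs=fs)
clean = filtfilt(b, a, raw)

print(f"attack residual before: {np.std(raw - motion):.3f}")
print(f"attack residual after:  {np.std(clean - motion):.3f}")
```
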