Patents by Inventor Kevin Fu

Kevin Fu has filed patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12224422
    Abstract: A solventless method of making a dry electrode for an electrochemical cell is provided. A solventless electrode material mixture includes 85-99% electrode active material and 0-10% conductive carbon additive. A polymer binder system is present at 1-15%. The polymer binder system includes one or more polymer binders. The electrode material mixture is mixed at a temperature greater than a softening point or a melting point of at least one polymer binder of the polymer binder system. The electrode material mixture is kneaded into an electrode material dough. The electrode material dough is formed into an electrode material sheet. At least a portion of the electrode material sheet is affixed to a metal current collector to form an electrode.
    Type: Grant
    Filed: November 19, 2021
    Date of Patent: February 11, 2025
    Assignee: Nano and Advanced Materials Institute Limited
    Inventors: Soon Yee Liew, Yong Zhu, Yam Chong, Yu Tat Tse, Kevin Tan, Shengbo Lu, Li Fu, Chenmin Liu
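The composition ranges quoted in this abstract are easy to sanity-check in a few lines. The sketch below is illustrative only and assumes the percentages are by weight and sum to 100 (the abstract does not state the basis); it is not taken from the patent.

```python
# Check a candidate dry-electrode mixture against the ranges stated in the abstract:
# 85-99% electrode active material, 0-10% conductive carbon, 1-15% polymer binder.
def check_mixture(active_pct: float, carbon_pct: float, binder_pct: float) -> bool:
    total = active_pct + carbon_pct + binder_pct
    return (
        abs(total - 100.0) < 1e-6          # assumed: the three fractions cover the whole mixture
        and 85.0 <= active_pct <= 99.0
        and 0.0 <= carbon_pct <= 10.0
        and 1.0 <= binder_pct <= 15.0
    )

print(check_mixture(94.0, 3.0, 3.0))    # True
print(check_mixture(80.0, 10.0, 10.0))  # False: active material below 85%
```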
  • Publication number: 20250044912
    Abstract: Systems and methods for object-based content recommendation are described. A camera feed comprising a plurality of image frames is caused to be displayed at a client device. An object is detected within an image frame from the camera feed, the object corresponding with an object category. Responsive to detecting the object, an icon associated with the object category is selected and displayed at a position upon the camera feed. The icon corresponds with a media collection related to the object category. An input is received selecting the icon. Responsive to the input, a presentation of media items from the media collection is displayed at the client device. By detecting real-world objects and surfacing relevant virtual icons that link to associated media, an augmented reality experience is provided allowing virtual content to be overlaid and anchored to objects in reality.
    Type: Application
    Filed: July 31, 2024
    Publication date: February 6, 2025
    Inventors: Shubham Chawla, Hyojung Chun, Anvi Dalal, Yunchu He, Hao Hu, Sarah Lensing, Yanjia Li, Ana Medinac, Bindi Patel, Patrick Poirson, Chiung-Fu Shih, Jeremy Staub, Kevin Dechau Tang, Ryan Tran, Andrew Wan, Cindy Wang, Alireza Zareian
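The flow in this abstract (detect an object, pick an icon for its category, present the linked media collection on selection) can be sketched as a simple lookup. Everything below is hypothetical scaffolding invented for illustration, not code from the application.

```python
# Map detected object categories to icons and their media collections,
# and resolve an icon tap to the media items to present.
ICON_BY_CATEGORY = {
    "dog": {"icon": "paw.png", "collection": "pets"},
    "pizza": {"icon": "slice.png", "collection": "food"},
}

def icons_for_frame(detections):
    """detections: list of (category, bbox) pairs for one camera frame."""
    overlays = []
    for category, bbox in detections:
        entry = ICON_BY_CATEGORY.get(category)
        if entry:
            overlays.append({"icon": entry["icon"],
                             "collection": entry["collection"],
                             "position": bbox})
    return overlays

def on_icon_selected(overlay, media_store):
    """Return the media items of the collection tied to the tapped icon."""
    return media_store.get(overlay["collection"], [])
```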
  • Publication number: 20250046026
    Abstract: The present disclosure relates to augmented reality devices and related methods. The augmented reality devices include a projection system. The projection system includes a projector including a major axis. The projector is configured to project an image along the major axis. A prism is configured to refract the image. The image includes a first spectrum, a second spectrum, and a third spectrum. A waveguide is disposed at a wrap angle from a plane formed from the major axis of the projector. The waveguide includes an input coupler and an output coupler.
    Type: Application
    Filed: August 2, 2024
    Publication date: February 6, 2025
    Inventors: David Alexander SELL, Sihui HE, Kevin MESSER, Kunal SHASTRI, Jinxin FU, Samarth BHARGAVA
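The prism in this abstract refracts the three spectra of the projected image by slightly different amounts because refractive index varies with wavelength. A minimal Snell's-law illustration of that effect, using placeholder indices rather than any values from the application:

```python
import math

def refraction_angle(theta_in_deg: float, n_in: float, n_out: float) -> float:
    """Refracted angle in degrees from Snell's law: n_in*sin(i) = n_out*sin(r)."""
    return math.degrees(math.asin(n_in * math.sin(math.radians(theta_in_deg)) / n_out))

# Placeholder glass indices for the three spectra; real values depend on the prism material.
for name, n in [("red", 1.510), ("green", 1.515), ("blue", 1.522)]:
    print(name, round(refraction_angle(30.0, 1.000, n), 3))
```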
  • Publication number: 20250036412
    Abstract: Described herein is a graphics processor comprising a memory interface and a graphics processing cluster coupled with the memory interface. The graphics processing cluster includes a plurality of processing resources. A processing resource of the plurality of processing resources includes a source crossbar communicatively coupled with a register file, the source crossbar to reorder data elements of a source operand and a format conversion pipeline to convert a plurality of input data elements specified by the source operand from a first format of a plurality of datatype formats to a second format of the plurality of datatype formats, the plurality of datatype formats including integer and floating-point formats.
    Type: Application
    Filed: July 25, 2023
    Publication date: January 30, 2025
    Applicant: Intel Corporation
    Inventors: Supratim Pal, Jiasheng Chen, Christopher Spencer, Jorge E. Parra Osorio, Kevin Hurd, Guei-Yuan Lueh, Pradeep K. Golconda, Fangwen Fu, Wei Xiong, Hongzheng Li, James Valerio, Mukundan Swaminathan, Nicholas Murphy, Shuai Mu, Clifford Gibson, Buqi Cheng
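A behavioural sketch of the two stages named in this abstract, a source crossbar that reorders the data elements of a source operand and a format-conversion step from an integer format to a floating-point format. This is a software analogy only and does not reflect the actual hardware datapath.

```python
import numpy as np

def crossbar_reorder(operand: np.ndarray, lane_map: list) -> np.ndarray:
    """Route element lane_map[i] of the source operand to output lane i."""
    return operand[lane_map]

def convert_format(elements: np.ndarray, dst_dtype=np.float16) -> np.ndarray:
    """Convert data elements between datatype formats (here int -> float16)."""
    return elements.astype(dst_dtype)

src = np.arange(8, dtype=np.int32)                       # source operand from the register file
shuffled = crossbar_reorder(src, [7, 6, 5, 4, 3, 2, 1, 0])
print(convert_format(shuffled))                          # [7. 6. 5. 4. 3. 2. 1. 0.] as float16
```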
  • Publication number: 20250036361
    Abstract: Described herein is a graphics processor comprising a memory interface and a graphics processing cluster coupled with the memory interface. The graphics processing cluster includes a multi-lane parallel floating-point unit and a multi-lane parallel integer unit. The multi-lane parallel integer unit includes an integer pipeline including a plurality of parallel integer logic units configured to perform integer compute operations on a plurality of input data elements and a format conversion pipeline including a plurality of parallel format conversion units configured to convert a plurality of input data elements from a first one of a plurality of datatype formats to a second one of the plurality of datatype formats, the plurality of datatype formats including integer and floating-point formats.
    Type: Application
    Filed: July 25, 2023
    Publication date: January 30, 2025
    Applicant: Intel Corporation
    Inventors: Supratim Pal, Jiasheng Chen, Kevin Hurd, Jorge E. Parra Osorio, Christopher Spencer, Guei-Yuan Lueh, Pradeep K. Golconda, Fangwen Fu, Wei Xiong, Hongzheng Li, James Valerio, Mukundan Swaminathan, Nicholas Murphy, Shuai Mu, Clifford Gibson, Buqi Cheng
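The organisation described here, one pipeline that applies an integer operation across all lanes in parallel and a separate pipeline that converts lanes between datatype formats, can be mimicked with vectorised operations. A rough functional sketch, not the processor's actual design:

```python
import numpy as np

def integer_pipeline(a: np.ndarray, b: np.ndarray, op: str) -> np.ndarray:
    """Apply an integer compute operation to every lane at once."""
    return {"add": a + b, "mul": a * b}[op]

def format_conversion_pipeline(lanes: np.ndarray, dst_dtype) -> np.ndarray:
    """Convert every lane from its current format to dst_dtype in parallel."""
    return lanes.astype(dst_dtype)

a = np.array([1, 2, 3, 4], dtype=np.int16)
b = np.array([10, 20, 30, 40], dtype=np.int16)
print(integer_pipeline(a, b, "mul"))              # integer lanes
print(format_conversion_pipeline(a, np.float32))  # the same lanes converted to float32
```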
  • Publication number: 20250037347
    Abstract: Described herein is a graphics processor comprising an instruction cache and a plurality of processing elements coupled with the instruction cache. The plurality of processing elements include functional units configured to provide an integer pipeline to execute instructions to perform operations on integer data elements. The integer pipeline includes a first multiplier and a second multiplier, the first multiplier and the second multiplier configured to execute operations for a single instruction.
    Type: Application
    Filed: July 25, 2023
    Publication date: January 30, 2025
    Applicant: Intel Corporation
    Inventors: Jiasheng Chen, Supratim Pal, Kevin Hurd, Jorge E. Parra Osorio, Christopher Spencer, Takashi Nakagawa, Guei-Yuan Lueh, Pradeep K. Golconda, James Valerio, Mukundan Swaminathan, Nicholas Murphy, Clifford Gibson, Li-An Tang, Fangwen Fu, Kaiyu Chen, Buqi Cheng
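One conventional way two multipliers can cooperate on a single multiply instruction is to split an operand into high and low halves, let each multiplier produce a partial product, and recombine them. The sketch below shows that general idea only; the abstract does not describe the datapath at this level.

```python
MASK16 = 0xFFFF

def mul_with_two_multipliers(a: int, b: int) -> int:
    """Unsigned multiply built from two partial products, one per multiplier."""
    b_lo = b & MASK16
    b_hi = (b >> 16) & MASK16
    p_lo = a * b_lo            # multiplier 0
    p_hi = a * b_hi            # multiplier 1 (can run in parallel with multiplier 0)
    return (p_hi << 16) + p_lo

a, b = 0x12345678, 0x9ABCDEF0
assert mul_with_two_multipliers(a, b) == a * b
```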
  • Publication number: 20250027330
    Abstract: Aspects of the subject technology relate to a removable shelter that electrically and/or communicatively integrates with an electric vehicle when the removable shelter is mounted to the vehicle. The removable shelter may leverage electrical contacts in accessory mounting ports on the roof, crossbars, and/or truck bed of the vehicle. The removable shelter may also communicate wirelessly with the electric vehicle. The removable shelter may include any of various integrated electronic accessories that can be powered by the vehicle battery, including, but not limited to, external lighting, external proximity sensing, proximity-based external lighting, interior lighting, air temperature control, other temperature control, speakers, charging ports for mobile phones and/or other devices, and/or other features.
    Type: Application
    Filed: July 19, 2024
    Publication date: January 23, 2025
    Inventors: Kevin Karl MAYER, Nathan Philip WANG, Daniel Geoffrey WALKER, Neil Joseph KWIATKOWSKI, Paula Michelle LOBACCARO, Evan Patrick HIGGINS, Fong Shyr YANG, Kaitlyn Noel OLAH, Jeremy FU, Matthew MATERA, Steven Digby NICOL
  • Patent number: 12205380
    Abstract: A computer vision processor of a camera generates hyperzooms for persons or vehicles from image frames captured by the camera. The hyperzooms include a first hyperzoom associated with the persons or vehicles. The computer vision processor tracks traffic patterns of the persons or vehicles while obviating network usage by the camera by predicting positions of the persons or vehicles using a Kalman Filter from the first hyperzoom. The persons or vehicles are detected in a second hyperzoom. The positions of the persons or vehicles are updated based on detecting the persons or vehicles in the second hyperzoom. The first hyperzoom is removed from the camera. Tracks of the persons or vehicles are generated based on the updated positions. The second hyperzoom is removed from the camera. Track metadata is generated from the tracks for storing in a key-value database located on a non-transitory computer-readable storage medium of the camera.
    Type: Grant
    Filed: January 3, 2024
    Date of Patent: January 21, 2025
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
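The Kalman Filter prediction mentioned in this abstract is the standard predict/update cycle over a position-velocity state. A generic constant-velocity sketch follows; the matrices and noise values are textbook placeholders, not parameters from the patent.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)  # state transition
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)                                # observe x, y only
Q = np.eye(4) * 0.01   # process noise
R = np.eye(2) * 1.0    # measurement noise

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

x = np.array([100.0, 50.0, 0.0, 0.0])          # [x, y, vx, vy] from the first hyperzoom
P = np.eye(4)
x, P = predict(x, P)                            # predicted position for the next hyperzoom
x, P = update(x, P, np.array([103.0, 52.0]))    # corrected by the detection in that hyperzoom
```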
  • Publication number: 20240380970
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
    Type: Application
    Filed: July 25, 2024
    Publication date: November 14, 2024
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
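The condition-gated capture described here reduces to evaluating user-specified predicates against each frame and its context, and capturing when one holds. All names in the sketch below are hypothetical; the application does not publish an API.

```python
def should_capture(frame, context, conditions) -> bool:
    """True when any user-specified condition holds for this frame/context."""
    return any(condition(frame, context) for condition in conditions)

def cake_visible(frame, context) -> bool:
    # Example condition: labels produced by an upstream vision model, stored in context.
    return "birthday cake" in context.get("detected_labels", [])

def monitor(camera_frames, context, conditions, capture):
    for frame in camera_frames:
        if should_capture(frame, context, conditions):
            capture(frame)
```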
  • Patent number: 12056183
    Abstract: A computer vision processor of a camera extracts attributes of persons or vehicles from hyperzooms generated from image frames. The hyperzooms represent traffic patterns. The extracting is performed using a feature extractor of an on-camera convolutional neural network (CNN) including an inverted residual structure. The attributes include at least colors of clothing of the persons or colors of the vehicles. Mobile semantic segmentation models of the CNN are generated using the hyperzooms and the attributes. Attribute analytics are generated by executing the mobile semantic segmentation models while obviating network usage by the camera. The attribute analytics are stored in a key-value database located on a memory card of the camera. A query is received from a server instance specifying one or more of the attributes. The attribute analytics are filtered using the one or more of the attributes to obtain a portion of the traffic patterns.
    Type: Grant
    Filed: July 5, 2023
    Date of Patent: August 6, 2024
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
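The "inverted residual structure" this abstract refers to is commonly understood as the MobileNetV2-style block: a 1x1 expansion, a depthwise 3x3 convolution, and a 1x1 linear projection, with a skip connection when shapes match. A generic PyTorch sketch of such a block (the on-camera model's real architecture is not public):

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1, expand=6):
        super().__init__()
        hidden = in_ch * expand
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),              # 1x1 expansion
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1,
                      groups=hidden, bias=False),                  # depthwise 3x3
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),              # 1x1 linear projection
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_skip else out

features = InvertedResidual(32, 32)(torch.randn(1, 32, 56, 56))   # e.g. one hyperzoom crop
```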
  • Patent number: 12052492
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
    Type: Grant
    Filed: August 8, 2023
    Date of Patent: July 30, 2024
    Assignee: GOOGLE LLC
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Publication number: 20240233397
    Abstract: A computer vision processor of a camera generates hyperzooms for persons or vehicles from image frames captured by the camera. The hyperzooms include a first hyperzoom associated with the persons or vehicles. The computer vision processor tracks traffic patterns of the persons or vehicles while obviating network usage by the camera by predicting positions of the persons or vehicles using a Kalman Filter from the first hyperzoom. The persons or vehicles are detected in a second hyperzoom. The positions of the persons or vehicles are updated based on detecting the persons or vehicles in the second hyperzoom. The first hyperzoom is removed from the camera. Tracks of the persons or vehicles are generated based on the updated positions. The second hyperzoom is removed from the camera. Track metadata is generated from the tracks for storing in a key-value database located on a non-transitory computer-readable storage medium of the camera.
    Type: Application
    Filed: January 3, 2024
    Publication date: July 11, 2024
    Inventors: Yi XU, Mayank GUPTA, Xia YANG, Yuanyuan CHEN, Zixiao (Shawn) WANG, Qiang (Kevin) FU, Yunchao GONG, Naresh NAGABUSHAN
  • Publication number: 20240187735
    Abstract: Rolling shutter and movable lens structures widely found in smartphone cameras modulate structure-borne sounds onto camera images, creating a point-of-view optical-acoustic side channel for acoustic eavesdropping. The movement of smartphone camera hardware leaks acoustic information because images unwittingly modulate ambient sound as imperceptible distortions. Experiments have found that the side channel is further amplified by intrinsic behaviors of complementary metal-oxide-semiconductor (CMOS) rolling shutters and movable lenses such as in optical image stabilization (OIS) and auto focus (AF). This disclosure characterizes the limits of acoustic information leakage caused by structure-borne sound that perturbs the point-of-view of smartphone cameras. In contrast with traditional optical-acoustic eavesdropping on vibrating objects, this side channel requires no line of sight and no object within the camera's field of view.
    Type: Application
    Filed: October 17, 2023
    Publication date: June 6, 2024
    Applicants: THE REGENTS OF THE UNIVERSITY OF MICHIGAN, UNIVERSITY OF FLORIDA RESEARCH FOUNDATION, INC.
    Inventors: Yan LONG, Kevin FU, Kevin BUTLER, Sara RAMPAZZI, Pirouz NAGHAVI
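A back-of-the-envelope view of why the rolling shutter matters here: rows of each frame are read out sequentially, so row-wise intensity changes sample vibration far faster than the frame rate alone. The numbers below are generic assumptions, not measurements from this work.

```python
frame_rate_hz = 30        # assumed smartphone video frame rate
rows_per_frame = 1080     # assumed number of rows read sequentially per frame

row_sample_rate_hz = frame_rate_hz * rows_per_frame   # effective row-wise sampling rate
nyquist_hz = row_sample_rate_hz / 2                    # upper bound on the recoverable audio band

print(f"row sampling rate ~= {row_sample_rate_hz} Hz; "
      f"recoverable band below ~{nyquist_hz:.0f} Hz (ignoring inter-frame gaps)")
```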
  • Patent number: 11900688
    Abstract: A computer vision processor of a camera generates hyperzooms for persons or vehicles from image frames captured by the camera. The hyperzooms include a first hyperzoom associated with the persons or vehicles. The computer vision processor tracks traffic patterns of the persons or vehicles while obviating network usage by the camera by predicting positions of the persons or vehicles using a Kalman Filter from the first hyperzoom. The persons or vehicles are detected in a second hyperzoom. The positions of the persons or vehicles are updated based on detecting the persons or vehicles in the second hyperzoom. The first hyperzoom is removed from the camera. Tracks of the persons or vehicles are generated based on the updated positions. The second hyperzoom is removed from the camera. Track metadata is generated from the tracks for storing in a key-value database located on a non-transitory computer-readable storage medium of the camera.
    Type: Grant
    Filed: January 17, 2023
    Date of Patent: February 13, 2024
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
  • Publication number: 20240022809
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
    Type: Application
    Filed: August 8, 2023
    Publication date: January 18, 2024
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Publication number: 20230367808
    Abstract: A computer vision processor of a camera extracts attributes of persons or vehicles from hyperzooms generated from image frames. The hyperzooms represent traffic patterns. The extracting is performed using a feature extractor of an on-camera convolutional neural network (CNN) including an inverted residual structure. The attributes include at least colors of clothing of the persons or colors of the vehicles. Mobile semantic segmentation models of the CNN are generated using the hyperzooms and the attributes. Attribute analytics are generated by executing the mobile semantic segmentation models while obviating network usage by the camera. The attribute analytics are stored in a key-value database located on a memory card of the camera. A query is received from a server instance specifying one or more of the attributes. The attribute analytics are filtered using the one or more of the attributes to obtain a portion of the traffic patterns.
    Type: Application
    Filed: July 5, 2023
    Publication date: November 16, 2023
    Inventors: Yi XU, Mayank GUPTA, Xia YANG, Yuanyuan CHEN, Zixiao (Shawn) WANG, Qiang (Kevin) FU, Yunchao GONG, Naresh NAGABUSHAN
  • Publication number: 20230298359
    Abstract: A computer vision processor of a camera generates hyperzooms for persons or vehicles from image frames captured by the camera. The hyperzooms include a first hyperzoom associated with the persons or vehicles. The computer vision processor tracks traffic patterns of the persons or vehicles while obviating network usage by the camera by predicting positions of the persons or vehicles using a Kalman Filter from the first hyperzoom. The persons or vehicles are detected in a second hyperzoom. The positions of the persons or vehicles are updated based on detecting the persons or vehicles in the second hyperzoom. The first hyperzoom is removed from the camera. Tracks of the persons or vehicles are generated based on the updated positions. The second hyperzoom is removed from the camera. Track metadata is generated from the tracks for storing in a key-value database located on a non-transitory computer-readable storage medium of the camera.
    Type: Application
    Filed: January 17, 2023
    Publication date: September 21, 2023
    Inventors: Yi XU, Mayank GUPTA, Xia YANG, Yuanyuan CHEN, Zixiao (Shawn) WANG, Qiang (Kevin) FU, Yunchao GONG, Naresh NAGABUSHAN
  • Patent number: 11765452
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
    Type: Grant
    Filed: January 13, 2023
    Date of Patent: September 19, 2023
    Assignee: GOOGLE LLC
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Patent number: 11734343
    Abstract: A computer vision processor of a camera extracts attributes of persons or vehicles from hyperzooms generated from image frames. The hyperzooms represent traffic patterns. The extracting is performed using a feature extractor of an on-camera convolutional neural network (CNN) including an inverted residual structure. The attributes include at least colors of clothing of the persons or colors of the vehicles. Mobile semantic segmentation models of the CNN are generated using the hyperzooms and the attributes. Attribute analytics are generated by executing the mobile semantic segmentation models while obviating network usage by the camera. The attribute analytics are stored in a key-value database located on a memory card of the camera. A query is received from a server instance specifying one or more of the attributes. The attribute analytics are filtered using the one or more of the attributes to obtain a portion of the traffic patterns.
    Type: Grant
    Filed: January 17, 2023
    Date of Patent: August 22, 2023
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
  • Publication number: 20230156322
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
    Type: Application
    Filed: January 13, 2023
    Publication date: May 18, 2023
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari