Patents Assigned to AltumView Systems Inc.
  • Patent number: 11645765
    Abstract: Embodiments described herein provide various examples of real-time visual object tracking. In another aspect, a process for performing a local re-identification of a target object which was earlier detected in a video but later lost when tracking the target object is disclosed. This process begins by receiving a current video frame of the video and a predicted location of the target object. The process then places a current search window in the current video frame centered on or in the vicinity of the predicted location of the target object. Next, the process extracts a feature map from an image patch within the current search window. The process further retrieves a set of stored feature maps computed at a set of previously-determined locations of the target object from a set of previously-processed video frames in the video. The process next computes a set of correlation maps between the feature map and each of the set of stored feature maps.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: May 9, 2023
    Assignee: AltumView Systems Inc.
    Inventors: Yu Gao, Xing Wang, Rui Ma, Chao Shen, Minghua Chen, Jie Liang, Jianbing Wu
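The correlation-based re-identification step in the abstract above can be sketched as follows, assuming the feature maps are NumPy arrays of shape (channels, height, width). The FFT-based circular correlation, the normalization, and the acceptance threshold are illustrative choices, not the method claimed in the patent.

```python
import numpy as np

def correlation_map(feature, template):
    """Circular cross-correlation of two feature maps of the same spatial size,
    summed over channels (computed via FFT).
    feature, template: arrays of shape (C, H, W)."""
    F = np.fft.fft2(feature, axes=(-2, -1))
    T = np.fft.fft2(template, axes=(-2, -1))
    # Correlation in the spatial domain = product with a conjugate in the Fourier domain.
    corr = np.fft.ifft2(F * np.conj(T), axes=(-2, -1)).real
    return corr.sum(axis=0)                     # (H, W) correlation map

def local_reidentify(current_feature, stored_features, threshold=0.5):
    """Compare the feature map from the current search window against the set of
    stored feature maps and report the best match and its peak location."""
    best_score, best_index, best_peak = -np.inf, None, None
    for i, stored in enumerate(stored_features):
        cmap = correlation_map(current_feature, stored)
        peak = np.unravel_index(np.argmax(cmap), cmap.shape)
        score = cmap[peak] / (np.linalg.norm(current_feature) *
                              np.linalg.norm(stored) + 1e-12)
        if score > best_score:
            best_score, best_index, best_peak = score, i, peak
    reidentified = best_score >= threshold
    return reidentified, best_index, best_peak, best_score

# Example with random feature maps (16 channels, 32x32 spatial size).
rng = np.random.default_rng(0)
current = rng.standard_normal((16, 32, 32))
stored = [rng.standard_normal((16, 32, 32)) for _ in range(5)]
stored.append(current + 0.05 * rng.standard_normal(current.shape))  # near-duplicate
print(local_reidentify(current, stored))
```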
  • Publication number: 20220147736
    Abstract: Various embodiments of predicting human actions are disclosed. In one aspect, a human action prediction system first receives a sequence of video images including at least a first person. Next, for each image in the sequence of video images, the human action prediction system detects the first person in the video image; and subsequently extracts a skeleton figure of the detected first person from the detected image of the first person, wherein the skeleton figure is composed of a set of human keypoints of the detected first person. Next, the human action prediction system combines a sequence of extracted skeleton figures of the detected first person from the sequence of video images to form a first skeleton sequence of the detected first person which depicts a continuous motion of the detected first person.
    Type: Application
    Filed: November 9, 2021
    Publication date: May 12, 2022
    Applicant: AltumView Systems Inc.
    Inventors: Chi Chung Chan, Dong Zhang, Yu Gao, Andrew Tsun-Hong Au, Zachary DeVries, Jie Liang
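A minimal sketch of how per-frame skeleton figures might be accumulated into a skeleton sequence for action prediction. The COCO-style 17-keypoint layout, the 30-frame window, and the placeholder pose extractor are assumptions for illustration; the publication does not prescribe them.

```python
import numpy as np
from dataclasses import dataclass, field

# Hypothetical keypoint layout; the publication does not specify one.
NUM_KEYPOINTS = 17          # e.g. a COCO-style skeleton

@dataclass
class SkeletonSequence:
    """A rolling window of per-frame skeleton figures for one tracked person."""
    window: int = 30                          # frames kept for prediction
    frames: list = field(default_factory=list)

    def add(self, keypoints: np.ndarray) -> None:
        """keypoints: (NUM_KEYPOINTS, 2) array of (x, y) image coordinates."""
        assert keypoints.shape == (NUM_KEYPOINTS, 2)
        self.frames.append(keypoints)
        if len(self.frames) > self.window:
            self.frames.pop(0)

    def as_tensor(self) -> np.ndarray:
        """Stack the window into a (frames, keypoints, 2) tensor for a
        downstream action-prediction model."""
        return np.stack(self.frames, axis=0)

def extract_skeleton(frame: np.ndarray) -> np.ndarray:
    """Placeholder for a pose-estimation model; returns random keypoints here."""
    h, w = frame.shape[:2]
    return np.random.rand(NUM_KEYPOINTS, 2) * [w, h]

# Build a skeleton sequence from a stream of (synthetic) video frames.
seq = SkeletonSequence()
for _ in range(45):
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    seq.add(extract_skeleton(frame))
print(seq.as_tensor().shape)   # (30, 17, 2): continuous motion of one person
```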
  • Publication number: 20220114739
    Abstract: Embodiments described herein provide various examples of real-time visual object tracking. In another aspect, a process for performing a local re-identification of a target object which was earlier detected in a video but later lost when tracking the target object is disclosed. This process begins by receiving a current video frame of the video and a predicted location of the target object. The process then places a current search window in the current video frame centered on or in the vicinity of the predicted location of the target object. Next, the process extracts a feature map from an image patch within the current search window. The process further retrieves a set of stored feature maps computed at a set of previously-determined locations of the target object from a set of previously-processed video frames in the video. The process next computes a set of correlation maps between the feature map and each of the set of stored feature maps.
    Type: Application
    Filed: December 21, 2021
    Publication date: April 14, 2022
    Applicant: AltumView Systems Inc.
    Inventors: Yu Gao, Xing Wang, Rui Ma, Chao Shen, Minghua Chen, Jie Liang, Jianbing Wu
  • Publication number: 20220079472
    Abstract: Various embodiments of a fall-detection system for detecting personal falls while preserving the privacy of a detected person are disclosed. This disclosed fall-detection system can begin by receiving a sequence of video images comprising a person being monitored. The disclosed fall-detection system then processes each video image in the sequence of video images by: detecting the person in the video image; and extracting a skeletal figure of the detected person by identifying a set of human keypoints from the detected person. Next, the disclosed fall-detection system processes the sequence of skeletal figures corresponding to the sequence of video images by labeling each skeletal figure in the sequence of skeletal figures with an action among a set of predetermined actions. The disclosed fall-detection system subsequently generates a fall/non-fall decision for the detected person based on the set of action labels corresponding to the sequence of video images.
    Type: Application
    Filed: November 23, 2021
    Publication date: March 17, 2022
    Applicant: AltumView Systems Inc.
    Inventors: Andrew Tsun-Hong Au, Dong Zhang, Chi Chung Chan, Jiannan Zheng
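The final labeling-then-decision step could look roughly like the sketch below. The action vocabulary and the "falling followed by sustained lying" rule are hypothetical; the publication only specifies a fall/non-fall decision derived from the per-frame action labels.

```python
from collections import Counter

# Hypothetical action vocabulary; the abstract only says "a set of predetermined actions".
ACTIONS = ("standing", "sitting", "walking", "falling", "lying")

def fall_decision(action_labels, min_fall_frames=5, confirm_lying_frames=10):
    """Turn a sequence of per-frame action labels into a fall / non-fall decision:
    a fall is reported when enough 'falling' frames are followed by a sustained
    'lying' period (both thresholds are illustrative)."""
    counts = Counter(action_labels)
    saw_fall = counts["falling"] >= min_fall_frames
    tail = action_labels[-confirm_lying_frames:]
    stays_down = len(tail) == confirm_lying_frames and all(a == "lying" for a in tail)
    return saw_fall and stays_down

labels = ["walking"] * 20 + ["falling"] * 6 + ["lying"] * 12
print(fall_decision(labels))   # True
```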
  • Patent number: 11205274
    Abstract: Embodiments described herein provide examples of a real-time visual object tracking system. In one aspect, an unmanned aerial vehicle (UAV) capable of performing real-time visual tracking of a moving object includes: a processor; a memory coupled to the processor; and a camera to capture a video of the moving object. This UAV additionally includes a visual tracking module to: receive a first video image and a first location of the object; receive a second video image following the first video image; place a first search window in the first video image and a second search window in the second video image centered on a second location in the second video image having the same coordinates as the first location; compute a correlation between an image patch within the first search window and an image patch within the second search window; and determine an updated location of the object in the second video image.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: December 21, 2021
    Assignee: AltumView Systems Inc.
    Inventors: Yu Gao, Xing Wang, Eric Honsch, Rui Ma, Chao Shen, Minghua Chen, Ye Lu, Jie Liang, Jianbing Wu
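A rough sketch of the search-window correlation described in this abstract, using OpenCV's normalized template matching as a stand-in for the correlation computation. The window sizes and the matching method are illustrative, not taken from the patent.

```python
import numpy as np
import cv2  # OpenCV is used here only for normalized template matching

def track_in_next_frame(prev_frame, next_frame, prev_loc, obj_size=32, search=96):
    """Crop an object patch around the previous location in the previous frame,
    place a larger search window at the same coordinates in the next frame,
    correlate the two, and return the updated object location."""
    def crop(img, cx, cy, size):
        half = size // 2
        x0, y0 = max(cx - half, 0), max(cy - half, 0)
        return img[y0:y0 + size, x0:x0 + size], x0, y0

    x, y = prev_loc
    template, _, _ = crop(prev_frame, x, y, obj_size)
    window, wx0, wy0 = crop(next_frame, x, y, search)

    # Normalized cross-correlation between the object patch and the search window.
    response = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(response)
    new_x = wx0 + top_left[0] + obj_size // 2
    new_y = wy0 + top_left[1] + obj_size // 2
    return (new_x, new_y), score

# Synthetic example: a bright square on a noisy background moves 5 pixels.
rng = np.random.default_rng(0)
background = rng.integers(0, 30, (240, 320), dtype=np.uint8)
prev_frame = background.copy(); prev_frame[100:120, 150:170] = 255
next_frame = background.copy(); next_frame[105:125, 155:175] = 255
print(track_in_next_frame(prev_frame, next_frame, prev_loc=(160, 110)))
```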
  • Publication number: 20210378554
    Abstract: Various embodiments of a health-monitoring system are disclosed. In one aspect, this health-monitoring system includes a server and a set of health-monitoring sensors communicatively coupled to the server. In some embodiments, the server is configured to establish a new profile for a person to be monitored by: receiving a new profile request for establishing the new profile; generating a unique person-ID for the received new profile request; creating a new entry including the unique person-ID for the person in a profile database; and transmitting the unique person-ID along with the profile photos of the person to the set of health-monitoring sensors. Moreover, each health-monitoring sensor in the set of sensors is configured to establish the new profile by adding a new entry for the person in a person-ID dictionary of the health-monitoring sensor based on the received person-ID and the profile photos of the person.
    Type: Application
    Filed: August 23, 2021
    Publication date: December 9, 2021
    Applicant: AltumView Systems Inc.
    Inventors: Andrew Tsun-Hong Au, Dong Zhang, Chi Chung Chan, Jiannan Zheng
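The profile-establishment flow between the server and the health-monitoring sensors might be modeled along these lines; the class names, the UUID-based person-ID, and the dictionary layout are assumptions for illustration only.

```python
import uuid

class HealthMonitoringSensor:
    """Each sensor keeps its own person-ID dictionary mapping IDs to profile photos."""
    def __init__(self, name):
        self.name = name
        self.person_id_dictionary = {}

    def add_profile(self, person_id, profile_photos):
        self.person_id_dictionary[person_id] = list(profile_photos)

class Server:
    """The server owns the profile database and pushes new profiles to the sensors."""
    def __init__(self, sensors):
        self.sensors = sensors
        self.profile_database = {}

    def establish_new_profile(self, name, profile_photos):
        person_id = str(uuid.uuid4())                  # unique person-ID
        self.profile_database[person_id] = {"name": name,
                                            "photos": list(profile_photos)}
        for sensor in self.sensors:                    # distribute to every sensor
            sensor.add_profile(person_id, profile_photos)
        return person_id

sensors = [HealthMonitoringSensor("living-room"), HealthMonitoringSensor("bedroom")]
server = Server(sensors)
pid = server.establish_new_profile("Alice", ["alice_front.jpg", "alice_side.jpg"])
print(pid in sensors[0].person_id_dictionary, pid in sensors[1].person_id_dictionary)
```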
  • Patent number: 10943096
    Abstract: Embodiments described herein provide various examples of a face-image training data preparation system for performing large-scale face-image training data acquisition, preprocessing, cleaning, balancing, and post-processing. The disclosed training data preparation system can collect a very large set of loosely-labeled images of different people from the public domain, and then generate a raw training dataset including a set of incorrectly-labeled face images. The disclosed training data preparation system can then perform cleaning and balancing operations on the raw training dataset to generate a high-quality face-image training dataset free of the incorrectly-labeled face images. The processed high-quality face-image training dataset can be subsequently used to train deep-neural-network-based face recognition systems to achieve high performance in various face recognition applications.
    Type: Grant
    Filed: December 31, 2017
    Date of Patent: March 9, 2021
    Assignee: AltumView Systems Inc.
    Inventors: Zili Yi, Xing Wang, Him Wai Ng, Sami Ma, Jie Liang
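One plausible way to realize the cleaning and balancing steps, assuming a face-embedding model has already been applied to the loosely-labeled images. The distance threshold, the per-identity cap, and the use of a median centre are illustrative choices, not the patented procedure.

```python
import numpy as np

def clean_and_balance(embeddings_by_person, distance_threshold=1.2, cap_per_person=50):
    """Drop face images whose embedding lies far from a robust centre of their
    (loosely assigned) identity, then cap the number of images kept per identity
    so the dataset stays balanced."""
    kept_indices = {}
    for person, embs in embeddings_by_person.items():
        embs = np.asarray(embs)
        centre = np.median(embs, axis=0)                      # robust to mislabeled images
        dists = np.linalg.norm(embs - centre, axis=1)
        keep = np.where(dists <= distance_threshold)[0]       # cleaning step
        keep = keep[np.argsort(dists[keep])][:cap_per_person]  # balancing step
        kept_indices[person] = keep.tolist()
    return kept_indices

# Synthetic embeddings: 60 consistent images plus 3 mislabeled ones for person_a.
rng = np.random.default_rng(1)
dataset = {
    "person_a": np.vstack([rng.normal(0.0, 0.05, (60, 128)),
                           rng.normal(3.0, 0.05, (3, 128))]),
    "person_b": rng.normal(1.0, 0.05, (40, 128)),
}
kept = clean_and_balance(dataset)
print({p: len(idx) for p, idx in kept.items()})   # {'person_a': 50, 'person_b': 40}
```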
  • Publication number: 20200320278
    Abstract: Embodiments described herein provide various examples of a face-detection system. In one aspect, a process for performing image detections on grayscale images is disclosed. This process can begin by receiving a training image dataset, wherein the training image dataset includes a first subset of color images. The process then converts each image in the first subset of color images in the training image dataset into a grayscale image to obtain a first subset of converted grayscale images. Next, the process trains an image-detection statistical model using the training image dataset including the first subset of converted grayscale images. The process next receives a set of grayscale input images. The process subsequently performs image detections on the set of grayscale input images using the trained image-detection statistical model.
    Type: Application
    Filed: June 23, 2020
    Publication date: October 8, 2020
    Applicant: AltumView Systems Inc.
    Inventors: Him Wai Ng, Xing Wang, Yu Gao, Rui Ma
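A small sketch of the grayscale-conversion step applied to the color subset of the training images. The BT.601 luminance weights are a common convention, not necessarily the conversion used in the publication.

```python
import numpy as np

def to_grayscale(image: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) RGB image to a single-channel grayscale image
    using the usual ITU-R BT.601 luminance weights."""
    if image.ndim == 2:                      # already grayscale
        return image
    weights = np.array([0.299, 0.587, 0.114])
    return (image[..., :3] @ weights).astype(image.dtype)

def prepare_training_set(images):
    """Convert every color image in the training set to grayscale so the
    detection model is trained on the same kind of input (grayscale images)
    that it will see at inference time."""
    return [to_grayscale(img) for img in images]

# Mixed dataset: one color image and one that is already grayscale.
rng = np.random.default_rng(0)
dataset = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8),
           rng.integers(0, 256, (64, 64), dtype=np.uint8)]
gray_dataset = prepare_training_set(dataset)
print([img.shape for img in gray_dataset])   # [(64, 64), (64, 64)]
```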
  • Patent number: 10776939
    Abstract: Embodiments described herein provide various examples of an automatic obstacle avoidance system for unmanned vehicles using embedded stereo vision techniques. In one aspect, an unmanned aerial vehicle (UAV) capable of performing autonomous obstacle detection and avoidance is disclosed. This UAV includes: a stereo vision camera set coupled to the one or more processors and the memory to capture a sequence of stereo images; and a stereo vision module configured to: receive a pair of stereo images captured by a pair of stereo vision cameras; perform a border cropping operation on the pair of stereo images to obtain a pair of cropped stereo images; perform a subsampling operation on the pair of cropped stereo images to obtain a pair of subsampled stereo images; and perform a dense stereo matching operation on the pair of subsampled stereo images to generate a dense three-dimensional (3D) point map of a space corresponding to the pair of stereo images.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: September 15, 2020
    Assignee: AltumView Systems Inc.
    Inventors: Rui Ma, Chao Shen, Yu Gao, Ye Lu, Minghua Chen, Jie Liang, Jianbing Wu
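The crop, subsample, match, and reproject pipeline might be approximated with OpenCV as below; the SGBM parameters, crop margin, subsampling factor, and reprojection matrix Q are placeholders, not values from the patent.

```python
import numpy as np
import cv2  # OpenCV's stereo matcher stands in for the embedded implementation

def dense_point_map(left, right, Q, crop=16, subsample=2):
    """Border-crop and subsample a rectified stereo pair, run dense stereo
    matching, and reproject the disparity into a dense 3D point map."""
    # 1. Border cropping removes the image edges where matching is unreliable.
    left = left[crop:-crop, crop:-crop]
    right = right[crop:-crop, crop:-crop]
    # 2. Subsampling reduces the resolution to fit an embedded compute budget.
    left = left[::subsample, ::subsample]
    right = right[::subsample, ::subsample]
    # 3. Dense stereo matching (semi-global block matching here).
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0
    # 4. Reproject disparities to a dense 3D point map (one XYZ triple per pixel).
    return cv2.reprojectImageTo3D(disparity, Q)

# Synthetic rectified pair and a plausible reprojection matrix Q.
left = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
right = np.roll(left, -8, axis=1)             # uniform 8-pixel disparity
Q = np.float32([[1, 0, 0, -160], [0, 1, 0, -120], [0, 0, 0, 300], [0, 0, 1 / 0.1, 0]])
points_3d = dense_point_map(left, right, Q)
print(points_3d.shape)                         # (H', W', 3) dense 3D point map
```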
  • Publication number: 20200211154
    Abstract: Various embodiments of a vision-based privacy-preserving embedded fall-detection system are disclosed. This embedded fall-detection system can include one or more cameras for capturing video images of one or more persons. Moreover, this embedded fall-detection system can include various fall-detection modules for processing the captured video images including: a pose-estimation module, an action-recognition module, and a fall-detection module, all of which can perform the intended fall-detection functionalities within the embedded system environment in real-time in order to detect falls of the one or more persons. When a fall is detected, instead of sending the original captured images, the embedded fall-detection system can transmit sanitized video images to the server, wherein each detected person is represented by a skeleton figure in place of the actual person images, thereby preserving the privacy of the detected person.
    Type: Application
    Filed: November 2, 2019
    Publication date: July 2, 2020
    Applicant: AltumView Systems Inc.
    Inventors: Him Wai Ng, Xing Wang, Jiannan Zheng, Andrew Tsun-Hong Au, Chi Chung Chan, Kuan Huan Lin, Dong Zhang, Eric Honsch, Kwun-Keat Chan, Minghua Chen, Yu Gao, Adrian Kee-Ley Auk, Karen Ly-Ma, Adrian Fettes, Jianbing Wu, Ye Lu
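The sanitization step, replacing the person's image with a skeleton figure before transmission, could be rendered roughly as follows. The bone list over COCO-style keypoint indices and the drawing style are assumptions for illustration.

```python
import numpy as np
import cv2  # used only to rasterize the skeleton onto a blank canvas

# Hypothetical bone list over COCO-style keypoint indices; the publication does not fix one.
BONES = [(5, 7), (7, 9), (6, 8), (8, 10),         # arms
         (11, 13), (13, 15), (12, 14), (14, 16),  # legs
         (5, 6), (11, 12), (5, 11), (6, 12)]      # torso

def sanitize_frame(frame_shape, keypoints):
    """Render a detected person as a skeleton figure on a blank image of the
    same size as the original frame, so the server never receives the actual
    person images."""
    h, w = frame_shape[:2]
    canvas = np.zeros((h, w, 3), dtype=np.uint8)
    for a, b in BONES:
        pa = tuple(int(v) for v in keypoints[a])
        pb = tuple(int(v) for v in keypoints[b])
        cv2.line(canvas, pa, pb, (0, 255, 0), 2)
    for x, y in keypoints:
        cv2.circle(canvas, (int(x), int(y)), 3, (0, 0, 255), -1)
    return canvas

# Example: 17 random keypoints standing in for a pose-estimation result.
keypoints = np.random.rand(17, 2) * [640, 480]
sanitized = sanitize_frame((480, 640, 3), keypoints)
print(sanitized.shape, sanitized.any())   # (480, 640, 3) True
```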
  • Publication number: 20200205697
    Abstract: Various embodiments of a video-based fall risk assessment system are disclosed. During operation, this fall risk assessment system can receive a sequence of video frames including a person being monitored for fall risk assessment. The system next generates a sequence of action labels for the sequence of video frames by, for each video frame in the sequence of video frames: estimating a pose of the person within the video frame; and classifying the estimated pose as a given action among a set of predetermined actions. Next, the system identifies a subset of action labels within the sequence of action labels. The system next extracts a set of gait features for the person from a subset of video frames within the sequence of video frames corresponding to the subset of action labels. Subsequently, the system analyzes the set of extracted gait features to generate a fall risk assessment for the person.
    Type: Application
    Filed: December 30, 2019
    Publication date: July 2, 2020
    Applicant: AltumView Systems Inc.
    Inventors: Jiannan Zheng, Chao Shen, Dong Zhang, Jie Liang
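A simplified illustration of extracting gait features from the frames whose action label is "walking". The specific features (walking speed, ankle-gap variability, walking fraction) and the keypoint indices are hypothetical stand-ins for whatever gait features the system actually computes.

```python
import numpy as np

def extract_gait_features(keypoint_seq, action_labels, fps=15.0,
                          left_ankle=15, right_ankle=16):
    """Restrict the video to frames labeled 'walking' and compute a few simple
    gait features from the hip and ankle keypoints."""
    keypoint_seq = np.asarray(keypoint_seq)            # (frames, keypoints, 2)
    walking = np.array([lbl == "walking" for lbl in action_labels])
    walk_kp = keypoint_seq[walking]
    if len(walk_kp) < 2:
        return None                                    # not enough data to assess

    hips = walk_kp[:, 11:13].mean(axis=1)              # mid-hip as body position
    speed = np.linalg.norm(np.diff(hips, axis=0), axis=1).mean() * fps
    ankle_gap = np.linalg.norm(walk_kp[:, left_ankle] - walk_kp[:, right_ankle], axis=1)
    return {"walking_fraction": float(walking.mean()),
            "mean_speed_px_per_s": float(speed),
            "ankle_gap_std_px": float(ankle_gap.std())}

# Synthetic sequence: 60 frames of keypoints, half labeled 'walking'.
rng = np.random.default_rng(0)
keypoints = rng.random((60, 17, 2)) * [640, 480]
labels = ["standing"] * 30 + ["walking"] * 30
print(extract_gait_features(keypoints, labels))
```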
  • Patent number: 10691925
    Abstract: Embodiments described herein provide various examples of a real-time face-detection, face-tracking, and face-pose-selection subsystem within an embedded vision system. In one aspect, a process for identifying near-duplicate-face images using this subsystem is disclosed. This process includes the steps of: receiving a determined best-pose-face image associated with a tracked face when the tracked face is determined to be lost; extracting an image feature from the best-pose-face image; computing a set of similarity values between the extracted image feature and each of a set of stored image features in a feature buffer, wherein the set of stored image features are extracted from a set of previously transmitted best-pose-face images; determining if any of the computed similarity values is above a predetermined threshold; and if no computed similarity value is above the predetermined threshold, transmitting the best-pose-face image to a server and storing the extracted image feature into the feature buffer.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: June 23, 2020
    Assignee: AltumView Systems Inc.
    Inventors: Him Wai Ng, Xing Wang, Yu Gao, Rui Ma, Ye Lu
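The feature-buffer deduplication logic amounts to a similarity check against previously transmitted features; a minimal sketch is below, with cosine similarity and a 0.85 threshold as illustrative choices.

```python
import numpy as np

class NearDuplicateFilter:
    """Keep a buffer of features from best-pose face images already sent to the
    server and only transmit a new best-pose image when it is not a near
    duplicate of anything in the buffer."""
    def __init__(self, threshold=0.85):
        self.threshold = threshold
        self.feature_buffer = []               # features of transmitted images

    def should_transmit(self, feature):
        feature = feature / (np.linalg.norm(feature) + 1e-12)
        for stored in self.feature_buffer:
            if float(feature @ stored) > self.threshold:
                return False                   # near duplicate: skip transmission
        self.feature_buffer.append(feature)    # new face: remember and transmit
        return True

rng = np.random.default_rng(0)
face_a = rng.standard_normal(256)
face_a_again = face_a + 0.05 * rng.standard_normal(256)   # almost the same image
face_b = rng.standard_normal(256)

dedup = NearDuplicateFilter()
print(dedup.should_transmit(face_a))        # True  (first time seen)
print(dedup.should_transmit(face_a_again))  # False (near duplicate)
print(dedup.should_transmit(face_b))        # True  (different face)
```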
  • Patent number: 10558908
    Abstract: Embodiments described herein provide various examples of an age and gender estimation system capable of performing age and gender classifications on face images having sizes greater than the maximum number of input pixels supported by a given small-scale hardware convolutional neural network (CNN) module. In some embodiments, the proposed age and gender estimation system can first divide a high-resolution input face image into a set of image patches with judiciously designed overlaps among neighbouring patches. Each of the image patches can then be processed with a small-scale CNN module, such as the built-in CNN module in Hi3519 SoC. The outputs corresponding to the set of image patches can be subsequently merged to obtain the output corresponding to the input face image, and the merged output can be further processed by subsequent layers in the age and gender estimation system to generate age and gender classifications for the input face image.
    Type: Grant
    Filed: October 3, 2017
    Date of Patent: February 11, 2020
    Assignee: AltumView Systems Inc.
    Inventors: Xing Wang, Mehdi Seyfi, Minghua Chen, Him Wai Ng, Jiannan Zheng, Jie Liang
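The patch-division idea can be sketched as follows, assuming a hypothetical 46x46 maximum CNN input and a 38-pixel stride; these numbers are illustrative, not the values actually used with the Hi3519 built-in CNN module.

```python
import numpy as np

def split_into_patches(image, patch=46, stride=38):
    """Divide a face image that exceeds the CNN's maximum input size into
    overlapping patches (overlap = patch - stride), keeping the positions so
    the per-patch outputs can be merged later."""
    h, w = image.shape[:2]
    if h < patch or w < patch:
        raise ValueError("image already fits the CNN input size")
    ys = list(range(0, h - patch + 1, stride))
    xs = list(range(0, w - patch + 1, stride))
    if ys[-1] != h - patch:          # ensure the bottom and right borders are covered
        ys.append(h - patch)
    if xs[-1] != w - patch:
        xs.append(w - patch)
    patches = [image[y:y + patch, x:x + patch] for y in ys for x in xs]
    positions = [(y, x) for y in ys for x in xs]
    return patches, positions

# A 128x128 face image against a (hypothetical) 46x46 hardware CNN input limit.
face = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
patches, positions = split_into_patches(face)
print(len(patches), patches[0].shape)   # 16 patches, each (46, 46, 3)
```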
  • Patent number: 10510157
    Abstract: Embodiments described herein provide various examples of a real-time face-detection, face-tracking, and face-pose-selection subsystem within an embedded video system. In one aspect, a process for performing real-time face-pose-estimation and best-pose selection for a detected person captured in a video is disclosed. This process includes the steps of: receiving a video image among a sequence of video frames of a video; performing a face detection operation on the video image to detect a set of faces in the video image; detecting that a new person appears in the video based on the set of detected faces; tracking the new person through subsequent video images in the video by detecting a sequence of face images of the new person in the subsequent video images; and for each of the subsequent video images which contains a detected face of the new person being tracked: estimating a pose associated with the detected face and updating a best pose for the new person based on the estimated pose.
    Type: Grant
    Filed: October 28, 2017
    Date of Patent: December 17, 2019
    Assignee: AltumView Systems Inc.
    Inventors: Mehdi Seyfi, Xing Wang, Minghua Chen, Kaichao Wang, Weiming Wang, Him Wai Ng, Jiannan Zheng, Jie Liang
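Best-pose bookkeeping during tracking might look like the sketch below. The frontal-pose scoring rule and the data structures are assumptions; the patent only requires updating a best pose per tracked person from each estimated pose.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrackedPerson:
    """Best-pose bookkeeping for one person tracked across video frames."""
    track_id: int
    best_score: float = float("-inf")
    best_face: Optional[object] = None        # the face crop with the best pose so far

def pose_score(yaw, pitch, roll):
    """Score a head pose: closer to frontal (all angles near zero) is better.
    This simple scoring rule is an assumption, not the patented criterion."""
    return -(abs(yaw) + abs(pitch) + abs(roll))

def update_best_pose(person, face_crop, yaw, pitch, roll):
    """Update the running best pose for the tracked person from a new detection."""
    score = pose_score(yaw, pitch, roll)
    if score > person.best_score:
        person.best_score, person.best_face = score, face_crop
    return person

person = TrackedPerson(track_id=1)
poses = [(40, 5, 0), (12, -3, 2), (2, 1, 0), (25, 0, 0)]
for frame_idx, (yaw, pitch, roll) in enumerate(poses):
    update_best_pose(person, f"frame_{frame_idx}", yaw, pitch, roll)
print(person.best_face)   # frame_2: the most frontal face seen for this track
```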
  • Patent number: 10467458
    Abstract: Embodiments described herein provide various examples of a joint face-detection and head-pose-angle-estimation system based on using a small-scale hardware CNN module such as the built-in CNN module in HiSilicon Hi3519 system-on-chip. In some embodiments, the disclosed joint face-detection and head-pose-angle-estimation system is configured to jointly perform multiple tasks of detecting most or all faces in a sequence of video frames, generating pose-angle estimations for the detected faces, tracking detected faces of a same person across the sequence of video frames, and generating “best-pose” estimation for the person being tracked. The disclosed joint face-detection and pose-angle-estimation system can be implemented on resource-limited embedded systems such as smart camera systems that are only integrated with one or more small-scale CNN modules.
    Type: Grant
    Filed: October 20, 2017
    Date of Patent: November 5, 2019
    Assignee: AltumView Systems Inc.
    Inventors: Xing Wang, Mehdi Seyfi, Minghua Chen, Him Wai Ng, Jie Liang
  • Publication number: 20190304120
    Abstract: Embodiments described herein provide various examples of an automatic obstacle avoidance system for unmanned vehicles using embedded stereo vision techniques. In one aspect, a UAV capable of performing autonomous obstacle detection and avoidance is disclosed. This UAV includes: a stereo vision camera set coupled to the one or more processors and the memory to capture a sequence of stereo images; and a stereo vision module configured to: receive a pair of stereo images captured by a pair of stereo vision cameras; perform a border cropping operation on the pair of stereo images to obtain a pair of cropped stereo images; perform a subsampling operation on the pair of cropped stereo images to obtain a pair of subsampled stereo images; and perform a dense stereo matching operation on the pair of subsampled stereo images to generate a dense three-dimensional (3D) point map of a space corresponding to the pair of stereo images.
    Type: Application
    Filed: April 3, 2018
    Publication date: October 3, 2019
    Applicant: AltumView Systems Inc.
    Inventors: Rui Ma, Chao Shen, Yu Gao, Ye Lu, Minghua Chen, Jie Liang, Jianbing Wu
  • Publication number: 20190304105
    Abstract: Embodiments described herein provide examples of a real-time visual object tracking system. In one aspect, a UAV capable of performing real-time visual tracking of a moving object includes a processor; a memory coupled to the processor; and a camera coupled to the processor and the memory to capture a video of the moving object. This UAV additionally includes a visual tracking module to: receive a first video image and a first location of the object; receive a second video image following the first video image; place a first search window in the first video image and a second search window in the second video image at the same location as the first search window; compute a correlation between an image patch within the first search window and an image patch within the second search window; and determine an updated location of the object in the second video image.
    Type: Application
    Filed: April 3, 2018
    Publication date: October 3, 2019
    Applicant: AltumView Systems Inc.
    Inventors: Yu Gao, Xing Wang, Eric Honsch, Rui Ma, Chao Shen, Minghua Chen, Ye Lu, Jie Liang, Jianbing Wu
  • Patent number: 10360494
    Abstract: Embodiments of a convolutional neural network (CNN) system based on using resolution-limited small-scale CNN modules are disclosed. In some embodiments, a CNN system includes: a receiving module for receiving an input image of a first image size, the receiving module can be used to partition the input image into a set of subimages of a second image size; a first processing stage that includes a first hardware CNN module configured with a maximum input image size, the first hardware CNN module is configured to sequentially receive the set of subimages and sequentially process the received subimages to generate a set of outputs; a merging module for merging the sets of outputs into a set of merged feature maps; and a second processing stage for receiving the set of feature maps and processing the set of feature maps to generate an output including at least one prediction on the input image.
    Type: Grant
    Filed: February 23, 2017
    Date of Patent: July 23, 2019
    Assignee: AltumView Systems Inc.
    Inventors: Xing Wang, Him Wai Ng, Jie Liang
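A toy sketch of the partition-process-merge structure described in this abstract, with a stub standing in for the resolution-limited hardware CNN module. The 64-pixel input limit, the non-overlapping partitioning, and the trivial second stage are simplifications for illustration only.

```python
import numpy as np

MAX_CNN_INPUT = 64        # hypothetical maximum input size of the small hardware CNN

def small_cnn(subimage):
    """Stand-in for the resolution-limited hardware CNN module: it downsamples
    the subimage by 4 and produces an 8-channel feature map."""
    h, w = subimage.shape[:2]
    return np.random.rand(h // 4, w // 4, 8)

def run_two_stage_cnn(image):
    """Partition the input image into subimages that fit the hardware CNN,
    process them sequentially, merge the per-subimage outputs back into a full
    feature map, and run a (stub) second processing stage on the merged map."""
    h, w = image.shape[:2]
    rows, cols = h // MAX_CNN_INPUT, w // MAX_CNN_INPUT
    merged = np.zeros((rows * MAX_CNN_INPUT // 4, cols * MAX_CNN_INPUT // 4, 8))
    for r in range(rows):
        for c in range(cols):
            sub = image[r * MAX_CNN_INPUT:(r + 1) * MAX_CNN_INPUT,
                        c * MAX_CNN_INPUT:(c + 1) * MAX_CNN_INPUT]
            out = small_cnn(sub)                       # sequential processing
            merged[r * out.shape[0]:(r + 1) * out.shape[0],
                   c * out.shape[1]:(c + 1) * out.shape[1]] = out
    # Second processing stage: here just a global average standing in for a prediction.
    prediction = float(merged.mean())
    return merged, prediction

image = np.random.rand(256, 256, 3)                    # larger than the CNN input limit
feature_map, prediction = run_two_stage_cnn(image)
print(feature_map.shape, prediction)                   # (64, 64, 8) merged feature map
```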
  • Publication number: 20190205620
    Abstract: Embodiments described herein provide various examples of a face-image training data preparation system for performing large-scale face-image training data acquisition, pre-processing, cleaning, balancing, and post-processing. The disclosed training data preparation system can collect a very large set of loosely-labeled images of different people from the public domain, and then generate a raw training dataset including a set of incorrectly-labeled face images. The disclosed training data preparation system can then perform cleaning and balancing operations on the raw training dataset to generate a high-quality face-image training dataset free of the incorrectly-labeled face images. The processed high-quality face-image training dataset can be subsequently used to train deep-neural-network-based face recognition systems to achieve high performance in various face recognition applications.
    Type: Application
    Filed: December 31, 2017
    Publication date: July 4, 2019
    Applicant: AltumView Systems Inc.
    Inventors: Zili Yi, Xing Wang, Him Wai Ng, Sami Ma, Jie Liang
  • Publication number: 20190130167
    Abstract: Embodiments described herein provide various examples of a real-time face-detection, face-tracking, and face-pose-selection subsystem within an embedded vision system. In one aspect, a process for identifying near-duplicate-face images using this subsystem is disclosed. This process includes the steps of: receiving a determined best-pose-face image associated with a tracked face when the tracked face is determined to be lost; extracting an image feature from the best-pose-face image; computing a set of similarity values between the extracted image feature and each of a set of stored image features in a feature buffer, wherein the set of stored image features are extracted from a set of previously transmitted best-pose-face images; determining if any of the computed similarity values is above a predetermined threshold; and if no computed similarity value is above the predetermined threshold, transmitting the best-pose-face image to a server and storing the extracted image feature into the feature buffer.
    Type: Application
    Filed: April 3, 2018
    Publication date: May 2, 2019
    Applicant: AltumView Systems Inc.
    Inventors: Him Wai Ng, Xing Wang, Yu Gao, Rui Ma, Ye Lu