Patents Assigned to ULSee Inc.
  • Publication number: 20220188544
    Abstract: A grass detection device is provided in the present invention. The grass detection device includes a camera drone and an image processing unit. The camera drone shoots an area to obtain aerial image data. The image processing unit is configured to perform binarization operations on the aerial image data to obtain grass-ground binarization image data, and then compare the aerial image data with the grass-ground binarization image data, marking the part of the aerial image data that belongs to the grass ground to obtain grass detection image data.
    Type: Application
    Filed: December 10, 2020
    Publication date: June 16, 2022
    Applicant: ULSee Inc.
    Inventor: Yi-Ta WU
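The binarize-then-compare pipeline in the abstract can be sketched as follows. This is a minimal illustration, not the patented method: the actual binarization criterion is not disclosed, so the greenness test and the magenta marker color here are assumptions.

```python
def grass_binarize(pixels, threshold=1.2):
    """Binarize aerial RGB pixels: mark a pixel as grass when its green
    channel dominates red and blue by `threshold` (an assumed heuristic;
    the patent does not disclose the exact binarization criterion)."""
    mask = []
    for r, g, b in pixels:
        is_grass = g > threshold * max(r, b, 1)
        mask.append(1 if is_grass else 0)
    return mask

def mark_grass(pixels, mask, mark=(255, 0, 255)):
    """Compare the aerial image with the binarization mask and replace
    grass pixels with a marker color to produce the detection image."""
    return [mark if m else px for px, m in zip(pixels, mask)]
```

Here `pixels` is a flat list of `(r, g, b)` tuples; a real system would operate on full image arrays.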
  • Publication number: 20220189033
    Abstract: A boundary detection device is provided in the present invention. The boundary detection device includes a camera drone and an image processing unit. The camera drone shoots a region to obtain aerial image data. The image processing unit is configured to convert the aerial image data from the RGB color space to the XYZ color space, then convert it from the XYZ color space to the Lab color space to obtain Lab color image data, and then compute brightness feature data and color feature data from the Lab color image data. The image processing unit selects first through eighth circular masks, each circular mask having a boundary line that divides the mask region into left and right semicircles of different colors.
    Type: Application
    Filed: December 10, 2020
    Publication date: June 16, 2022
    Applicant: ULSee Inc.
    Inventor: Yi-Ta WU
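The abstract's RGB → XYZ → Lab conversion chain is standard colorimetry and can be sketched with the usual sRGB/D65 formulas (the patent may use different matrices or white points):

```python
def srgb_to_lab(r, g, b):
    """Convert an 8-bit sRGB pixel to CIE Lab via XYZ, as the abstract
    describes. Standard sRGB/D65 constants are assumed."""
    def lin(c):  # inverse sRGB gamma
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (sRGB D65 matrix)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # XYZ -> Lab, normalized by the D65 white point
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    L = 116 * fy - 16          # brightness feature
    a = 500 * (fx - fy)        # green-red color feature
    b_ = 200 * (fy - fz)       # blue-yellow color feature
    return L, a, b_
```

The L channel supplies the brightness feature data and the a/b channels the color feature data that the circular masks then operate on.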
  • Patent number: 11176394
    Abstract: A face image processing method is provided. The face image processing method comprises: extracting a plurality of features from a primary two-dimensional face image to generate a first feature vector; decomposing a three-dimensional face image into a plurality of base face images and a plurality of weighting factors corresponding to the base face images; generating a first two-dimensional code according to the first feature vector; and generating a second two-dimensional code according to the weighting factors.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: November 16, 2021
    Assignee: ULSee Inc.
    Inventor: Zhou Ye
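The decomposition of a face into weighted base face images can be illustrated as below. For simplicity the sketch assumes the base images are orthonormal, which the patent does not state; a real implementation would solve a least-squares fit.

```python
def decompose(face, bases):
    """Project `face` (a flat list of pixel values) onto each base face
    image to obtain the weighting factors. Assumes orthonormal bases."""
    return [sum(f * b for f, b in zip(face, basis)) for basis in bases]

def reconstruct(bases, weights):
    """Rebuild a face image as the weighted sum of the base face images."""
    out = [0.0] * len(bases[0])
    for w, basis in zip(weights, bases):
        for i, v in enumerate(basis):
            out[i] += w * v
    return out
```

The resulting weight vector is what the method encodes into the second two-dimensional code.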
  • Patent number: 10817715
    Abstract: A number-of-people detection system includes at least one binocular camera and a computing circuit. A lens of the binocular camera is configured to capture at least one image of a target area. The computing circuit is electrically connected to the binocular camera, wherein the binocular camera is adapted to transmit the captured image to the computing circuit for analysis. When the captured image of the binocular camera shows at least one human body in the target area, the computing circuit analyzes the captured image and calculates a distance from the human body to the binocular camera by using a binocular vision method to determine a three-dimensional world coordinate relationship between the human body and the target area, so as to determine the number of people located in the target area.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: October 27, 2020
    Assignee: ULSee Inc.
    Inventor: Yu-Hao Song
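The binocular-vision distance calculation rests on the classic stereo relation Z = f·B/d. A minimal sketch, with an assumed near/far zone model standing in for "located in the target area":

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic binocular vision relation: depth Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: distance between the
    two lenses; disparity_px: horizontal pixel shift of the same body
    between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def count_people(disparities, focal_px, baseline_m, zone_depth_m):
    """Count detected bodies whose computed depth places them inside the
    target area. zone_depth_m = (near, far) is an assumed zone model; the
    patent uses a full 3D world-coordinate relationship."""
    near, far = zone_depth_m
    return sum(
        1 for d in disparities
        if near <= depth_from_disparity(focal_px, baseline_m, d) <= far
    )
```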
  • Patent number: 10794764
    Abstract: A double-channel miniaturized Raman spectrometer includes a sequentially connected near-infrared laser diode or near-ultraviolet laser emitter, a collimated laser beam expander, a first beam splitter that retards laser light but transmits laser and Raman light, a cylindrical or spherical objective lens with or without zooming, a second beam splitter that retards laser light but transmits Raman light, a relay optical system, a slit, two spectral lenses, a plurality of line-array or matrix-array CCD or CMOS detectors, a GPS, and a data processing and wireless transceiver system. After the laser channel photographs a target and aligns the optical axis, and the Raman channel measures the sample, the data is wirelessly sent to a cell phone and a cloud computer for spectrum separation, peak searching, spectral library establishment, material identification, and the like, in order to reach a quick conclusion.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: October 6, 2020
    Assignee: ULSee Inc.
    Inventor: Evan Y. W. Zhang
  • Patent number: 10795145
    Abstract: Infrared and night vision optical fusion systems are provided. The first scheme is to add a common-aperture beam splitter in front of the night vision device, which is a band-pass filter having a high transmission for the light with wavelength of 0.78-1 µm, and a high reflectivity for the visible light with wavelength of 0.38-0.78 µm and for the infrared light with wavelength of 8-14 µm. After electrical processing, a target image with a temperature higher or lower than a certain threshold is obtained on the LCD/OLED. The second scheme is to align the night vision objective lens and the infrared objective lens having the same field of view side by side. Since only infrared targets having a temperature above or below a certain threshold are used, white or red humans, animals and vehicles can be clearly seen in a green night vision background with high contrast no matter which scheme is adopted.
    Type: Grant
    Filed: October 25, 2017
    Date of Patent: October 6, 2020
    Assignee: ULSee Inc.
    Inventor: Evan Y. W. Zhang
  • Patent number: 10715787
    Abstract: A depth imaging system and a control method thereof are provided. The depth imaging system includes a first imaging device, a second imaging device, a sliding base, a detecting module, an estimating module, a calculating module, and a control module. The first imaging device and the second imaging device are mounted on the sliding base. The detecting module detects a target region in the first image and the second image. The estimating module estimates an initial depth of the target region. The calculating module calculates a baseline corresponding to the initial depth. The control module controls the sliding base to adjust a relative distance between the first imaging device and the second imaging device. The calculating module generates an adjusted baseline according to the adjusted relative distance between the first imaging device and the second imaging device, such that the adjusted baseline is closer to the calculated baseline.
    Type: Grant
    Filed: July 18, 2019
    Date of Patent: July 14, 2020
    Assignee: ULSee Inc.
    Inventors: Adam Baumberg, Mark Middlebrook
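The baseline calculation inverts the same stereo relation used for depth: given an estimated initial depth, solve Z = f·B/d for B. The target disparity and the step-limited sliding-base controller below are assumptions for illustration; the patent does not disclose the selection criterion.

```python
def baseline_for_depth(focal_px, depth_m, target_disparity_px):
    """Invert Z = f * B / d: the baseline B needed so a target at the
    estimated depth produces the desired disparity (target_disparity_px
    is an assumed tuning parameter)."""
    return depth_m * target_disparity_px / focal_px

def adjust_sliding_base(current_m, desired_m, max_step_m):
    """Move the cameras toward the calculated baseline, limited to
    max_step_m per control step, mimicking the sliding-base adjustment."""
    step = max(-max_step_m, min(max_step_m, desired_m - current_m))
    return current_m + step
```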
  • Patent number: 10672181
    Abstract: Previously our 3DSOM software has solved the problem of extracting the object of interest from the scene by allowing the user to cut out the object shape in several photographs. This is known as manual “masking”. However, we wish to avoid this step so that, even for an unskilled user, the system is as simple to use as possible. We propose a method of extracting a complete closed model of an object without the user being required to do anything other than capture the shots.
    Type: Grant
    Filed: October 2, 2017
    Date of Patent: June 2, 2020
    Assignee: ULSee Inc.
    Inventor: Adam Michael Baumberg
  • Patent number: 10388022
    Abstract: An image target tracking method and system thereof are provided in the present disclosure. The image target tracking method includes the following steps: obtaining a target initial position, and performing a sparse sampling according to the target initial position; dividing sampling points into foreground sampling points and background sampling points; clustering adjacent foreground sampling points according to a spatial distribution of the foreground sampling points in order to obtain a clustering result containing a plurality of clusters; performing a robust estimation according to the clustering result in order to determine a relative position between a target and a camouflage interference in an image; and generating a prediction trajectory, correlating an observation sample position with the prediction trajectory to generate a correlation result, and determining whether the target is blocked and tracking the target according to the correlation result.
    Type: Grant
    Filed: April 25, 2017
    Date of Patent: August 20, 2019
    Assignee: ULSee Inc.
    Inventor: Jingjing Xiao
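The spatial clustering of foreground sampling points can be sketched with a greedy single-linkage rule (a minimal stand-in; the patent does not specify the clustering algorithm):

```python
def cluster_points(points, radius):
    """Cluster adjacent foreground sampling points by spatial distribution:
    a point joins a cluster when it lies within `radius` of any member,
    and merges any clusters it bridges."""
    clusters = []
    for p in points:
        hits = [c for c in clusters
                if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= radius ** 2
                       for q in c)]
        if not hits:
            clusters.append([p])
        else:
            merged = hits[0]
            for c in hits[1:]:      # merge bridged clusters
                merged.extend(c)
                clusters.remove(c)
            merged.append(p)
    return clusters
```

The resulting clusters are the inputs to the robust estimation step that separates the target from camouflage interference.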
  • Patent number: 10204288
    Abstract: We propose a tracking framework that explicitly encodes both generic features and category-based features. The tracker consists of a shared convolutional network (NetS), which feeds into two parallel networks, NetC for classification and NetT for tracking. NetS is pre-trained on ImageNet to serve as a generic feature extractor across the different object categories for NetC and NetT. NetC utilizes those features within fully connected layers to classify the object category. NetT has multiple branches, corresponding to multiple categories, to distinguish the tracked object from the background. Since each branch in NetT is trained by the videos of a specific category or groups of similar categories, NetT encodes category-based features for tracking. During online tracking, NetC and NetT jointly determine the target regions with the right category and foreground labels for target estimation.
    Type: Grant
    Filed: April 13, 2017
    Date of Patent: February 12, 2019
    Assignee: ULSee Inc.
    Inventor: Jingjing Xiao
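The NetS/NetC/NetT data flow can be shown structurally. The stand-ins below are trivial hand-made functions, not real convolutional networks; they only illustrate how a shared feature feeds a classifier that selects a category-specific tracking branch:

```python
def net_s(image):
    """Stand-in for the shared feature extractor NetS (here a toy
    two-number 'feature', not a pre-trained ConvNet)."""
    return [sum(image) / len(image), max(image) - min(image)]

def net_c(features, class_prototypes):
    """Stand-in for NetC: classify by nearest prototype feature
    (assumed classifier; the real NetC uses fully connected layers)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(class_prototypes, key=lambda c: dist(features, class_prototypes[c]))

def net_t(features, branch_weights, category):
    """Stand-in for NetT: route to the branch of the classified category
    and score foreground vs background (assumed linear scorer)."""
    w = branch_weights[category]
    return sum(x * y for x, y in zip(features, w))
```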
  • Patent number: 10140727
    Abstract: An image target tracking method, device, and system thereof are provided in the present disclosure. The image target relative position determining method includes the following steps: obtaining a target initial position, and performing a sparse sampling according to the target initial position; dividing sampling points into foreground sampling points and background sampling points; clustering adjacent foreground sampling points according to a spatial distribution of the foreground sampling points in order to obtain a clustering result containing a plurality of clusters; and performing a robust estimation according to the clustering result in order to determine a relative position between a target and a camouflage interference in an image.
    Type: Grant
    Filed: May 10, 2017
    Date of Patent: November 27, 2018
    Assignee: ULSee Inc.
    Inventor: Jingjing Xiao
  • Patent number: 10121227
    Abstract: A method of reconstructing videos by using super-resolution algorithm includes the steps: (a) providing a video, wherein the video is composed of a plurality of frames having a sequence; (b) starting a first thread and a second thread, the first thread performing a first algorithm and the second thread performing a second algorithm for improving resolution, wherein a time complexity of the first algorithm is greater than a time complexity of the second algorithm; (c) the first thread sequentially reading the frames of the video in units of a first interval and processing the frames in order to obtain first processed frames, and the second thread sequentially reading the frames of the video in units of a second interval and processing the frames in order to obtain second processed frames, wherein a value of the first interval is an integer greater than 1, and a value of the second interval is 1; (d) performing a fusion operation on the second processed frames which are processed by the second thread and the nea
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: November 6, 2018
    Assignee: ULSee Inc.
    Inventor: Yang Liu
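The two-interval schedule in steps (b)-(d) can be sketched sequentially. The patent runs the two passes on separate threads; the averaging fusion and the callables `slow_sr`/`fast_sr` are assumptions for illustration.

```python
def process_video(frames, slow_sr, fast_sr, k):
    """Sketch of the two-algorithm schedule: the 'first thread' runs the
    expensive algorithm slow_sr on every k-th frame (k > 1); the 'second
    thread' runs the cheap fast_sr on every frame; frames that have both
    results are fused (here by averaging, an assumed fusion)."""
    slow = {i: slow_sr(f) for i, f in enumerate(frames) if i % k == 0}
    fast = [fast_sr(f) for f in frames]
    out = []
    for i, f in enumerate(fast):
        out.append((f + slow[i]) / 2 if i in slow else f)
    return out
```

With frames represented as numbers for brevity; real super-resolution would operate on image arrays.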
  • Patent number: 10115209
    Abstract: An image target tracking method and system thereof are provided in the present disclosure. The image target tracking method includes the following steps: determining a relative position between a target and a camouflage interference in an image; generating a prediction trajectory according to the relative position between the target and the camouflage interference in the image; and correlating an observation sample position with the prediction trajectory to generate a correlation result, and determining whether the target is blocked and tracking the target according to the correlation result. Throughout the process, the prediction trajectory is generated based on the determined relative position between the target and the camouflage interference, and the prediction trajectory is correlated to determine whether the target is blocked and to accurately track the target.
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: October 30, 2018
    Assignee: ULSee Inc.
    Inventor: Jingjing Xiao
  • Patent number: 9971933
    Abstract: A facial image screening method is provided in the present disclosure. The facial image screening method includes the following steps: tracking a plurality of feature points of at least one facial image; analyzing each feature point to acquire region information corresponding to each feature point, wherein the region information comprises image quality information, face pose information, and blocking degree information; scoring the image quality information to obtain a first arbitration score; scoring the face pose information to obtain a second arbitration score; scoring the blocking degree information to obtain a third arbitration score; generating a comprehensive quality score according to the first arbitration score, the second arbitration score, and the third arbitration score; and taking the plurality of feature points of the facial image as targets to be compared with captured features of a plurality of specific persons when the comprehensive quality score exceeds a threshold value.
    Type: Grant
    Filed: January 9, 2017
    Date of Patent: May 15, 2018
    Assignee: ULSee Inc.
    Inventor: Bau-Cheng Shen
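The three arbitration scores feed a comprehensive quality score that gates the comparison step. A weighted-sum sketch with assumed weights and threshold (the patent does not disclose the combination rule):

```python
def comprehensive_score(quality, pose, blocking, weights=(0.4, 0.3, 0.3)):
    """Combine the first, second and third arbitration scores into one
    comprehensive quality score (weighted sum; weights are assumed)."""
    wq, wp, wb = weights
    return wq * quality + wp * pose + wb * blocking

def should_compare(quality, pose, blocking, threshold=0.7):
    """Gate: forward the facial image's feature points for comparison
    only when the comprehensive score exceeds the threshold."""
    return comprehensive_score(quality, pose, blocking) > threshold
```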
  • Patent number: 9665984
    Abstract: A method to create a try-on experience of wearing virtual 3D eyeglasses is provided using 2D image data of eyeglasses. The virtual 3D eyeglasses are constructed from a set of 2D images of the eyeglasses, configured onto a 3D face or head model, and simulated as being fittingly worn by the wearer. Each set of 2D images includes a pair of 2D lens images, a frontal frame image, and at least one side frame image. Upon detection of a movement of the wearer's face and head in real time, the 3D face or head model and the configuration and alignment of the virtual 3D eyeglasses are modified or adjusted accordingly. Features such as trimming off a portion of the glasses frame, shadow creation, and environment mapping are applied to the virtual 3D eyeglasses in response to translation, scaling, and posture changes of the wearer's head and face in real time.
    Type: Grant
    Filed: July 31, 2014
    Date of Patent: May 30, 2017
    Assignee: ULSee Inc.
    Inventors: Zhou Ye, Chih-Ming Chang, Ying-Ko Lu, Yi-Chia Hsu
  • Patent number: 9642521
    Abstract: A method for automatically measuring pupillary distance includes: extracting facial features from a face image; displaying a head current-center indicator based on the facial feature extraction; displaying an elliptical frame and a target center indicator; and calculating a first distance between the head current-center indicator and the target center indicator to check whether it falls below a threshold range, then allowing the head current-center indicator, the elliptical frame, and the target center indicator to disappear. A card window based on the facial tracking result is shown, and credit-card band detection is performed to check whether the card is located within the card window; the card window then disappears. An elliptical frame of the moving head and a target elliptical frame are shown, and the elliptical frame of the moving head is aligned with the target elliptical frame while maintaining a correct head posture. If the elliptical frame of the moving head is aligned with the target elliptical frame, both are allowed to disappear from view, and a pupillary distance measurement is performed.
    Type: Grant
    Filed: November 25, 2014
    Date of Patent: May 9, 2017
    Assignee: ULSee Inc.
    Inventors: Zhou Ye, Sheng-Wen Jeng, Ying-Ko Lu, Shih Wei Liu
  • Patent number: 9530067
    Abstract: Method for a wearable device worn by a first user to generate or retrieve a personal contact record when encountering a second user is disclosed. The method of generating the personal contact record includes capturing a facial photograph of the second user and generating a face information; capturing a card photograph of a business card and performing OCR to obtain a card information; retrieving time and location information to obtain an encounter information; and generating the personal contact information. The method of retrieving the personal contact record includes capturing a facial photograph of the second user; searching through a contact database comprising facial images associated with identities of persons, and attempting to match the captured facial photograph with one of the facial images in the contact database to determine the identity of the second user, and providing messages. Wearable devices for performing the above methods are also disclosed.
    Type: Grant
    Filed: November 20, 2013
    Date of Patent: December 27, 2016
    Assignee: ULSee Inc.
    Inventor: Zhou Ye
  • Publication number: 20160035133
    Abstract: A method to create a try-on experience of wearing virtual 3D eyeglasses is provided using 2D image data of eyeglasses. The virtual 3D eyeglasses are constructed from a set of 2D images of the eyeglasses, configured onto a 3D face or head model, and simulated as being fittingly worn by the wearer. Each set of 2D images includes a pair of 2D lens images, a frontal frame image, and at least one side frame image. Upon detection of a movement of the wearer's face and head in real time, the 3D face or head model and the configuration and alignment of the virtual 3D eyeglasses are modified or adjusted accordingly. Features such as trimming off a portion of the glasses frame, shadow creation, and environment mapping are applied to the virtual 3D eyeglasses in response to translation, scaling, and posture changes of the wearer's head and face in real time.
    Type: Application
    Filed: July 31, 2014
    Publication date: February 4, 2016
    Applicant: ULSee Inc.
    Inventors: Zhou Ye, Chih-Ming Chang, Ying-Ko Lu, Yi-Chia Hsu
  • Patent number: 9224248
    Abstract: A method of applying virtual makeup and producing makeover effects on a 3D face model driven by facial tracking in real time includes the steps: capturing static or live facial images of a user; performing facial tracking of the facial image and obtaining tracking points on the captured facial image; and producing makeover effects according to the tracking points in real time. Virtual makeup can be applied using a virtual makeup input tool, such as a user's finger sliding over a touch panel screen, a mouse cursor, or an object passing through the makeup-allowed area. The makeup-allowed area for producing makeover effects is defined by extracting feature points from the facial tracking points, dividing the makeup-allowed area into segments and layers, and defining and storing parameters of the makeup-allowed area. Virtual visual effects including color series, alpha blending, and/or superposition can be applied.
    Type: Grant
    Filed: August 8, 2013
    Date of Patent: December 29, 2015
    Assignee: ULSee Inc.
    Inventors: Zhou Ye, Ying-Ko Lu, Yi-Chia Hsu, Sheng-Wen Jeng, Hsin-Wei Hsiao
  • Patent number: 9182813
    Abstract: An image-based object tracking system includes at least a controller with two or more color clusters, an input button, a processing unit with a camera, an object tracking algorithm, and a display. The camera is configured to capture images of the controller, the processing unit is connected to the display to show processed image contents, and the controller directly interacts with the displayed processed image content. The controller can have two or three color clusters located on a side surface thereof and two color clusters having concentric circular areas located at a top surface thereof; the color of the first color cluster can be the same as or different from the color of the third color cluster. An object tracking method with or without scale calibration is also provided, which includes steps of color learning and relearning, image capturing, separating the controller from the background, and object pairing on the controller.
    Type: Grant
    Filed: November 28, 2013
    Date of Patent: November 10, 2015
    Assignee: ULSee Inc.
    Inventors: Zhou Ye, Sheng-Wen Jeng, Chih-Ming Chang, Hsin-Wei Hsiao, Yi-Chia Hsu, Ying-Ko Lu