Abstract: Embodiments disclose apparatus, methods, and software for performing biological screening and analysis implemented using an instrument platform capable of detecting a wide variety of cell-based secretions, expressed proteins, and other cellular components. The platform may be configured for simultaneous multiplexed detection of a plurality of biological components such that a large number of discrete samples may be individually sequestered and evaluated to detect or identify constituents from the samples in a highly parallelized and scalable manner.
Type:
Grant
Filed:
June 10, 2021
Date of Patent:
May 30, 2023
Assignee:
IsoPlexis Corporation
Inventors:
Patrick Paczkowski, Sean Mackay, Sean McCusker
Abstract: Techniques for distinguishing objects (e.g., an individual or an individual pushing a shopping cart) are disclosed. An object is detected in images of a scene. A height map is generated from the images, and the object is represented as height values in the height map. Based on height properties associated with another object, it is determined whether the other object is associated with the object. If so determined, the objects are classified separately.
Type:
Grant
Filed:
January 15, 2021
Date of Patent:
May 23, 2023
Assignee:
Sensormatic Electronics, LLC
Inventors:
Zhiqian Wang, Edward A. Marcheselli, Gary Dispensa, Thomas D. Stemen, William C. Kastilahn
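The height-map classification described in the abstract above can be illustrated with a toy sketch: a person and a pushed shopping cart occupy different height bands, so cells of a height map can be partitioned by height. The thresholds and labels here are invented for illustration, not taken from the patent.

```python
# Hypothetical height-based separation: classify height-map cells (metres)
# as 'person', 'cart', or background. Thresholds are illustrative only.

def split_by_height(height_map, person_min=1.4, cart_max=1.2):
    labels = []
    for row in height_map:
        labels.append([
            "person" if h >= person_min
            else ("cart" if 0 < h <= cart_max else "bg")
            for h in row
        ])
    return labels

heights = [
    [0.0, 1.7, 1.8, 0.0],   # head/shoulders of a person
    [0.0, 1.0, 1.1, 1.0],   # handle/basket of a shopping cart
]
labels = split_by_height(heights)
print(labels[0][1], labels[1][2])  # person cart
```

A real system would first segment connected regions and then apply height statistics per region; the per-cell rule above is the simplest reduction of that idea.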
Abstract: A mobile device can generate real-time complex visual image effects using an asynchronous processing pipeline. A first pipeline applies a complex image process, such as a neural network, to keyframes of a live image sequence. A second pipeline generates flow maps that describe feature transformations in the image sequence. The flow maps can be used to process non-keyframes on the fly. The processed keyframes and non-keyframes can be used to display a complex visual effect on the mobile device in real-time or near real-time.
Type:
Grant
Filed:
January 22, 2021
Date of Patent:
May 9, 2023
Assignee:
Snap Inc.
Inventors:
Samuel Edward Hare, Fedir Poliakov, Guohui Wang, Xuehan Xiong, Jianchao Yang, Linjie Yang, Shah Tanmay Anilkumar
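The two-pipeline idea above can be sketched minimally: an expensive effect runs only on keyframes, while cheap per-frame "flow" shifts propagate the styled keyframe to in-between frames. The 1-D frames, the inversion "effect", and the integer circular shifts are illustrative stand-ins for real images, neural processing, and flow-map warping.

```python
# Sketch of keyframe/non-keyframe asynchronous processing. All operations
# are toy placeholders; only the scheduling structure mirrors the abstract.

def expensive_effect(frame):
    # Placeholder for a slow neural pass: invert intensities.
    return [255 - p for p in frame]

def warp(frame, shift):
    # Placeholder for flow-map warping: a circular shift.
    return frame[-shift:] + frame[:-shift] if shift else frame

def render(frames, key_interval=3):
    out, styled_key = [], None
    for i, frame in enumerate(frames):
        if i % key_interval == 0:
            styled_key = expensive_effect(frame)            # slow pipeline
            out.append(styled_key)
        else:
            out.append(warp(styled_key, i % key_interval))  # fast pipeline
    return out
```

In a real implementation the slow pipeline runs asynchronously and the fast pipeline fills in frames until the next styled keyframe arrives.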
Abstract: Provided are an image difference-based method and system for tracking a transparent object. First, a convolutional neural network is trained on sample data to generate a transparent object detection model. Visible light image data and infrared thermal image data, both acquired in real time, are then input to the transparent object detection model to acquire a visible light image transparent pixel identifier and an infrared thermal image transparent pixel identifier. Three-dimensional point cloud information is then calculated for each pixel marked by the infrared thermal image transparent pixel identifier in the infrared thermal image data, where the point cloud information is in the coordinate system of the infrared thermal imaging camera, i.e., the camera that acquires the infrared thermal image data. According to the three-dimensional point cloud information, corresponding position coordinates of each pixel are thereby acquired …
Abstract: A system for updating training data includes an interface and a processor. The interface is configured to receive a set of vehicle data. The set of vehicle data includes images and assigned labels associated with the images. The processor is configured to determine a set of training data and a set of test data from the set of vehicle data; train a model with the set of training data; determine a set of predicted labels for the set of vehicle data using the model; identify a set of potential mislabeled data using the set of predicted labels and the assigned labels; and determine an updated set of training data by relabeling the set of potential mislabeled data and replacing the set of potential mislabeled data with a relabeled set of data.
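The relabeling loop above (train, predict, flag disagreements, replace) can be sketched with a trivially simple stand-in "model": a per-feature majority vote. The data, the model, and the relabel-with-prediction policy are illustrative assumptions, not the patent's method.

```python
# Sketch of training-data cleanup: flag samples whose assigned label
# disagrees with the model's prediction, then replace those labels.
from collections import Counter

def train(samples):
    # Toy "model": the most common label seen for each feature value.
    votes = {}
    for feature, label in samples:
        votes.setdefault(feature, Counter())[label] += 1
    return {f: c.most_common(1)[0][0] for f, c in votes.items()}

def update_training_data(samples):
    model = train(samples)
    predicted = [model[f] for f, _ in samples]
    # Identify potential mislabels: assigned label != predicted label.
    flagged = [i for i, ((_, l), p) in enumerate(zip(samples, predicted)) if l != p]
    flagged_set = set(flagged)
    # Replace flagged labels with the relabeled (predicted) ones.
    cleaned = [(f, predicted[i]) if i in flagged_set else (f, l)
               for i, (f, l) in enumerate(samples)]
    return cleaned, flagged

data = [("red", "stop"), ("red", "stop"), ("red", "go")]
cleaned, flagged = update_training_data(data)
print(flagged)      # [2]
print(cleaned[2])   # ('red', 'stop')
```

In practice the relabeling step would involve human review or a stronger model rather than trusting the same model's predictions outright.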
Abstract: A simultaneous localization and mapping device is provided. The device includes an image obtaining device configured to capture color images and depth images of a surrounding environment; an initial pose estimating device configured to estimate an initial pose based on the color images and the depth images; a map constructing device configured to construct a three-dimensional map based on the depth images and the color images; and a pose determining device configured to determine a final pose based on the initial pose and the three-dimensional map.
Type:
Grant
Filed:
November 10, 2020
Date of Patent:
April 25, 2023
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Tianhao Gao, Xiaolong Shi, Xiongfeng Peng, Kuan Ma, Hongseok Lee, Myungjae Jeon, Qiang Wang, Yuntae Kim, Zhihua Liu
Abstract: A landing tracking control method comprises a tracking model training stage and a real-time unmanned aerial vehicle tracking stage. The method uses and modifies the lightweight feature extraction network SNet, so that feature extraction is faster and better meets real-time requirements. Channel information is weighted according to its importance, so that effective features are differentiated and exploited more purposefully and tracking precision is improved. To improve the training of the network, the loss function of the RPN is optimized: the regression precision of the target frame is measured using CIoU, the calculation of the classification loss function is adjusted according to CIoU, and the relation between the regression network and the classification network is enhanced.
Abstract: A localization and tracking method, a localization and tracking platform, a head-mounted display system, and a computer-readable storage medium are provided. Odd-frame images and even-frame images, collected with a preset first exposure duration and a preset second exposure duration respectively, are acquired by one or more tracking cameras arranged on a head-mounted display device; the even-frame images contain at least blobs corresponding to multiple luminous bodies arranged on a gamepad. Degree of Freedom (DoF) information of the head-mounted display device is determined according to the odd-frame images and attitude information of the head-mounted display device, and DoF information of the gamepad is determined according to the even-frame images, attitude information of the gamepad, and the DoF information of the head-mounted display device.
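The dual-exposure capture scheme above amounts to routing a single camera stream into two processing paths by frame parity: odd frames (one exposure) feed headset tracking, even frames (the other exposure) feed controller-LED blob tracking. The sketch below shows only that routing; the downstream processing is stubbed out.

```python
# Sketch of parity-based frame routing for the odd/even exposure scheme.

def route_frames(frames):
    """Split a 1-indexed frame stream into odd-frame and even-frame groups."""
    headset, gamepad = [], []
    for idx, frame in enumerate(frames, start=1):
        (headset if idx % 2 else gamepad).append(frame)
    return headset, gamepad

h, g = route_frames(["f1", "f2", "f3", "f4", "f5"])
print(h)  # ['f1', 'f3', 'f5']
print(g)  # ['f2', 'f4']
```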
Abstract: A method may include obtaining an image including a face. The method may further include determining at least one time domain feature related to the face in the image and at least one frequency domain feature related to the face in the image. The method may further include evaluating the quality of the image based on the at least one time domain feature and the at least one frequency domain feature.
Type:
Grant
Filed:
September 8, 2021
Date of Patent:
March 28, 2023
Assignee:
ZHEJIANG DAHUA TECHNOLOGY CO., LTD.
Inventors:
Siyu Guo, Haiyang Wang, Jingsong Hao, Gang Wang
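One way to picture combining a time domain feature with a frequency domain feature into a quality score, as the abstract above describes: use local contrast as the time domain feature and high-frequency spectral energy (from a DFT) as the frequency domain feature. The 1-D signal, the half-spectrum cutoff, and the equal weighting are all assumptions for illustration, not the patent's formula.

```python
# Illustrative quality score: 0.5 * (time domain contrast)
#                           + 0.5 * (high-frequency DFT energy).
import cmath

def dft(signal):
    n = len(signal)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(signal)) for k in range(n)]

def quality_score(signal):
    # Time domain: mean absolute neighbour difference (sharpness proxy).
    contrast = sum(abs(a - b) for a, b in zip(signal, signal[1:])) / (len(signal) - 1)
    # Frequency domain: energy in the upper half of the spectrum.
    spectrum = dft(signal)
    hi = sum(abs(c) for c in spectrum[len(spectrum) // 2:])
    return 0.5 * contrast + 0.5 * hi

sharp  = [0, 255, 0, 255, 0, 255, 0, 255]
blurry = [120, 130, 125, 135, 120, 130, 125, 135]
assert quality_score(sharp) > quality_score(blurry)
```

A sharp (high-detail) signal scores higher on both features than a low-contrast, slowly varying one, which is the intuition behind fusing the two domains.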
Abstract: A soil imaging system has a work layer sensor disposed on an agricultural implement to generate an electromagnetic field through a soil area of interest as the implement traverses a field. A monitor in communication with the work layer sensor is adapted to generate a work layer image of the soil layer of interest based on the generated electromagnetic field. The work layer sensor may also generate a reference image by generating an electromagnetic field through undisturbed soil. The monitor may compare at least one characteristic of the reference image with at least one characteristic of the work layer image to generate a characterized image of the work layer of interest. The monitor may display operator feedback and may effect operational control of the agricultural implement based on the characterized image.
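The comparison step above can be sketched simply: form the "characterized image" as the per-cell difference between the work-layer image and the undisturbed-soil reference image, so disturbed regions stand out. Using a plain difference as the comparison operator is an assumption; the patent leaves the characteristic and comparison unspecified.

```python
# Sketch: characterized image = work-layer image minus reference image.

def characterize(work_img, ref_img):
    return [[w - r for w, r in zip(wrow, rrow)]
            for wrow, rrow in zip(work_img, ref_img)]

ref  = [[5, 5], [5, 5]]   # undisturbed soil response
work = [[5, 9], [5, 5]]   # one disturbed cell
print(characterize(work, ref))  # [[0, 4], [0, 0]]
```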
Abstract: A pose tracking method and apparatus are disclosed. The pose tracking method includes obtaining an image of a trackable target having a plurality of markers, detecting first points in the obtained image to which the markers are projected, matching the first points and second points corresponding to positions of the markers in a coordinate system set based on the trackable target based on rotation information of the trackable target, and estimating a pose of the trackable target based on matching pairs of the first points and the second points.
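The matching step in the abstract above can be illustrated: rotate the known 3-D marker positions by the target's reported rotation, project them (here by simply dropping z), and pair each detected 2-D point with its nearest projected marker. The rotation about z, the orthographic projection, and the toy coordinates are illustrative simplifications.

```python
# Sketch of rotation-informed matching of detected points to markers.
import math

def rotate_z(p, angle):
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

def match(detected_2d, markers_3d, angle):
    # Project rotated markers by dropping z (toy orthographic camera).
    projected = [rotate_z(m, angle)[:2] for m in markers_3d]
    pairs = []
    for d in detected_2d:
        j = min(range(len(projected)),
                key=lambda i: (projected[i][0] - d[0]) ** 2
                            + (projected[i][1] - d[1]) ** 2)
        pairs.append((d, j))
    return pairs

markers = [(1, 0, 0), (0, 1, 0)]
detected = [(0.0, 1.0), (-1.0, 0.0)]   # markers after a 90-degree turn
print(match(detected, markers, math.pi / 2))  # [((0.0, 1.0), 0), ((-1.0, 0.0), 1)]
```

With the matched pairs in hand, a real system would feed them to a PnP-style solver to estimate the full pose.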
Abstract: A device includes on-board memory, an applications processor, a digital signal processor (DSP) cluster, a configurable accelerator framework (CAF), and at least one communication bus architecture. The communication bus communicatively couples the applications processor, the DSP cluster, and the CAF to the on-board memory. The CAF includes a reconfigurable stream switch and data volume sculpting circuitry, which has an input and an output coupled to the reconfigurable stream switch. The data volume sculpting circuitry receives a series of frames, each frame formed as a two-dimensional (2D) data structure, and determines a first dimension and a second dimension of each frame of the series of frames. Based on the first and second dimensions, the data volume sculpting circuitry determines for each frame a position and a size of a region-of-interest to be extracted from the respective frame, and extracts from each frame the data that is within the region-of-interest.
Type:
Grant
Filed:
March 5, 2021
Date of Patent:
March 21, 2023
Assignees:
STMICROELECTRONICS S.r.l., STMICROELECTRONICS INTERNATIONAL N.V.
Inventors:
Surinder Pal Singh, Thomas Boesch, Giuseppe Desoli
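The "data volume sculpting" step above reduces to: measure each 2-D frame's dimensions, pick a region of interest, and emit only the ROI data. The centred fixed-size ROI policy below is an assumption; the patent leaves the position/size rule to the circuitry.

```python
# Sketch of per-frame ROI extraction from a stream of 2-D frames.

def sculpt(frames, roi_h, roi_w):
    out = []
    for frame in frames:
        h, w = len(frame), len(frame[0])          # first and second dimensions
        top, left = (h - roi_h) // 2, (w - roi_w) // 2   # centred ROI (assumed)
        out.append([row[left:left + roi_w] for row in frame[top:top + roi_h]])
    return out

frame = [[r * 4 + c for c in range(4)] for r in range(4)]
print(sculpt([frame], 2, 2))  # [[[5, 6], [9, 10]]]
```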
Abstract: A control device is mounted in a first mobile body that includes a camera and an antenna. The control device includes: the camera, configured to operate such that the direction of the camera's optical axis and the oriented direction of the antenna are linked to each other; an identification unit configured to identify a second mobile body through image recognition from an image captured by the operating camera; and an antenna control unit configured to control the oriented direction of the antenna such that the position of the identified second mobile body in the captured image is a predetermined position.
Type:
Grant
Filed:
June 5, 2019
Date of Patent:
March 14, 2023
Assignee:
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Abstract: A method for estimating a displacement sequence of an object. The method includes mounting an optical marker on the object, exciting a plurality of optical sources of the optical marker, capturing a plurality of images, and extracting the displacement sequence from a first image of the plurality of images and a second image of the plurality of images. The plurality of optical sources are excited utilizing one or more processors. The plurality of optical sources are excited based on an excitation pattern. The plurality of images are captured utilizing an imaging device. The displacement sequence is extracted utilizing the one or more processors. The displacement sequence is associated with the excitation pattern.
Type:
Grant
Filed:
October 5, 2020
Date of Patent:
March 14, 2023
Assignee:
INTERNATIONAL INSTITUTE OF EARTHQUAKE ENGINEERING AND SEISMOLOGY
Inventors:
Hossein Jahankhah, Mohammad Ali Goudarzi, Mohammad Mahdi Kabiri
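The displacement-extraction step above can be reduced to a toy example: find the bright-pixel centroid of the optical marker in two binary images and take the difference. A real system would track many frames against the excitation pattern; the two-frame centroid shift here is an illustrative reduction.

```python
# Sketch: displacement = centroid(image B) - centroid(image A).

def centroid(image):
    pts = [(r, c) for r, row in enumerate(image)
                  for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def displacement(img_a, img_b):
    (ra, ca), (rb, cb) = centroid(img_a), centroid(img_b)
    return (rb - ra, cb - ca)

a = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]   # marker at top
b = [[0, 0, 0], [0, 0, 0], [0, 1, 0]]   # marker moved down two rows
print(displacement(a, b))  # (2.0, 0.0)
```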
Abstract: A first 2D recording of a specified reference view of an object is captured by a camera and, starting from the first 2D recording, the user's starting location relative to the object is ascertained by a computer vision module. Starting from the origin of a coordinate system at the camera's starting location, one or more specified and/or settable relative positions in the vicinity of the object and/or in the object are determined as locations for the respective camera perspectives for taking at least one second 2D recording. Each such location is displayed in an object view on the camera's display by a respective first augmented reality marker on the ground and/or on the object. The alignment of the camera, with regard to angle and rotation, with the perspective corresponding to the respective location is guided by second augmented reality markers serving as auxiliary elements.
Abstract: A computer implemented method includes capturing images of an environment via a camera, detecting image features in the environment based on the captured images, the image features including at least one web feature derived from a fiducial web formed of a collection of non-repeating intersecting lines applied to an object in the environment, and estimating a camera pose based on the detected image features including the at least one web feature.
Type:
Grant
Filed:
January 29, 2021
Date of Patent:
March 7, 2023
Assignee:
Microsoft Technology Licensing, LLC
Inventors:
Joseph Michael Degol, Brent Armen Ellwein, Yashar Bahman
Abstract: A wearable or mobile device includes a camera to capture an image of a scene with an unknown object. Execution of programming by a processor configures the device to perform functions, including capturing, via the camera, the image of the scene with the unknown object. To create lightweight human-machine interactions, the programming further configures the device to determine a recognized-object-based adjustment and to produce visible output to the user, via a graphical user interface presented on the device's image display, based on that adjustment. Examples of recognized-object-based adjustments include launching, hiding, or displaying an application for the user to interact with or utilize; displaying a menu of applications related to the recognized object for execution; or enabling or disabling a system-level feature.
Abstract: The present disclosure provides a perception system for a vehicle. The perception system includes a perception filter for determining a region of interest (“ROI”) for the vehicle based on an intent of the vehicle and a current state of the vehicle; and a perception module for perceiving an environment of the vehicle based on the ROI; wherein the vehicle is caused to take appropriate action based on the perceived environment, the current state of the vehicle, and the intent of the vehicle.
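The intent-driven perception filter above can be pictured with a toy rule: the region of interest ahead of the vehicle grows with speed (current state) and shifts laterally with the planned manoeuvre (intent). The intent names, geometry, and speed scaling are invented for illustration and are not from the patent.

```python
# Toy sketch of an intent- and state-driven region of interest.

def region_of_interest(intent, speed_mps):
    length = 10 + 2 * speed_mps          # look further ahead when faster
    lateral = {"go_straight": (-2, 2),   # metres left/right of centreline
               "turn_left":   (-6, 2),
               "turn_right":  (-2, 6)}[intent]
    return {"ahead_m": length, "lateral_m": lateral}

print(region_of_interest("turn_left", 5))  # {'ahead_m': 20, 'lateral_m': (-6, 2)}
```

The perception module would then restrict expensive sensing and detection work to this ROI.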
Abstract: A mobile computing device receives an image from a camera physically located within a vehicle. The mobile computing device inputs the image into a convolutional model that generates a set of object detections and a set of segmented environment blocks in the image. The convolutional model includes subsets of encoding and decoding layers, as well as parameters associated with the layers. The convolutional model relates the image and parameters to the sets of object detections and segmented environment blocks. A server that stores object detections and segmented environment blocks is updated with the sets of object detections and segmented environment blocks detected in the image.
Abstract: A method for quality inspection of laser material processing includes performing laser material processing on a workpiece and generating, by a sensor, raw image data of secondary emissions during the laser material processing of the workpiece. The method also includes determining a quality of the laser material processing by analyzing the raw image data of the secondary emissions.
Type:
Grant
Filed:
May 11, 2020
Date of Patent:
February 14, 2023
Assignee:
The Boeing Company
Inventors:
Matthew Carl Johnson, Jessica Adele Boze
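The inspection step in the abstract above can be sketched as reducing the raw secondary-emission data to a statistic and comparing it against a band of acceptable values. The mean statistic, the thresholds, and the quality labels are assumptions for illustration, not the patent's analysis.

```python
# Sketch: quality call from raw secondary-emission intensities.

def inspect(raw_emission, lo=80, hi=160):
    mean = sum(raw_emission) / len(raw_emission)
    if mean < lo:
        return "insufficient energy"
    if mean > hi:
        return "overheating"
    return "ok"

print(inspect([100, 120, 110, 130]))   # ok
print(inspect([200, 210, 190, 220]))   # overheating
```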