Abstract: Various embodiments may involve obtaining an image of at least a section of a manufactured part; determining, based on executing a neural network on the image, that the manufactured part was not fabricated according to a specification for the manufactured part, wherein the neural network was trained to associate images of manufactured parts with corresponding indicators of specifications for the manufactured parts; and, in response to determining that the manufactured part was not fabricated according to the specification, generating an electronic alert indicating that the manufactured part was improperly fabricated.
Type:
Grant
Filed:
April 8, 2024
Date of Patent:
October 22, 2024
Assignees:
Sybridge Technologies U.S. Inc., The Board of Trustees of the University of Illinois
Inventors:
William Paul King, Sameh Tawfick, Miles Bimrose, Charles Wood, Davis McGregor
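Read as an algorithm, the claimed flow above reduces to: score a part image against its specification with a trained model, and raise an electronic alert when the score falls below a threshold. A minimal sketch of that decision flow, where the model, specification identifier, and threshold are all hypothetical stand-ins rather than the patented implementation:

```python
def inspect_part(image, model, spec_id, threshold=0.5):
    """Return an alert record if the part appears out of spec, else None."""
    p_match = model(image, spec_id)  # probability the image matches the spec
    if p_match < threshold:
        return {"spec": spec_id, "alert": "improperly fabricated", "score": p_match}
    return None

# Toy stand-in for a trained network: scores by mean pixel intensity.
def toy_model(image, spec_id):
    flat = [v for row in image for v in row]
    return sum(flat) / (255.0 * len(flat))

good = [[250, 240], [245, 255]]   # bright image -> high match score
bad = [[10, 20], [5, 15]]         # dark image -> low match score
assert inspect_part(good, toy_model, "SPEC-1") is None
assert inspect_part(bad, toy_model, "SPEC-1")["alert"] == "improperly fabricated"
```

The real system would replace `toy_model` with inference on the trained neural network described in the abstract.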
Abstract: A method of evaluating quality of a wafer or an apparatus of evaluating quality of a wafer may include: performing a copper-haze evaluation on a piece of a wafer or a single crystal ingot; collecting copper-haze map data and a copper-haze evaluation score based on a result of the copper-haze evaluation; training an artificial intelligence model based on the copper-haze map data and the copper-haze evaluation score; and performing crystal defect evaluation on the piece of the wafer or the single crystal ingot using the trained artificial intelligence model, which outputs the copper-haze evaluation score when the copper-haze map data is input.
Abstract: The inventive concept provides a method to determine whether a substrate treatment process is normal using a deep learning model. The method comprises receiving a substrate treatment process video as input, preprocessing the input video, training the deep learning model on the preprocessed video, and determining whether the substrate treatment process is normal by comparing the trained model's output against a real-time substrate treatment process video.
Type:
Grant
Filed:
September 14, 2021
Date of Patent:
October 8, 2024
Assignee:
SEMES CO., LTD.
Inventors:
Ohyeol Kwon, Soo Yeon Shin, Hyun Hoo Kim, Myung Chan Cho
Abstract: This application relates to a method for processing image data of a microbial culture medium to recognize colony forming unit (CFU). In one aspect, the method includes receiving, at a processor, captured image data of the microbial culture medium from a user device, and preprocessing, at the processor, the captured image data. The method may also include counting, at the processor, the number of CFUs included in the preprocessed image data to derive result data including the counted number of CFUs. The method may further include automatically inputting information included in the result data into a predetermined template to generate document data corresponding to the captured image data, and transmitting at least one of the result data or the document data to the user device.
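The colony-counting step described above can be sketched as a connected-component pass over a binarized plate image. The 4-connectivity choice, the 0/1 grid representation, and the minimum-area filter are illustrative assumptions, not the claimed preprocessing:

```python
from collections import deque

def count_cfus(binary, min_area=1):
    """Count connected components (4-connectivity) in a 0/1 grid.
    Stand-in for the CFU counting step; a real system would first
    segment the captured culture-medium image into this binary form."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                area, q = 0, deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if area >= min_area:
                    count += 1
    return count

plate = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
]
# two colonies: the top-left blob and the right-edge blob
assert count_cfus(plate) == 2
```

The counted result would then be merged into the predetermined template to produce the document data the abstract describes.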
Abstract: The embodiments of the present disclosure disclose a method for determining the direction of gaze. A specific implementation of the method includes: obtaining a face or eye image of a target subject and establishing a feature extraction network; optimizing the feature extraction network with an adversarial training method that implicitly removes gaze-irrelevant features, so that the network extracts gaze-related features from the face or eye image; and determining the target gaze direction based on the gaze-related features. This implementation separates the gaze-irrelevant features contained in the image features from the gaze-related features, so that the image features retain only the gaze-related features and the accuracy and stability of the determined direction of gaze are further improved.
Abstract: In accordance with embodiments of this disclosure, a computational simulation platform comprises a computer-implemented method that includes: generating a mesh or meshless three-dimensional (3D) reconstruction of a vessel lumen and a surface of the vessel lumen based on invasive or non-invasive imaging; assigning material properties to the 3D reconstructed surface of the vessel lumen based on the invasive or non-invasive imaging; performing balloon pre-dilation, stenting and balloon post-dilation computational simulations with the 3D reconstructed vessel lumen and surface of the vessel lumen; and assessing stent and vessel morphometric and biomechanical measures based on the computational simulations.
Type:
Grant
Filed:
May 7, 2021
Date of Patent:
September 24, 2024
Assignee:
The Board of Regents of the University of Nebraska
Abstract: Methods, systems, and apparatus for an imaging-based MicroOrganoSphere drug assay. In one aspect, a method includes obtaining image data of a well plate comprising a plurality of MicroOrganoSpheres; in response to applying a machine learning model configured to identify instances of at least some of the plurality of MicroOrganoSpheres in the image data, obtaining (i) indications indicative of each instance of the MicroOrganoSpheres and (ii) attributes of each instance of the MicroOrganoSpheres; and normalizing, based on the indications and the attributes, a well-to-well variation in the well plate.
Type:
Grant
Filed:
January 22, 2024
Date of Patent:
September 24, 2024
Assignee:
Xilis, Inc.
Inventors:
Xiling Shen, Zhaohui Wang, William Quayle, Garrett Jenkinson
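The normalization step in the abstract above can be illustrated by rescaling each well's per-MicroOrganoSphere attribute values by that well's median, so that scale differences between wells cancel. The median-division scheme and the data layout are hypothetical simplifications of the claimed normalization:

```python
def normalize_wells(wells):
    """Divide each measurement by its well's median so that
    well-to-well scale differences cancel. `wells` maps a well id to a
    list of per-instance attribute values (illustrative layout)."""
    out = {}
    for well, values in wells.items():
        s = sorted(values)
        n = len(s)
        med = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
        out[well] = [v / med for v in values]
    return out

plate = {"A1": [2.0, 4.0, 6.0], "A2": [20.0, 40.0, 60.0]}
norm = normalize_wells(plate)
# after normalization both wells show the same relative profile
assert norm["A1"] == norm["A2"] == [0.5, 1.0, 1.5]
```

In the described assay, the attribute values would come from the machine learning model's per-instance outputs rather than raw numbers.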
Abstract: Methods, systems, and apparatus for an imaging-based MicroOrganoSphere drug assay. In one aspect, a method includes obtaining image data of a well plate comprising a plurality of MicroOrganoSpheres; in response to applying a machine learning model configured to identify instances of at least some of the plurality of MicroOrganoSpheres in the image data, obtaining (i) indications indicative of each instance of the MicroOrganoSpheres and (ii) attributes of each instance of the MicroOrganoSpheres; and normalizing, based on the indications and the attributes, a well-to-well variation in the well plate.
Type:
Grant
Filed:
August 17, 2023
Date of Patent:
September 24, 2024
Assignee:
Xilis, Inc.
Inventors:
Xiling Shen, Zhaohui Wang, William Quayle, Garrett Jenkinson
Abstract: An input image acquisition unit acquires a plurality of input images in which a specific detection target is captured by a plurality of different modalities. A perturbed image acquisition unit acquires a plurality of perturbed images in which at least one of the plurality of input images is perturbed. A detection processing unit detects a detection target included in the input images using each of the plurality of perturbed images and one of the plurality of input images that has not been perturbed, and acquires, for each of the plurality of perturbed images, a detection position of the detection target and a detection confidence level as detection results. An adjustment unit calculates, based on the detection positions and the confidence levels acquired for the plurality of perturbed images, an adjusted confidence level for each of the perturbed images using integrated parameters.
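One way to picture the adjustment step above: down-weight each perturbed-image detection's confidence by how little positional support it receives from the other perturbed runs. The radius-based support score below is an illustrative stand-in for the patent's "integrated parameters":

```python
def adjusted_confidences(detections, radius=3.0):
    """Each detection is (x, y, confidence). A detection's confidence is
    scaled by the fraction of other detections landing within `radius`
    of it, so positionally unstable detections are suppressed."""
    adjusted = []
    for i, (x, y, conf) in enumerate(detections):
        support = sum(
            1 for j, (x2, y2, _) in enumerate(detections)
            if j != i and (x - x2) ** 2 + (y - y2) ** 2 <= radius ** 2
        )
        adjusted.append(conf * support / (len(detections) - 1))
    return adjusted

dets = [(10.0, 10.0, 0.9), (10.5, 9.8, 0.85), (40.0, 40.0, 0.95)]
adj = adjusted_confidences(dets)
# the outlier detection at (40, 40) loses all of its confidence
assert adj[2] == 0.0
assert abs(adj[0] - 0.45) < 1e-9
```

Stable detections across perturbations keep high adjusted confidence; spurious ones do not.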
Abstract: A cabin monitoring and situation understanding perceiving method is proposed. A cabin interior image capturing step is performed to capture a cabin interior image. A generative adversarial network model creating step is performed to create a generative adversarial network model according to the cabin interior image. An image adjusting step is performed to adjust the cabin interior image to generate an approximate image. A cabin interior monitoring step is performed to process the approximate image to generate a facial recognizing result and a human pose estimating result. A cabin exterior image and voice capturing step is performed to capture a cabin exterior image and a voice information. A situation understanding perceiving step is performed to process at least one of the approximate image, the cabin exterior image and the voice information according to a situation understanding model to perceive a situation understanding result.
Abstract: Various user-presence/absence detection techniques based on deep learning are provided. These user-presence/absence detection techniques can include building/training a deep-learning model including a user-presence/absence classifier based on training images of a user-seating area of a surgeon console under various clinically-relevant conditions. The trained user-presence/absence classifier can then be used during teleoperation/surgical procedures to monitor/track users in the user-seating area of the surgeon console, and continuously classify captured real-time video images of the user-seating area into either a user-presence classification or a user-absence classification. In some embodiments, the user-presence/absence classifier can be used to detect a user-switching event at the surgeon console when a second user is detected to have entered the user-seating area after a first user is detected to have exited the user-seating area.
Abstract: A field extraction system that does not require field-level annotations for training is provided. Specifically, the training process is bootstrapped by mining pseudo-labels from unlabeled forms using simple rules. Then, a transformer-based structure is used to model interactions between text tokens in the input form and predict a field tag for each token accordingly. The pseudo-labels are used to supervise the transformer training. As the pseudo-labels are noisy, a refinement module that contains a sequence of branches is used to refine the pseudo-labels. Each of the refinement branches conducts field tagging and generates refined labels. At each stage, a branch is optimized by the labels ensembled from all previous branches to reduce label noise.
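The ensembling idea in the refinement module above can be sketched as a per-token majority vote over the tags produced by earlier branches; later branches would then be supervised by these ensembled labels. This is a simplified stand-in for the described refinement, not the paper's architecture:

```python
from collections import Counter

def refine_labels(branch_predictions):
    """Ensemble field tags from a sequence of refinement branches by
    per-token majority vote to reduce pseudo-label noise."""
    num_tokens = len(branch_predictions[0])
    refined = []
    for t in range(num_tokens):
        votes = Counter(preds[t] for preds in branch_predictions)
        refined.append(votes.most_common(1)[0][0])
    return refined

# three branches tag four tokens; noisy disagreements get voted out
branches = [
    ["DATE", "O", "TOTAL", "O"],
    ["DATE", "O", "TOTAL", "TOTAL"],
    ["O",    "O", "TOTAL", "O"],
]
assert refine_labels(branches) == ["DATE", "O", "TOTAL", "O"]
```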
Abstract: A method for generating an optical marker for image processing/photogrammetry/motion detection using an output unit and/or a control and/or regulation unit. The optical marker is output/generated in such a way that the represented optical marker is formed by a regular pattern of angular structures and by substructures, each of which is situated completely within one of the structures. In each case, at least two directly adjacent structures, viewed in at least two mutually perpendicularly oriented directions along a projection plane of the optical marker, have different colors, and a color sequence of the plurality of structures periodically repeats along the two directions. The optical marker is formed from unique minimum recognition areas within the optical marker. The optical marker is output/generated in such a way that the substructures each include an imaging surface that corresponds to at least 15% of a maximum projection surface spanned by one of the structures.
Abstract: Surgical planning systems that automatically identify one or a plurality of different candidate trajectories to a defined intrabody treatment region. The systems can rank the identified candidate trajectories in an order of hierarchy based on defined parameters such as distance from a critical no-go location and whether a single or multiple different candidate trajectories are needed to provide coverage of the defined intrabody treatment region. The surgical planning systems are also configured to provide a User Interface that defines a workflow for an image-guided surgical procedure and allows a user to select one or more of the identified candidate trajectories at steps in the workflow.
Type:
Grant
Filed:
April 16, 2021
Date of Patent:
September 10, 2024
Assignee:
ClearPoint Neuro, Inc.
Inventors:
Timothy Neil Orr, Philip Bradley Hotte, Christian Richard Osswald
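The ranking described in the abstract above can be illustrated by scoring each candidate trajectory by its closest approach to a critical no-go location and ranking safer (farther) trajectories first. The point-list trajectory representation and the single-parameter score are illustrative simplifications of the claimed hierarchy:

```python
def rank_trajectories(candidates, no_go):
    """Rank candidate trajectories by the distance of their closest
    approach to a no-go location; larger clearance ranks first.
    Each candidate is (name, [3D points along the trajectory])."""
    def clearance(points):
        return min(
            ((x - no_go[0]) ** 2 + (y - no_go[1]) ** 2 + (z - no_go[2]) ** 2) ** 0.5
            for x, y, z in points
        )
    return sorted(candidates, key=lambda c: clearance(c[1]), reverse=True)

no_go = (0.0, 0.0, 0.0)
cands = [
    ("A", [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]),   # passes within 1 unit
    ("B", [(5.0, 5.0, 0.0), (6.0, 5.0, 0.0)]),   # stays far away
]
ranked = rank_trajectories(cands, no_go)
assert [name for name, _ in ranked] == ["B", "A"]
```

A full planner would combine this clearance term with the coverage criterion the abstract mentions.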
Abstract: A method for capturing digital data for fabricating a dental splint involves displaying a GUI on a display of a smartphone that provides an alignment feature for a user to align a camera of the smartphone to a first position that captures teeth of a person, receiving digital video of the teeth, overlaying the alignment feature on the digital video of the teeth on the display to show alignment between the teeth and the alignment feature, capturing digital image information of the teeth while the alignment feature overlaps with the teeth, the captured digital image information including depth information, and transmitting the captured digital image information, including the depth information, from the smartphone for use in fabricating a dental splint.
Abstract: A method for performing de-identified location analytics is provided. The method may include generating a de-identified image by removing the identifiable characteristics from a building arrangement image; generating a binary threshold image by processing the de-identified image using a threshold process; generating a segmentation image by removing any segmentation objects of the binary threshold image with an area less than a defined pixel area; extracting room definitions from the segmentation image, wherein each room definition comprises a series of pixels corresponding to an outline of one of the rooms of the building arrangement image; generating a pixilation table, the pixilation table comprising room definition entries corresponding to the room definitions of the segmentation image, wherein each pixel of the segmentation image is mapped to the room definition entry corresponding to the room definition of the outline surrounding the pixel; and assigning a label to each room definition entry.
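The last two steps above (dropping small segmentation objects, then mapping every pixel to its room definition entry) can be sketched directly. The label-grid input and area threshold are illustrative assumptions about the intermediate representation:

```python
from collections import defaultdict

def build_pixilation_table(segmentation, min_area=2):
    """Map each labeled pixel to a room-definition entry, discarding
    segmentation objects smaller than `min_area` pixels.
    `segmentation` is a grid of room labels, with 0 as background."""
    rooms = defaultdict(list)
    for r, row in enumerate(segmentation):
        for c, label in enumerate(row):
            if label:
                rooms[label].append((r, c))
    # drop small objects, then invert to a pixel -> room mapping
    table = {}
    for label, pixels in rooms.items():
        if len(pixels) >= min_area:
            for p in pixels:
                table[p] = label
    return table

seg = [
    [1, 1, 0],
    [0, 2, 0],   # room 2 is a single stray pixel -> removed
]
table = build_pixilation_table(seg)
assert table == {(0, 0): 1, (0, 1): 1}
```

Labels for each surviving room definition entry would then be assigned as the abstract's final step describes.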
Abstract: Techniques for localizing a vehicle include obtaining an image from a camera, identifying a set of image feature points in the image, obtaining an approximate location of the vehicle, determining a set of sub-volumes (SVs) of a map to access based on the approximate location, obtaining map feature points and associated map feature descriptors associated with the set of SVs, determining a set of candidate matches between the set of image feature points and the obtained map feature points, determining a set of potential poses of the camera from candidate matches from the set of candidate matches and an associated reprojection error estimated for remaining points to select a first pose of the set of potential poses having a lowest associated reprojection error, determining the first pose is within a threshold value of an expected vehicle location, and outputting a vehicle location based on the first pose.
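The final selection step above (take the pose with the lowest reprojection error, then sanity-check it against the expected vehicle location) can be sketched as follows. The 2D pose representation and fixed threshold are hypothetical simplifications:

```python
def select_pose(candidate_poses, expected, threshold=2.0):
    """Pick the candidate pose with the lowest reprojection error, then
    accept it only if it lies within `threshold` of the expected
    vehicle location. Each candidate is ((x, y), reprojection_error)."""
    best_pose, best_err = min(candidate_poses, key=lambda p: p[1])
    dx = best_pose[0] - expected[0]
    dy = best_pose[1] - expected[1]
    if (dx * dx + dy * dy) ** 0.5 <= threshold:
        return best_pose
    return None

poses = [((10.2, 5.1), 0.8), ((10.0, 5.0), 0.3), ((30.0, 2.0), 0.1)]
# the lowest-error pose near the expected location is accepted...
assert select_pose(poses[:2], expected=(10.0, 5.0)) == (10.0, 5.0)
# ...but a pose far from the expected location is rejected
assert select_pose([poses[2]], expected=(10.0, 5.0)) is None
```

In the described system, the candidate poses would come from matching image feature points against map feature points in the relevant sub-volumes.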
Abstract: An image processing method and system, and a computer readable storage medium. The method comprises: obtaining an image to be processed (S101); performing grayscale processing on the image to be processed to obtain a grayscale image, and performing blurring processing on the image to be processed to obtain a first blurred image (S102); performing binarization processing on the image to be processed according to the grayscale image and the first blurred image to obtain a binarized image (S103); performing expansion processing on grayscale values of high-value pixel points in the binarized image to obtain an expanded image (S104); performing sharpening processing on the expanded image to obtain a sharp image (S105); adjusting the contrast of the sharp image to obtain a contrast image (S106); and using the grayscale image as a guided image to perform guided filter processing on the contrast image to obtain a target image (S107).
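The first three steps of the pipeline above (S102's grayscale and blurring branches, and S103's binarization against the blurred image) can be sketched on a tiny image. The luminance weights, 3x3 box blur, and brighter-than-local-neighborhood binarization rule are one common reading of such steps, not the patent's exact formulas:

```python
def to_gray(rgb):
    """Luminance grayscale (S102, grayscale branch)."""
    return [[int(round(0.299 * r + 0.587 * g + 0.114 * b))
             for r, g, b in row] for row in rgb]

def box_blur(gray):
    """3x3 box blur with edge clamping (S102, blurring branch)."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [gray[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) // 9
    return out

def binarize(gray, blurred):
    """S103: a pixel is foreground when brighter than its local
    (blurred) neighborhood."""
    return [[255 if g > b else 0 for g, b in zip(grow, brow)]
            for grow, brow in zip(gray, blurred)]

rgb = [[(200, 200, 200), (10, 10, 10)],
       [(10, 10, 10), (10, 10, 10)]]
gray = to_gray(rgb)
binary = binarize(gray, box_blur(gray))
assert binary[0][0] == 255 and binary[1][1] == 0
```

The remaining steps (dilation, sharpening, contrast adjustment, and guided filtering with the grayscale image as guide) would follow the same pattern on the binarized result.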
Abstract: A method and apparatus for (i) selecting an imaging angle with minimized foreshortening and/or overlap of a target region from an existing angiographic image and/or (ii) selecting an imaging angle for new images so that foreshortening and/or overlap are minimized. A viewing angle cost function is determined that defines optimal viewing angles at least with respect to minimizing foreshortening of the target region. Using the cost function, an image may be selected from among a set of images, which potentially does not match the optimal imaging angle due to the optimal imaging angle having a high cost as a result of overlapping vascular features. The selected image may have an imaging angle that corresponds to a lower cost due to less overlap compared to the optimal imaging angle.
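The cost-function trade-off in the abstract above can be illustrated with a toy model: foreshortening grows as the view aligns with the vessel axis, and an overlap penalty can push the selection away from the geometrically optimal angle. Both cost terms are illustrative assumptions, not the patented function:

```python
import math

def viewing_cost(angle_deg, vessel_axis_deg, overlap_penalty):
    """Cost of imaging at `angle_deg`: foreshortening is modeled as the
    cosine of the view's alignment with the vessel axis, plus a
    per-angle overlap penalty (both terms are illustrative)."""
    foreshortening = abs(math.cos(math.radians(angle_deg - vessel_axis_deg)))
    return foreshortening + overlap_penalty.get(angle_deg, 0.0)

def select_angle(candidate_angles, vessel_axis_deg, overlap_penalty):
    """Pick the candidate angle with the lowest total cost."""
    return min(candidate_angles,
               key=lambda a: viewing_cost(a, vessel_axis_deg, overlap_penalty))

# the geometrically optimal angle (90 deg off-axis) is heavily
# overlapped, so a nearby angle with less overlap wins instead
penalty = {90: 1.0, 70: 0.1}
best = select_angle([0, 70, 90], vessel_axis_deg=0, overlap_penalty=penalty)
assert best == 70
```

This mirrors the abstract's point that the selected image may deviate from the foreshortening-optimal angle when that angle carries a high overlap cost.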