Patents Examined by Dustin Bilodeau
-
Patent number: 12236569
Abstract: A method for welding a workpiece with a vision-guided welding platform. The welding platform comprises a welding tool, and a camera for guiding the movement of the welding tool from a start point to an end point. The method includes the steps of adjusting a focal length of the camera such that a focal plane of the camera is located on a surface of the workpiece and obtaining a surface image of the workpiece. The method further includes the steps of determining a current focal length of the camera, determining a corrected pixel length of a pixel in the surface image and determining the number of pixels between the start point and the end point of each movement of the welding tool. Using the corrected pixel length, a distance between the start and end points is determined and the welding tool is guided to move therebetween.
Type: Grant
Filed: January 21, 2022
Date of Patent: February 25, 2025
Assignees: TE Connectivity Solutions GmbH, Tyco Electronics (Shanghai) Co., Ltd.
Inventors: Zongjie (Jason) Tao, Dandan (Emily) Zhang, Roberto Francisco-Yi Lu
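The corrected-pixel-length arithmetic described above can be sketched in a few lines. This assumes a simple pinhole model (magnification = focal length / working distance); the function names and parameter values are illustrative, not the patent's actual calibration procedure.

```python
import math

def corrected_pixel_length(sensor_pixel_pitch_mm, focal_length_mm, working_distance_mm):
    """Approximate physical length one image pixel covers on the workpiece
    surface, assuming a pinhole model where magnification is the ratio of
    focal length to object distance (an assumption, not the patent's method)."""
    magnification = focal_length_mm / working_distance_mm
    return sensor_pixel_pitch_mm / magnification

def weld_path_distance(start_px, end_px, pixel_length_mm):
    """Physical distance between the start and end points of one
    welding-tool movement, from their pixel coordinates."""
    pixels = math.hypot(end_px[0] - start_px[0], end_px[1] - start_px[1])
    return pixels * pixel_length_mm

# Example: 3.45 um sensor pixels, 50 mm lens focused at 200 mm (hypothetical values).
pl = corrected_pixel_length(0.00345, 50.0, 200.0)   # mm per image pixel
dist = weld_path_distance((100, 100), (400, 500), pl)  # mm between weld points
```

With these numbers, 500 pixels of image displacement corresponds to roughly 6.9 mm of tool travel.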
-
Patent number: 12229932
Abstract: A method of generating a defect image for deep learning and a system therefor are provided. The method and the system are intended to be used in generating training data for an artificial intelligence algorithm. More specifically, the training data are defect images required to train an algorithm that identifies a defect from a product.
Type: Grant
Filed: September 16, 2021
Date of Patent: February 18, 2025
Assignee: DOOSAN ENERBILITY CO., LTD.
Inventors: Jung Min Lee, Jung Moon Kim
-
Patent number: 12223017
Abstract: A method may include capturing image data associated with an object in a defined environment at one or more points in time. The method may include capturing radar data associated with the object in the defined environment at the same points in time. The method may include obtaining, by a machine learning model, the image data and the radar data associated with the object in the defined environment. The method may include pairing each image datum with a corresponding radar datum based on a chronological occurrence of the image data and the radar data. The method may include generating, by the machine learning model, a three-dimensional motion representation associated with the object that is associated with the image data and the radar data.
Type: Grant
Filed: August 27, 2021
Date of Patent: February 11, 2025
Assignee: Rapsodo Pte. Ltd.
Inventors: Batuhan Okur, Roshan Gopalakrishnan
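The chronological pairing step might look like the sketch below, which matches each image to the radar sample nearest in time under an assumed tolerance. The data layout, function name, and tolerance are hypothetical; the abstract does not specify the pairing rule.

```python
def pair_by_time(image_data, radar_data, tolerance=0.05):
    """Pair each image datum with the radar datum closest in time.
    Each datum is a (timestamp_seconds, payload) tuple; samples with no
    radar reading within `tolerance` seconds are dropped (an assumption)."""
    pairs = []
    for t_img, img in image_data:
        # nearest radar sample in time
        t_rad, rad = min(radar_data, key=lambda r: abs(r[0] - t_img))
        if abs(t_rad - t_img) <= tolerance:
            pairs.append((img, rad))
    return pairs

images = [(0.00, "img0"), (0.10, "img1"), (0.20, "img2")]
radar  = [(0.01, "rad0"), (0.11, "rad1"), (0.35, "rad2")]
pairs = pair_by_time(images, radar)
# img2 at t=0.20 has no radar sample within 50 ms and is dropped
```

The paired (image, radar) tuples would then be the per-timestep input to the motion-representation model.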
-
Patent number: 12188845
Abstract: A drive-through vehicle inspection system with a method for acquiring information from markings on tire sidewall surfaces of a moving vehicle. As the vehicle passes through the inspection system, sets of colored light sources, disposed at different relative orientations on opposite lateral sides of the vehicle, illuminate each passing wheel, enabling optical imaging systems associated with the opposite lateral sides of the inspection lane to acquire color images of the illuminated tire sidewall surfaces. Acquired color images are passed to a processing system and separated into individual red, green, and blue color channels for image processing. The processed output from each color channel is recombined by the processing system into a synthesized grayscale image highlighting and emphasizing markings present on the tire sidewall surfaces for evaluation by an OCR algorithm to retrieve tire identifying information.
Type: Grant
Filed: December 15, 2021
Date of Patent: January 7, 2025
Assignee: Hunter Engineering Company
Inventor: David A. Voeller
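The channel-separation and grayscale-synthesis pipeline could be sketched as below. The per-channel weights stand in for the patent's (unspecified) per-channel processing and are illustrative assumptions only.

```python
def synthesize_grayscale(rgb_pixels, weights=(0.5, 0.3, 0.2)):
    """Split interleaved RGB pixels into separate red, green, and blue
    channel lists, apply a per-channel gain (a stand-in for the patent's
    per-channel processing), and recombine into one grayscale value per
    pixel. The weights are hypothetical, chosen only for illustration."""
    red   = [p[0] for p in rgb_pixels]
    green = [p[1] for p in rgb_pixels]
    blue  = [p[2] for p in rgb_pixels]
    wr, wg, wb = weights
    return [min(255, round(r * wr + g * wg + b * wb))
            for r, g, b in zip(red, green, blue)]

gray = synthesize_grayscale([(200, 40, 10), (30, 180, 60)])
```

The synthesized grayscale list would then be fed to the OCR stage in place of any single raw channel.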
-
Patent number: 12165329
Abstract: A system and method for unsupervised superpixel-driven instance segmentation of a remote sensing image are provided. The remote sensing image is divided into one or more image patches. The one or more image patches are processed to generate one or more superpixel aggregation patches based on a graph-based aggregation model, respectively. The graph-based aggregation model is configured to learn at least one of a spatial affinity or a feature affinity of a plurality of superpixels from each image patch and aggregate the plurality of superpixels based on the at least one of the spatial affinity or the feature affinity of the plurality of superpixels. The one or more superpixel aggregation patches are combined into an instance segmentation image.
Type: Grant
Filed: February 8, 2022
Date of Patent: December 10, 2024
Assignee: PING AN TECHNOLOGY (SHENZHEN) CO., LTD
Inventors: Zhicheng Yang, Hang Zhou, Jui-Hsin Lai, Mei Han
-
Patent number: 12156752
Abstract: Various methods and systems are provided for computed tomography imaging. In one embodiment, a method includes acquiring, with an x-ray detector and an x-ray source coupled to a gantry, a three-dimensional image volume of a subject while the subject moves through a bore of the gantry and the gantry rotates the x-ray detector and the x-ray source around the subject, inputting the three-dimensional image volume to a trained deep neural network to generate a corrected three-dimensional image volume with a reduction in aliasing artifacts present in the three-dimensional image volume, and outputting the corrected three-dimensional image volume. In this way, aliasing artifacts caused by sub-sampling may be removed from computed tomography images while preserving details, texture, and sharpness in the computed tomography images.
Type: Grant
Filed: August 11, 2021
Date of Patent: December 3, 2024
Assignee: GE PRECISION HEALTHCARE LLC
Inventors: Rajesh Langoju, Utkarsh Agrawal, Risa Shigemasa, Bipul Das, Yasuhiro Imai, Jiang Hsieh
-
Patent number: 12112499
Abstract: A system and method for identifying a box to be picked up by a robot from a stack of boxes. The method includes obtaining a 2D red-green-blue (RGB) color image of the boxes and a 2D depth map image of the boxes using a 3D camera. The method employs an image segmentation process that uses a simplified mask R-CNN executable by a central processing unit (CPU) to predict which pixels in the RGB image are associated with each box, where the pixels associated with each box are assigned a unique label, and the labeled pixels combine to define a mask for the box. The method then identifies a location for picking up the box using the segmentation image.
Type: Grant
Filed: November 30, 2021
Date of Patent: October 8, 2024
Assignee: FANUC CORPORATION
Inventors: Te Tang, Tetsuaki Kato
-
Patent number: 12100209
Abstract: An image analysis method, including: obtaining influencing factors of t frames of images, where the influencing factors include self-owned features of h target subjects in each of the t frames of images and relational vector features between the h target subjects in each of the t frames of images, self-owned features of each target subject include a location feature, an attribute feature, and a posture feature, and t and h are natural numbers greater than 1; and obtaining a panoramic semantic description based on the influencing factors, where the panoramic semantic description includes a description of relationships between target subjects, relationships between actions of the target subjects and the target subjects, and relationships between the actions of the target subjects.
Type: Grant
Filed: July 1, 2021
Date of Patent: September 24, 2024
Assignee: HUAWEI CLOUD COMPUTING TECHNOLOGIES CO., LTD.
Inventors: Pengpeng Zheng, Jiahao Li, Xin Jin, Dandan Tu
-
Patent number: 12086970
Abstract: A computer vision method performed by a computing system includes: receiving an image; identifying a light intensity value for each pixel of a set of pixels of the image; defining a light intensity band formed by an upper light intensity threshold and a lower light intensity threshold based on a light intensity distribution of the light intensity values of the set of pixels; for each pixel of the set of pixels, identifying whether that pixel has a light intensity value that is within the light intensity band or outside of the light intensity band; and generating a modified image by increasing a light intensity contrast between a first subset of pixels identified as having light intensity values within the light intensity band and a second subset of pixels identified as having light intensity values outside of the light intensity band.
Type: Grant
Filed: September 24, 2021
Date of Patent: September 10, 2024
Assignee: The Boeing Company
Inventors: Caleb G. Price, Jeffrey H. Hunt
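A minimal sketch of the intensity-band contrast step, assuming the band is placed at ±k standard deviations around the mean intensity and that contrast is increased by simple multiplicative gains; both the band placement and the gain values are assumptions, since the abstract only says the thresholds come from the intensity distribution.

```python
import statistics

def stretch_band_contrast(intensities, k=1.0, gain=1.2, cut=0.8):
    """Define an intensity band of +/- k standard deviations around the
    mean, then boost pixels inside the band and attenuate pixels outside
    it, increasing the contrast between the two subsets. Band rule and
    gains are illustrative assumptions, not the patented thresholds."""
    mean = statistics.fmean(intensities)
    sd = statistics.pstdev(intensities)
    lower, upper = mean - k * sd, mean + k * sd
    out = []
    for v in intensities:
        factor = gain if lower <= v <= upper else cut
        out.append(max(0, min(255, round(v * factor))))  # clamp to 8-bit range
    return out

out = stretch_band_contrast([100, 110, 120, 250])
```

Here the three mid-range pixels are brightened while the outlier at 250 is pulled down, widening the gap between the two subsets.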
-
Patent number: 12073548
Abstract: A machine vision based automatic needle cannula inspection system includes an inspection and control unit, image capture devices, light sources, a unit that makes the needle cannula and the image capture device(s) rotate relative to each other, and a rejected part removal unit. By rotating the needle cannula and image capture devices relative to each other, a plurality of images captured along the circumferential direction of the needle cannula are directly saved to a computer; the images are then screened, processed and analyzed to fulfill the automatic inspection of multiple quality and technical parameters of the needle cannula without the need to position the bevel area of the cannula tip to a specific direction. Inspection parameters and accuracy can be set at any time, the system can automatically record classification and statistics of passed and rejected needle cannulas for query, and the rejected cannulas are removed automatically at the next position.
Type: Grant
Filed: December 16, 2020
Date of Patent: August 27, 2024
Assignee: JB MEDICAL, INC.
Inventor: Jibin Yang
-
Patent number: 12020397
Abstract: A rectangle creating unit creates a rectangle circumscribing a lesion area in a medical image. A division-number-ratio calculating unit calculates a division-number ratio based on an image aspect ratio of an input image to be input to a device that identifies a lesion and on a rectangle aspect ratio between the vertical and horizontal lengths of the rectangle. A multiplying-factor calculating unit calculates, based on the division-number ratio, a resizing multiplying-factor for each of the vertical direction and the horizontal direction of a rectangular image encircled by the rectangle and including the lesion area. A resizing unit resizes the rectangular image with the resizing multiplying-factor. A dividing unit divides the resized rectangular image into one or more images in such a manner that the size of each divided image matches the size of the input image.
Type: Grant
Filed: January 10, 2020
Date of Patent: June 25, 2024
Assignee: NEC CORPORATION
Inventor: Ryosaku Shino
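One plausible reading of the division-number ratio and resizing multiplying-factors might be sketched as follows: pick how many input-sized tiles fit along each axis, then resize the rectangle so it divides evenly into that grid. The rounding rule and the even-division constraint are assumptions, not details from the patent.

```python
def resize_and_divide(rect_h, rect_w, in_h, in_w):
    """Choose per-axis division counts so the lesion rectangle roughly
    matches a grid of input-sized tiles, then compute per-axis resizing
    multiplying-factors so the resized rectangle divides evenly into
    tiles of exactly the classifier's input size. A simplified reading
    of the abstract; rect/in sizes are hypothetical parameters."""
    # division counts: how many input tiles to use along each axis
    n_v = max(1, round(rect_h / in_h))
    n_h = max(1, round(rect_w / in_w))
    # resizing multiplying-factors for the vertical and horizontal directions
    f_v = (n_v * in_h) / rect_h
    f_h = (n_w := n_h * in_w) / rect_w
    return (n_v, n_h), (n_v * in_h, n_w), (f_v, f_h)

grid, size, factors = resize_and_divide(300, 500, 224, 224)
# a 300x500 lesion rectangle becomes a 1x2 grid of 224x224 tiles
```

Each 224x224 tile of the resized rectangle can then be fed to the lesion-identification device unchanged.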
-
Patent number: 12020436
Abstract: The present invention provides an unsupervised domain adaptive segmentation network comprising a feature extractor configured for extracting features from a 3D MRI scan image; a decorrelation and whitening module configured for performing decorrelation and whitening transformation on the extracted features to obtain whitened features; a domain-specific feature translation module configured for translating domain-specific features from a source domain into a target domain for adapting the unsupervised domain adaptive network to the target domain; and a classifier configured for projecting the whitened features into a zonal segmentation prediction. By implementing the domain-specific feature translation module for transferring the knowledge learned from the labeled source domain data to unlabeled target domain data, the domain gap between the source and target data can be narrowed.
Type: Grant
Filed: December 6, 2021
Date of Patent: June 25, 2024
Assignee: City University of Hong Kong
Inventors: Yixuan Yuan, Xiaoqing Guo
-
Patent number: 12014446
Abstract: A computing system for generating predicted images along a trajectory of unseen viewpoints. The system can obtain one or more spatial observations of an environment that may be captured from one or more previous camera poses. The system can generate a three-dimensional point cloud for the environment from the one or more spatial observations and the one or more previous camera poses. The system can project the three-dimensional point cloud into two-dimensional space to form one or more guidance spatial observations. The system can process the one or more guidance spatial observations with a machine-learned spatial observation prediction model to generate one or more predicted spatial observations. The system can process the one or more predicted spatial observations and image data with a machine-learned image prediction model to generate one or more predicted images from the target camera pose. The system can output the one or more predicted images.
Type: Grant
Filed: August 23, 2021
Date of Patent: June 18, 2024
Assignee: GOOGLE LLC
Inventors: Jing Yu Koh, Honglak Lee, Yinfei Yang, Jason Michael Baldridge, Peter James Anderson
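The projection of the 3D point cloud into 2D could be sketched with a standard pinhole intrinsic model; the intrinsic values and the z > 0 culling rule are assumptions for illustration, not details from the patent.

```python
def project_points(points, fx, fy, cx, cy):
    """Project 3D camera-frame points (x, y, z) into pixel coordinates
    using pinhole intrinsics (focal lengths fx, fy and principal point
    cx, cy); points behind the camera (z <= 0) are discarded. A minimal
    stand-in for projecting the reconstructed point cloud into a 2D
    guidance observation at a new viewpoint."""
    pixels = []
    for x, y, z in points:
        if z <= 0:
            continue  # behind the camera, not visible
        pixels.append((fx * x / z + cx, fy * y / z + cy))
    return pixels

px = project_points([(0.0, 0.0, 2.0), (1.0, -0.5, 4.0), (0.0, 0.0, -1.0)],
                    fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

In the abstract's pipeline, points would first be transformed into the target camera's frame before this projection, and the resulting sparse 2D observation would guide the prediction models.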
-
Patent number: 11989977
Abstract: A system and method for authoring and implementing context-aware applications (CAPs) are disclosed. The system and method enables users to record their daily activities and then build and deploy customized CAPs onto augmented reality platforms in which automated actions are performed in response to user-defined human actions. The system and method utilizes an integrated augmented reality platform composed of multiple camera systems, which allows for non-intrusive recording of end-users' activities and context detection while authoring and implementing CAPs. The system and method provides an augmented reality authoring interface for browsing, selecting, and editing recorded activities, and creating flexible CAPs through spatial interaction and visual programming.
Type: Grant
Filed: June 30, 2021
Date of Patent: May 21, 2024
Assignee: Purdue Research Foundation
Inventors: Karthik Ramani, Tianyi Wang, Xun Qian
-
Patent number: 11983007
Abstract: A control system for positioning a marker over a pre-existing roadway surface mark. The control system has one or more imagers having a field of view for imaging an area of the roadway surface encompassing the roadway mark, and a computer having a machine learning network to process the roadway mark image and position the marker over the pre-existing roadway mark.
Type: Grant
Filed: April 14, 2021
Date of Patent: May 14, 2024
Assignee: LIMNTECH LLC
Inventors: Douglas D. Dolinar, William R. Haller, Charles R. Drazba, Matt W. Smith, Kyle J. Leonard, Eric M. Stahl
-
Patent number: 11983627
Abstract: A joint training network including a multi-head module comprising a network input; a feature network coupled to the network input and including a feature detector decoder outputting interest points and a descriptor generator decoder outputting descriptors, the feature detector decoder and the descriptor generator decoder coupled in parallel; a depth network including a monocular depth prediction decoder and outputting a depth map; a flow network including an image segmentation decoder and outputting a segmented image; a segmentation network including a warping module outputting a rotation and translation and an input warp signal to a segmentation decoder outputting a residual flow; and a pose network including a fully connected pose estimator coupled to an adder that receives input from the pose estimator and the residual flow from the segmentation decoder, the adder outputting an optical flow.
Type: Grant
Filed: May 6, 2021
Date of Patent: May 14, 2024
Assignee: Black Sesame Technologies Inc.
Inventor: Yu Huang
-
Patent number: 11899710
Abstract: An image recognition method, electronic device, and storage medium are provided and relate to the fields of artificial intelligence, computer vision, deep learning, image processing, and the like. The method includes: performing joint training on a first sub-network configured for recognition processing and a second sub-network configured for retrieval processing in a classification network by adopting an identical set of training data to obtain a trained target classification network, wherein the first sub-network and the second sub-network are twin networks that are consistent in network structures and share a set of weights; and inputting image data to be recognized into the target classification network to obtain a recognition result. By adopting the method, the accuracy of the image recognition may be improved.
Type: Grant
Filed: June 28, 2021
Date of Patent: February 13, 2024
Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
Inventor: Min Yang