Patent Applications Published on April 30, 2020
  • Publication number: 20200134818
    Abstract: In one embodiment, a medical image processing apparatus includes: processing circuitry configured to acquire three-dimensional image data including a liver of a donor, extract a region of the liver and vessels in the liver from the three-dimensional image data, and determine, in the extracted region of the liver, a cross section of the liver in such a manner that the volume of the liver to be resected from the donor satisfies a predetermined matching condition for transplantation and the number of vessels on the cross section is minimized.
    Type: Application
    Filed: October 22, 2019
    Publication date: April 30, 2020
    Applicant: CANON MEDICAL SYSTEMS CORPORATION
    Inventors: Chihiro HATTORI, Kota Aoyagi
  • Publication number: 20200134819
    Abstract: Provided are an information processing apparatus, an information processing method, and a program capable of accurately extracting a target region from a medical image. An information processing apparatus includes an extraction unit that extracts information indicating a physique of a subject from an image acquired by imaging the subject, a specification unit that specifies a group into which the subject is classified by using the information indicating the physique of the subject extracted by the extraction unit, and a generation unit that generates a learned model for each group through machine learning using, as learning data, image data indicating a medical image acquired by imaging the subject for each group and information indicating a region extracted from the medical image.
    Type: Application
    Filed: October 22, 2019
    Publication date: April 30, 2020
    Applicant: FUJIFILM Corporation
    Inventor: Kenta YAMADA
  • Publication number: 20200134820
    Abstract: In a computer implemented method of determining a boundary of a tumor region or other diseased tissue, hyper- or multispectral image data of a tissue sample including a tumor region or other diseased tissue is taken. The analysis includes a morphological analysis and a spectral analysis of the hyper- or multispectral image data resulting in a morphological tumor boundary and a spectral tumor boundary or a morphological diseased tissue boundary and a spectral diseased tissue boundary. These two boundaries are combined resulting in a combined tumor boundary or combined diseased tissue boundary, wherein an indication of reliability of the combined tumor boundary or combined diseased tissue boundary is given.
    Type: Application
    Filed: October 23, 2019
    Publication date: April 30, 2020
    Inventors: BERNARDUS HENDRIKUS WILHELMUS HENDRIKS, HONG LIU, CAIFENG SHAN
  • Publication number: 20200134821
    Abstract: A convolutional neural network (CNN) and associated method for identifying basal cell carcinoma are disclosed. The CNN comprises two convolution layers, two pooling layers and at least one fully-connected layer. The first convolution layer uses initial Gabor filters whose kernel parameters are set in advance based on professional human knowledge. The method uses collagen fiber images as training images and converts doctors' knowledge into the initialization of the Gabor filters. The invention provides better training performance in terms of training time consumption and training material overhead.
    Type: Application
    Filed: October 26, 2018
    Publication date: April 30, 2020
    Inventors: Gwo Giun Lee, Zheng-Han Yu, Shih-Yu Chen
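The Gabor-initialized first layer described in publication 20200134821 builds on the standard Gabor filter, a Gaussian envelope multiplied by a sinusoidal carrier. A minimal plain-Python sketch of such a kernel; the function name and parameterization are illustrative, not taken from the patent:

```python
import math

def gabor_kernel(size, sigma, theta, lambd, psi=0.0, gamma=1.0):
    """Real part of a Gabor filter: a Gaussian envelope times a cosine carrier.

    size   - odd kernel width/height in pixels
    sigma  - standard deviation of the Gaussian envelope
    theta  - orientation of the carrier, in radians
    lambd  - wavelength of the carrier
    psi    - phase offset; gamma - spatial aspect ratio
    """
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates so the carrier runs along theta.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            carrier = math.cos(2 * math.pi * xr / lambd + psi)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel
```

A bank of such kernels at several orientations, chosen to match the dominant directions of collagen fibers, could serve as the initial weights of a first convolution layer instead of random initialization.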
  • Publication number: 20200134822
    Abstract: The present invention provides a method for classifying biological tissues with higher precision than a conventional method. When measuring a spectrum which has a two-dimensional distribution correlated with a slice of a biological tissue, and acquiring a biological tissue image from the two-dimensional measured spectrum, the method includes dividing the image region into a plurality of small blocks, and then reconstructing the biological tissue image by using the measured spectrum and a classifier corresponding to each of the regions.
    Type: Application
    Filed: December 27, 2019
    Publication date: April 30, 2020
    Inventor: Koichi Tanji
  • Publication number: 20200134823
    Abstract: An image processing apparatus supports detection of a pathological change over time in a portion of a captured region commonly included in a first image and a second image, the first image and the second image being acquired by capturing a subject at different respective times. The image processing apparatus includes an acquisition unit configured to acquire a subtraction image of the first image and the second image representing the pathological change over time as a difference, and a recording unit configured to record, in a storage unit in association with the subtraction image, information indicating said portion using a name different from a name of the commonly included region.
    Type: Application
    Filed: October 17, 2019
    Publication date: April 30, 2020
    Inventors: Yutaka Emoto, Yoshio Iizuka, Masahiro Yakami, Mizuho Nishio
  • Publication number: 20200134824
    Abstract: Smart metrology methods and apparatuses disclosed herein process images for automatic metrology of desired features. An example method at least includes extracting a region of interest from an image, the region including one or more boundaries between different sections, enhancing at least the extracted region of interest based on one or more filters, generating a multi-scale data set of the region of interest based on the enhanced region of interest, initializing a model of the region of interest, optimizing a plurality of active contours within the enhanced region of interest based on the model of the region of interest and further based on the multi-scale data set, the optimized plurality of active contours identifying the one or more boundaries within the region of interest, and performing metrology on the region of interest based on the identified boundaries.
    Type: Application
    Filed: October 31, 2018
    Publication date: April 30, 2020
    Inventor: Umesh ADIGA
  • Publication number: 20200134825
    Abstract: Provided is a technology for extracting an image of a target plane from 2D or 3D image data acquired by a medical imaging apparatus with a small amount of computation and at high speed. A plane of a target plane including a predetermined structure is extracted from image data of a subject. A region of the predetermined structure included in the plane is detected by applying, to a plurality of planes obtained from the image data, a learning model trained using learning data that includes a target plane for learning containing an image of the structure and a region-of-interest plane for learning obtained by cutting out and enlarging a partial region containing the structure in that target plane for learning; the plane of the target plane is then extracted based on the detected region of the predetermined structure.
    Type: Application
    Filed: October 7, 2019
    Publication date: April 30, 2020
    Inventors: Yun LI, Takashi TOYOMURA, Kenta INOUE, Toshinori MAEDA
  • Publication number: 20200134826
    Abstract: An electronic apparatus which generates a display image, includes: terminals; and at least one processor and/or at least one circuit to perform the operations of the following units: obtaining unit configured to obtain second images each having a second number of pixels from the terminals, wherein the second images form a first image having a first number of pixels, setting unit configured to set a region of the first image corresponding to the display image based on user input, and generating unit configured to generate the display image on the basis of one of the second images in a case where the number of pixels in the region is greater than a threshold.
    Type: Application
    Filed: October 24, 2019
    Publication date: April 30, 2020
    Inventors: Ken YANAGIBASHI, Hirofumi Urabe, Takashi Kimura
  • Publication number: 20200134827
    Abstract: A method for real-time semantic image segmentation using a monocular event-based sensor includes capturing a scene using a red, green, blue (RGB) sensor to obtain a plurality of RGB frames and an event sensor to obtain event data corresponding to each of the plurality of RGB frames, performing object labeling for objects in a first RGB frame among the plurality of RGB frames by identifying one or more object classes, obtaining an event velocity of the scene by fusing the event data corresponding to the first RGB frame and at least one subsequent RGB frame among the plurality of RGB frames, determining whether the event velocity is greater than a predefined threshold, and performing object labeling for objects in the at least one subsequent RGB frame based on the determination.
    Type: Application
    Filed: October 28, 2019
    Publication date: April 30, 2020
    Inventors: Sujoy SAHA, Karthik SRINIVASAN, Sourabh Singh YADAV, Suhas Shantaraja PALASAMUDRAM, Venkat Ramana PEDDIGARI, Pradeep Kumar SK, Pranav P DESHPANDE, Akankshya KAR
  • Publication number: 20200134828
    Abstract: A system and method for operating a robotic system to register unrecognized objects is disclosed. The robotic system may use first image data representative of an unrecognized object located at a start location to derive an initial minimum viable region (MVR) and to implement operations for initially displacing the unrecognized object. The robotic system may analyze second image data representative of the unrecognized object after the initial displacement operations to detect a condition representative of an accuracy of the initial MVR. The robotic system may register the initial MVR or an adjustment thereof based on the detected condition.
    Type: Application
    Filed: October 29, 2019
    Publication date: April 30, 2020
    Inventors: Rosen Nikolaev Diankov, Russell Islam, Xutao Ye
  • Publication number: 20200134829
    Abstract: Methods and systems for implementing artificial intelligence enabled metrology are disclosed. An example method includes segmenting a first image of structure into one or more classes to form an at least partially segmented image, associating at least one class of the at least partially segmented image with a second image, and performing metrology on the second image based on the association with at least one class of the at least partially segmented image.
    Type: Application
    Filed: October 30, 2018
    Publication date: April 30, 2020
    Inventors: John FLANAGAN, Brad LARSON, Thomas MILLER
  • Publication number: 20200134830
    Abstract: The present disclosure relates to methods and systems for generating a verified minimum viable range (MVR) of an object. An exposed outer corner and exposed edges of an object may be identified by processing one or more image data. An initial MVR may be generated by identifying opposing parallel edges opposing the exposed edges. The initial MVR may be adjusted, and the adjusted result may be tested to generate a verified MVR.
    Type: Application
    Filed: October 29, 2019
    Publication date: April 30, 2020
    Inventors: Jinze Yu, Jose Jeronimo Moreira Rodrigues, Rosen Nikolaev Diankov
  • Publication number: 20200134831
    Abstract: A facility for identifying the boundaries of 3-dimensional structures in 3-dimensional images is described. For each of multiple 3-dimensional images, the facility receives results of a first attempt to identify boundaries of structures in the 3-dimensional image, and causes the results of the first attempt to be presented to a person. For each of a number of 3-dimensional images, the facility receives input generated by the person providing feedback on the results of the first attempt. The facility then uses the following to train a deep-learning network to identify boundaries of 3-dimensional structures in 3-dimensional images: at least a portion of the plurality of 3-dimensional images, at least a portion of the received results, and at least a portion of provided feedback.
    Type: Application
    Filed: October 30, 2019
    Publication date: April 30, 2020
    Inventors: Jianxu Chen, Liya Ding, Matheus Palhares Viana, Susanne Marie Rafelski
  • Publication number: 20200134832
    Abstract: Provided in accordance with the present disclosure are systems for identifying a position of target tissue relative to surgical tools using a structured light detector. An exemplary system includes antennas configured to interact with a marker placed proximate target tissue inside a patient's body, a structured light pattern source, a structured light detector, a display device, and a computing device configured to receive data from the antennas indicating interaction with the marker, determine a distance between the antennas and the marker, and cause the structured light pattern source to project a pattern onto the antennas and the structured light detector to detect the pattern. The instructions may further cause the computing device to determine a pose of the antennas, determine, based on the determined distance between the antennas and the marker and the determined pose of the antennas, a position of the marker relative to the antennas, and display the position of the marker relative to the antennas.
    Type: Application
    Filed: December 31, 2019
    Publication date: April 30, 2020
    Inventor: Joe D. Sartor
  • Publication number: 20200134833
    Abstract: An apparatus and method for encoding objects in a camera-captured image with a deep neural network pipeline including multiple convolutional neural networks or convolutional layers. After identifying at least a portion of the camera-captured image, a first convolutional layer is applied to the at least the portion of the camera-captured image and multiple subregion representations are pooled from the output of the first convolutional layer. One or more additional convolutions are performed. At least one deconvolution is performed and concatenated with the output of one or more convolutions. One or more final convolutions are performed. The at least the portion of the camera-captured image is classified as an object category in response to an output of the one or more final convolutions.
    Type: Application
    Filed: October 26, 2018
    Publication date: April 30, 2020
    Inventors: Souham Biswas, Sanjay Kumar Boddhu
  • Publication number: 20200134834
    Abstract: Systems and techniques for automatic object replacement in an image include receiving an original image and a preferred image. The original image is automatically segmented into an original image foreground region and an original image object region. The preferred image is automatically segmented into a preferred image foreground region and a preferred image object region. A composite image is automatically composed by replacing the original image object region with the preferred image object region such that the composite image includes the original image foreground region and the preferred image object region. An attribute of the composite image is automatically adjusted.
    Type: Application
    Filed: October 31, 2018
    Publication date: April 30, 2020
    Inventors: I-Ming Pao, Sarah Aye Kong, Alan Lee Erickson, Kalyan Sunkavalli, Hyunghwan Byun
  • Publication number: 20200134835
    Abstract: A device, system, and method performs an image compression. The method includes receiving raw image data of an image and identifying objects in the image as one of a foreground object or a background object. The method includes generating first foreground image data for a first foreground object. The method includes generating first metadata for a first background object. The first metadata indicates a first identity and a first descriptive parameter for the first background object. The first descriptive parameter relates to how the first background object is situated in the image. The method includes generating first background image data for the first background object. The first background image data is empty data. The method includes storing processed image data for the image comprising the first foreground image data, the first metadata, and the first background image data.
    Type: Application
    Filed: October 31, 2018
    Publication date: April 30, 2020
    Inventors: Maarten Koning, Mihai Dragusu
  • Publication number: 20200134836
    Abstract: A system for detecting inactive objects includes a thermal imaging device arranged to acquire a thermal signature of an active object; and a movement detection processor arranged to process the thermal signature of the active object to monitor for any motion of the active object and, when the thermal signature indicates that the motion of the active object is below a movement threshold, determine that the active object is inactive.
    Type: Application
    Filed: December 28, 2018
    Publication date: April 30, 2020
    Inventors: Hung Kwan Chen, Chi Hung Tong, Ka Man Ng, King Hong Paros Kwan
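The inactivity rule in publication 20200134836 — declare an object inactive once its thermal signature stops changing — can be sketched as a frame-difference check. The representation of a thermal signature as a flat list of pixel temperatures per frame is an assumption for illustration:

```python
def is_inactive(thermal_frames, movement_threshold):
    """Flag an object inactive when the mean absolute frame-to-frame change
    in its thermal signature stays below a movement threshold.

    thermal_frames: list of frames, each a flat list of pixel temperatures.
    """
    for prev, curr in zip(thermal_frames, thermal_frames[1:]):
        motion = sum(abs(a - b) for a, b in zip(prev, curr)) / len(prev)
        if motion >= movement_threshold:
            return False  # measurable motion: the object is still active
    return True
```

In practice the threshold would be tuned to the sensor's noise floor so that thermal drift alone does not register as motion.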
  • Publication number: 20200134837
    Abstract: Methods, apparatus, systems and articles of manufacture to improve efficiency of object tracking in video frames are disclosed. An example apparatus includes a clusterer to cluster a map of a first video frame into blobs; a comparator to determine an intersection over union value between the blobs and bounding boxes in a second video frame; and an interface to initiate object detection by a neural network on the first video frame when the intersection over union does not satisfy a threshold.
    Type: Application
    Filed: December 19, 2019
    Publication date: April 30, 2020
    Inventors: Srenivas Varadarajan, Girish Srinivasa Murthy, Anand Bodas, Omesh Tickoo, Vallabhajosyula Somayazulu
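The intersection-over-union test at the heart of publication 20200134837 is the standard box-overlap ratio. A minimal sketch, with an assumed (x1, y1, x2, y2) corner format for both blobs and bounding boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

When the best IoU between a tracked box and the current frame's blobs falls below the threshold, the (cheaper) tracker is deemed to have drifted and the (more expensive) neural-network detector is re-run.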
  • Publication number: 20200134838
    Abstract: For each frame of a video, a determination is made whether an image of a hand exists in the frame. When at least one frame of the video includes the image of the hand, locations of the hand in the frames of the video are tracked to obtain a tracking result. A verification is performed to determine whether the tracking result is valid in a current frame of the frames of the video. When the tracking result is valid in the current frame of the video, a location of the hand is tracked in a next frame. When the tracking result is not valid in the current frame, localized hand image detection is performed on the current frame.
    Type: Application
    Filed: December 19, 2019
    Publication date: April 30, 2020
    Applicant: Alibaba Group Holding Limited
    Inventors: Zhijun Du, Nan Wang
  • Publication number: 20200134839
    Abstract: A method and system for tracking a moving object may be used in virtual or augmented reality systems, in-site logistics systems, robotics systems and control systems of unmanned moving objects. The method of tracking includes a step of automatically adjusting a tracking area by detection and registration of unique combinations of elementary optical patterns, and a step of tracking changes in position and/or orientation of a moving object by detection of unique combinations of elementary optical patterns and comparison thereof with the unique combinations of elementary optical patterns registered during the tracking area adjustment. The system for tracking a moving object comprises at least one tracker located on a moving object, the tracker including an optical sensor; at least one marker strip including active markers forming elementary optical patterns in a picture obtained from the optical sensor; and a central processing unit.
    Type: Application
    Filed: October 30, 2018
    Publication date: April 30, 2020
    Inventor: PETR VYACHESLAVOVICH SEVOSTIANOV
  • Publication number: 20200134840
    Abstract: In a moving image, a section including a plurality of consecutive frames related to a motion of the photographer of the moving image is specified as a motion section. The ratio of frames in which a specific object is detected in the motion section is obtained. A motion section to be extracted as a highlight from among the motion sections specified in the moving image is determined based on the ratio obtained for each of the motion sections.
    Type: Application
    Filed: October 29, 2019
    Publication date: April 30, 2020
    Inventor: Shinichi Mitsumoto
  • Publication number: 20200134841
    Abstract: An image processing apparatus detects a tracking target object in a moving image, executes tracking processing to track the tracking target object, determines an attribute of an object included in the moving image, specifies, when a first state in which the tracking target object is detected changes to a second state in which the tracking target object is not detected, an object, which is included in the moving image and is partially positioned in front of the tracking target object in the second state, based on a position of the tracking target object in the first state, and controls, based on the attribute of the specified object, the tracking processing performed on the tracking target object.
    Type: Application
    Filed: October 29, 2019
    Publication date: April 30, 2020
    Inventor: Takuya Toyoda
  • Publication number: 20200134842
    Abstract: The present invention relates to a method, system, and non-transitory computer-readable recording medium for calculating a motion trajectory of a subject. According to one aspect of the invention, there is provided a method for calculating a motion trajectory of a subject, the method comprising the steps of: acquiring at least three images of a subject using one imaging module; and calculating a motion trajectory of the subject with reference to the at least three acquired images, on the basis of each of at least three positions determined by a projection from a viewpoint of the imaging module to the subject on a background, and at least three virtual lines passing through a position where the imaging module is disposed.
    Type: Application
    Filed: October 29, 2019
    Publication date: April 30, 2020
    Applicant: CREATZ., INC.
    Inventors: Yong Ho SUK, Jey Ho SUK
  • Publication number: 20200134843
    Abstract: Detection, tracking and recognition on networks of digital neurosynaptic cores are provided. In various embodiments, an image sensor is configured to provide a time-series of frames. A first artificial neural network is operatively coupled to the image sensor and configured to detect a plurality of objects in the time-series of frames. A second artificial neural network is operatively coupled to the first artificial neural network and configured to classify objects detected by the first neural network and output a location and classification of said classified objects. The first and second artificial neural networks comprise one or more spike-based neurosynaptic cores.
    Type: Application
    Filed: October 31, 2018
    Publication date: April 30, 2020
    Inventors: Alexander Andreopoulos, Arnon Amir, Tapan K. Nayak
  • Publication number: 20200134844
    Abstract: An approach is provided for determining a feature correspondence between image views. The approach, for example, involves retrieving a top down image for an area of interest and determining a ground level camera pose path for the area of interest. The approach also involves selecting a portion of the top down image that corresponds to a geographic area within a distance threshold from the ground level camera pose path, and then processing the portion of the top down image to identify a semantic feature. The approach further involves determining a subset of camera poses of the ground level pose path that is within a sphere of visibility of the semantic feature, and retrieving one or more ground level images captured with the subset of camera poses. The approach further involves determining the feature correspondence of the semantic feature between the top down image and the one or more ground level images.
    Type: Application
    Filed: October 26, 2018
    Publication date: April 30, 2020
    Inventors: Anish MITTAL, Zhanwei CHEN, Joseph KURIAN
  • Publication number: 20200134845
    Abstract: An image registration method, an image registration device, and a storage medium. The image registration method includes: causing a display device to display at least one spot array; obtaining a feature image, and performing a feature-based image registration operation on the feature image to obtain at least one transformed image; and obtaining a mapping model based on the at least one transformed image. The feature image is an image which is shown on the display device and displays the at least one spot array.
    Type: Application
    Filed: July 5, 2019
    Publication date: April 30, 2020
    Inventors: Yunqi WANG, Huidong HE, Minglei CHU, Lili CHEN, Hao ZHANG
  • Publication number: 20200134846
    Abstract: This method (100) for automatic propagation into an (N+1)-th dimension of an image segmentation initialized in dimension N, N ≥ 2, comprises the acquisition (102) of an ordered series of image representations of dimension N and an initial segmentation (104) in dimension N of a region of interest in the first and last image representations of the series, to obtain first and last initial segmentation masks (M0, Mn) of the region of interest. It further comprises an estimation (106) of registration parameters (La, Ld) between the first and last initial segmentation masks, and upward and downward automatic propagations (108) of the initial segmentation, from the first and last image representations, to all the other image representations of the series by step-by-step registration up to the last and first image representations.
    Type: Application
    Filed: July 10, 2018
    Publication date: April 30, 2020
    Applicants: Universite d'Aix Marseille, Centre National de la Recherche Scientifique
    Inventors: Arnaud LE TROTER, David BEN DAHAN, Augustin OGIER
  • Publication number: 20200134847
    Abstract: In various embodiments, techniques are provided for photogrammetric 3D model reconstruction that modify the optimization performed in bundle adjustment operations of an automatic SfM stage to apply a depth-aware weighting to reprojection error of each 3D point used in the optimization. The reprojection error of each 3D point may be weighted based on a function of distance, density of a cluster, or a combination of distance and density. A loss function may be scaled to account for the weighting, and normalizations applied. Such weighting may force consideration of 3D points on an object of interest in the foreground and improve convergence of the optimization to global optima. In such manner, accurate and complete 3D models may be reconstructed of even ill-textured or very thin objects in the foreground of a scene with a highly textured background, while not consuming excessive processing and storage resources or requiring tedious workflows.
    Type: Application
    Filed: January 3, 2019
    Publication date: April 30, 2020
    Inventor: Nicolas Gros
  • Publication number: 20200134848
    Abstract: An electronic device and method are herein disclosed. The electronic device includes a first camera with a first field of view (FOV), a second camera with a second FOV that is narrower than the first FOV, and a processor configured to capture a first image with the first camera, the first image having a union FOV, capture a second image with the second camera, determine an overlapping FOV between the first image and the second image, generate a disparity estimate based on the overlapping FOV, generate a union FOV disparity estimate, and merge the union FOV disparity estimate with the overlapping FOV disparity estimate.
    Type: Application
    Filed: March 26, 2019
    Publication date: April 30, 2020
    Inventors: Mostafa El-Khamy, Xianzhi Du, Haoyu Ren, Jungwon Lee
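The final merge step in publication 20200134848 — combining the union-FOV disparity estimate with the overlapping-FOV estimate — might, in the simplest reading, prefer the overlapping estimate wherever both cameras see the scene. This sketch assumes flat per-pixel disparity lists, with None marking pixels outside the overlapping FOV; the function name and data layout are illustrative:

```python
def merge_disparity(union_disp, overlap_disp):
    """Merge two per-pixel disparity estimates.

    union_disp:   estimate covering the whole (wide) union FOV.
    overlap_disp: estimate for the overlapping FOV only; None where the
                  narrow camera does not see the pixel.
    Prefer the overlapping (stereo-derived) value where it exists and fall
    back to the union estimate elsewhere.
    """
    return [o if o is not None else u for u, o in zip(union_disp, overlap_disp)]
```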
  • Publication number: 20200134849
    Abstract: A method for obtaining depth information from a scene is disclosed wherein the method has the steps of: a) acquiring a plurality of images of the scene by means of at least one camera during a time of shot wherein the plurality of images offer at least two different views of the scene; b) for each of the images of step a), simultaneously acquiring data about the position of the images referred to a six-axis reference system; c) selecting from the images of step b) at least two images; d) rectifying the images selected on step c) thereby generating a set of rectified images; and e) generating a depth map from the rectified images. Additionally, devices for carrying out the method are disclosed.
    Type: Application
    Filed: February 6, 2017
    Publication date: April 30, 2020
    Inventors: Jorge Vicente BLASCO CLARET, Carles MONTOLIU ALVARO, Ivan VIRGILIO PERINO, Adolfo MARTINEZ USO
  • Publication number: 20200134850
    Abstract: An image selection device includes a processor configured to: input, for each of a series of images acquired from a camera mounted on a vehicle, the image to a classifier to detect a region including an object represented on the image; track the detected object over the series of images; and select, when a period in which the detected object can be tracked is equal to or more than a predetermined period, and a size of a region including the detected object in any one image during the period in which the object can be tracked is equal to or more than a predetermined size threshold value, among the series of images, an image immediately before the period in which the object can be tracked, or an image in which the tracked object is not represented during the period in which the object can be tracked.
    Type: Application
    Filed: October 21, 2019
    Publication date: April 30, 2020
    Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Satoshi TAKEYASU, Daisuke HASHIMOTO, Kota Hirano
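The selection rule in publication 20200134850 combines two conditions: the object must be trackable for at least a minimum period, and its region must reach a minimum size in at least one frame; the image immediately before the tracked period is then selected. A sketch under the assumption that a track is represented as a list of (frame_index, region_area) pairs (names and layout are illustrative):

```python
def select_untracked_frame(track, min_period, min_area):
    """Return the index of the frame immediately before a qualifying track.

    track: list of (frame_index, region_area) pairs for consecutive frames
           in which the object was tracked.
    A track qualifies when it spans at least min_period frames and the
    tracked region reaches min_area in at least one frame. Returns None
    when the track does not qualify or there is no preceding frame.
    """
    if len(track) < min_period:
        return None
    if not any(area >= min_area for _, area in track):
        return None
    first_frame = track[0][0]
    return first_frame - 1 if first_frame > 0 else None
```

Such frames, captured just before a confidently tracked object appears, are natural candidates for hard-negative training data for the on-vehicle classifier.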
  • Publication number: 20200134851
    Abstract: A location, dimension, and height of an object can be determined and measured using shadows. The object is located on a surface and an array of lights is mounted over the surface and shines on the object. The surface can be switchable between a translucent state and a transparent state. A colored shadow occurs based on the color of the light that shines on the object, where red, green, and blue are the typical colors used to provide shadows. A camera that is located below the surface captures an image of the shadows. The camera can be a color camera or a monochrome camera. The image is processed using thresholding to segment the different types of shadows that can occur. With the shadows, calculations can be made to determine the location, dimension, and height of the object.
    Type: Application
    Filed: October 25, 2018
    Publication date: April 30, 2020
    Inventor: Alexander M. McQueen
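The height calculation in publication 20200134851 reduces to similar triangles: a light at height H, the top of the object, and the tip of the object's shadow are collinear, giving h = H·s / (d + s), where d is the horizontal distance from the point beneath the light to the object's base and s is the shadow length. A sketch under that geometric assumption (variable names are illustrative):

```python
def object_height(light_height, base_distance, shadow_length):
    """Height of an object from the shadow cast by an overhead point light.

    light_height:  H, height of the light above the surface.
    base_distance: d, horizontal distance from the point directly below the
                   light to the base of the object.
    shadow_length: s, length of the shadow measured from the object's base.
    The light, object top, and shadow tip are collinear, so h = H*s/(d+s).
    """
    return light_height * shadow_length / (base_distance + shadow_length)
```

With colored lights at known positions, each color channel of the camera image yields an independent shadow, so several such measurements can be intersected to recover the object's location and footprint as well.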
  • Publication number: 20200134852
    Abstract: The present disclosure provides a threat warning system that utilizes an image sensor and an active radar to determine whether a muzzle flash is the source of a threat to a platform carrying the threat warning system. The threat warning system may be a new operation of existing or legacy components on a platform that are implemented as a software update to accomplish the new objectives of the present disclosure. Alternatively, the components described herein may be provided as a newly constructed group of components networked together to accomplish the intended functions of the present disclosure.
    Type: Application
    Filed: October 30, 2018
    Publication date: April 30, 2020
    Inventor: Michael N. Mercier
  • Publication number: 20200134853
    Abstract: An approach is provided for rendering a distance marker in an image. The approach, for example, involves determining a plurality of camera characteristics of a camera used to capture the image. The plurality of camera characteristics, for instance, can include a camera field of view, a horizon offset, a camera mounting height, a camera mounting axis, or a combination thereof. The approach also involves determining a ground plane extending to a horizon depicted in the image, a camera position with respect to the ground plane, and an image plane based on the plurality of characteristics, wherein the image plane is orthogonal to the ground plane and intersects the ground plane at a designated distance from the camera position. The approach further involves projecting a ray from the camera position through the distance marker on the ground plane to a marker position on the image plane, and rendering the distance marker in the image based on the marker position on the image plane.
    Type: Application
    Filed: October 30, 2018
    Publication date: April 30, 2020
    Inventor: Mike MILICI
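The projection step in the abstract above can be sketched with similar triangles: a camera mounted at height h over the ground sees a ground-plane marker at distance d, and the ray from the camera through the marker crosses a vertical image plane placed at distance f at height h·(1 − f/d) above the ground. A toy illustration under those assumptions (the names are mine, not the application's):

```python
def marker_height_on_image_plane(cam_height, plane_dist, marker_dist):
    """Height above ground where the ray from the camera (at (0, cam_height))
    through a ground-plane marker (at (marker_dist, 0)) crosses the vertical
    image plane erected at plane_dist from the camera."""
    return cam_height * (1.0 - plane_dist / marker_dist)
```

Note that as the marker distance grows, the intersection height approaches the camera mounting height, which is the horizon line on the image plane.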
  • Publication number: 20200134854
    Abstract: A camera is arranged on a transmitter or receiver mount configured to provide a transmitter or receiver with a field of regard. Image data of the field of regard is captured by the camera. A location of an obscuration within the field of regard is determined from the image data. A map of obscurations within the field of regard is generated based upon the image data and the location of the obscuration within the field of regard.
    Type: Application
    Filed: October 25, 2018
    Publication date: April 30, 2020
    Inventors: Sean Snoke, John Baader, Mark R. Trandel, Jaime E. Ochoa
  • Publication number: 20200134855
    Abstract: Cameras capture time-stamped images of predefined areas. Individuals and items are tracked in the images. Occluded items detected in the images are preprocessed to remove pixels associated with occluded information, and the remaining pixels associated with the items are cropped. The preprocessed and cropped images are provided to a trained machine-learning algorithm or a trained neural network trained to classify and identify the items. Output received from the trained neural network provides item identifiers for the items that are present in the original images.
    Type: Application
    Filed: October 30, 2018
    Publication date: April 30, 2020
    Inventors: Brent Vance Zucker, Adam Justin Lieberman
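The preprocess-then-crop step described above can be illustrated for a grayscale image: occluded pixels are zeroed via a mask, and the image is then cropped to the bounding box of the surviving item pixels. A sketch under those assumptions (a color image would need the axis handling adjusted):

```python
import numpy as np

def preprocess_item(image, occlusion_mask):
    """Zero out occluded pixels, then crop to the bounding box of the
    remaining nonzero (item) pixels of a 2-D grayscale image."""
    img = image.copy()
    img[occlusion_mask] = 0
    rows = np.any(img > 0, axis=1)
    cols = np.any(img > 0, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return img[r0:r1 + 1, c0:c1 + 1]
```

The cropped patch is what would then be handed to the trained classifier.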
  • Publication number: 20200134856
    Abstract: A process determines a position of an image capture device with respect to a physical object. The position corresponds to a vantage point for an initial image capture of the physical object performed by the image capture device at a first time. Further, the process generates an image corresponding to the position. In addition, the process displays the image on the image capture device. Finally, the process outputs one or more feedback indicia that direct a user to orient the image capture device to the image for a subsequent image capture at a second time within a predetermined tolerance threshold of the vantage point.
    Type: Application
    Filed: October 31, 2018
    Publication date: April 30, 2020
    Inventors: Steven Chapman, Mehul Patel, Joseph Popp, Alice Taylor
  • Publication number: 20200134857
    Abstract: Methods and apparatus for determining poses of objects acquire plural images of the objects from different points of view. The images may be obtained by plural cameras arranged in a planar array. Each image may be processed to identify features such as contours of objects. The images may be projected onto different depth planes to yield depth plane images. The depth plane images for each depth plane may be compared to identify features lying in the depth plane. A pattern matching algorithm may be performed on the features lying in the depth plane to determine the poses of one or more of the objects. The described apparatus and methods may be applied in bin-picking and other applications.
    Type: Application
    Filed: June 21, 2018
    Publication date: April 30, 2020
    Inventors: Armin KHATOONABADI, Mehdi Patrick STAPLETON
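For a planar camera array, a feature at depth Z appears shifted between neighboring views by the disparity f·b/Z (focal length f, baseline b), so projecting views onto a candidate depth plane aligns exactly those features that lie at that depth. A toy two-view sketch of picking the depth plane that best explains an observed feature pair (names and the brute-force candidate search are my simplification, not the application's method):

```python
def best_depth_plane(x_left, x_right, focal, baseline, candidate_depths):
    """Return the candidate depth whose predicted disparity
    focal * baseline / depth best matches the observed shift of a
    feature between two views of a planar camera array."""
    observed = x_left - x_right
    return min(candidate_depths,
               key=lambda z: abs(observed - focal * baseline / z))
```

In the full method, pattern matching over the features that agree in a given depth-plane image then yields object poses.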
  • Publication number: 20200134858
    Abstract: An apparatus for extracting object information according to one embodiment includes: a padding image generator for generating a padding image including an original image; a partial image acquirer for acquiring a plurality of partial images of the padding image; an object classification result acquirer for acquiring an object classification result for each of the plurality of partial images using an object classification model; a confidence map generator for generating a confidence map having a size the same as that of the padding image and including a confidence value on the basis of the object classification result; and an object information acquirer for acquiring information on an object in the padding image on the basis of the confidence map.
    Type: Application
    Filed: October 28, 2019
    Publication date: April 30, 2020
    Inventors: Hee-Sung Yang, Seong-Ho Jo, Joong-Bae Jeon, Do-Young Park
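The pipeline above can be sketched as a sliding window over the padding image: each partial image is scored by a classifier, and the scores are accumulated into a confidence map the same size as the padding image. A minimal sketch with a pluggable scoring function standing in for the object classification model:

```python
import numpy as np

def confidence_map(padded, window, stride, score_fn):
    """Slide a window over the padded image, score each partial image
    with score_fn, and average the scores of every window covering each
    pixel into a map the same size as the padded image."""
    h, w = padded.shape
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for r in range(0, h - window + 1, stride):
        for c in range(0, w - window + 1, stride):
            s = score_fn(padded[r:r + window, c:c + window])
            acc[r:r + window, c:c + window] += s
            cnt[r:r + window, c:c + window] += 1
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
```

Object information (e.g. location) is then read off the high-confidence regions of the map.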
  • Publication number: 20200134859
    Abstract: Embodiments of the present application disclose methods and apparatuses for determining a bounding box of a target object, media, and devices. The method includes: obtaining attribute information of each of a plurality of key points of a target object; and determining a bounding box position of the target object according to the attribute information of each of the plurality of key points of the target object and a preset neural network. The implementations of the present application can improve the efficiency and accuracy of determining a bounding box of a target object.
    Type: Application
    Filed: December 31, 2019
    Publication date: April 30, 2020
    Inventors: Buyu Li, Quanquan Li, Junjie Yan
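A useful baseline for comparison is the naive box: the min/max extent over the visible key points. The application's contribution is that a neural network regressing the box from key-point attributes can beat this baseline (e.g. when points are occluded or noisy). The baseline itself, for reference (the (x, y, visible) attribute layout is my assumption):

```python
def bbox_from_keypoints(keypoints):
    """Naive axis-aligned bounding box (x_min, y_min, x_max, y_max)
    from (x, y, visible) key-point attributes; only visible points count."""
    xs = [x for x, y, v in keypoints if v]
    ys = [y for x, y, v in keypoints if v]
    return (min(xs), min(ys), max(xs), max(ys))
```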
  • Publication number: 20200134860
    Abstract: A machine vision-based method and system for measuring 3D pose of a part or subassembly of parts having an unknown pose are disclosed. A number of different applications of the method and system are disclosed including applications which utilize a reprogrammable industrial automation machine such as a robot. The method includes providing a reference cloud of 3D voxels which represent a reference surface of a reference part or subassembly having a known reference pose. Using at least one 2D/3D hybrid sensor, a sample cloud of 3D voxels which represent a corresponding surface of a sample part or subassembly of the same type as the reference part or subassembly is acquired. The sample part or subassembly has an actual pose different from the reference pose. The voxels of the sample and reference clouds are processed utilizing a matching algorithm to determine the pose of the sample part or subassembly.
    Type: Application
    Filed: October 30, 2018
    Publication date: April 30, 2020
    Applicant: Liberty Reach Inc.
    Inventors: G. Neil Haven, Gary William Bartos, Michael Kallay, Fansheng Meng
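The abstract does not name its matching algorithm; cloud-to-cloud pose matching is commonly done with an iterative scheme such as ICP, which needs an initial guess. A crude but standard initialization, shown only as an illustration of the first step such a matcher might take, is to align the sample cloud's centroid with the reference cloud's centroid:

```python
import numpy as np

def initial_translation(reference_cloud, sample_cloud):
    """Crude pose initialization: the translation that moves the sample
    voxel cloud's centroid onto the reference cloud's centroid.
    Rotation and refinement are left to the full matching algorithm."""
    ref = np.asarray(reference_cloud, dtype=float)
    smp = np.asarray(sample_cloud, dtype=float)
    return ref.mean(axis=0) - smp.mean(axis=0)
```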
  • Publication number: 20200134861
    Abstract: Cameras capture time-stamped images of predefined areas. Individuals and items are tracked in the images. A time-series set of images are processed to determine actions taken by the individuals with respect to the items or to determine relationships between the individuals and the items.
    Type: Application
    Filed: October 30, 2018
    Publication date: April 30, 2020
    Inventors: Brent Vance Zucker, Adam Justin Lieberman
  • Publication number: 20200134862
    Abstract: Disclosed are systems and methods for mapping multiple views to an identity. The systems and methods may include receiving a plurality of images that depict an object. Attributes associated with the object may be extracted from the plurality of images. An identity of the object may be determined based on processing the attributes.
    Type: Application
    Filed: October 30, 2018
    Publication date: April 30, 2020
    Inventors: Brent Vance Zucker, Adam Justin Lieberman
  • Publication number: 20200134863
    Abstract: Operations of the present disclosure may include receiving a group of images taken by a camera over time in an environment. The operations may also include identifying a first position of an object in a target region of the environment in a first image of the group of images and identifying a second position of the object in a second image of the group of images. Additionally, the operations may include determining an estimated trajectory of the object based on the first position of the object and the second position of the object. The operations may further include, based on the estimated trajectory, estimating a ground position in the environment associated with a starting point of the estimated trajectory of the object. Additionally, the operations may include providing the ground position associated with the starting point of the estimated trajectory of the object for display in a graphical user interface.
    Type: Application
    Filed: October 30, 2018
    Publication date: April 30, 2020
    Inventors: Xu Jin, Xi Tao, Boyi Zhang, Batuhan Okur
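The extrapolation step above can be sketched in its simplest form: with two tracked image positions of the object, extend the line through them back to the image row corresponding to the ground to estimate where the trajectory started. A minimal linear sketch (real trajectories may warrant a ballistic or higher-order fit):

```python
def ground_start(p1, p2, ground_row):
    """Extrapolate the line through two tracked image positions
    (col, row) to the row where the trajectory meets the ground,
    returning the estimated (col, row) starting point."""
    (c1, r1), (c2, r2) = p1, p2
    t = (ground_row - r1) / (r2 - r1)
    return (c1 + t * (c2 - c1), ground_row)
```

For example, positions (2, 4) then (3, 6) with the ground at row 10 extrapolate to a start at column 5.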
  • Publication number: 20200134864
    Abstract: A method for determining a location of a vehicle, the method may include receiving reference visual information that represents multiple reference images acquired at predefined locations; acquiring, by a visual sensor of the vehicle, an acquired image of an environment of the vehicle; generating, based on the acquired image, acquired visual information related to the acquired image, wherein the acquired visual information comprises acquired static visual information that is related to the environment of the vehicle; searching for a selected reference image out of the multiple reference images, the selected reference image comprises selected reference static visual information that best matches the acquired static visual information; and determining an actual location of the vehicle based on a predefined location of the selected reference image and a relationship between the acquired static visual information and the selected reference static visual information; and wherein the determining of the actual
    Type: Application
    Filed: August 20, 2019
    Publication date: April 30, 2020
    Inventors: Igal Raichelgauz, Karina Odinaev
  • Publication number: 20200134865
    Abstract: An image processing device comprises: a result acquisition unit that acquires one or more of the input images including a target, and acquires a detection result obtained by comparing feature points of standard shape information with input-side feature points extracted from the input image; a frequency calculation unit that acquires multiple detection results in which the standard shape information and the target are placed in different positions and different postures, and calculates frequencies of detection of the input-side feature points in the input image for corresponding ones of the feature points of the standard shape information; and a feature point selection unit that selects a notable feature point from the feature points of the standard shape information on the basis of the frequencies calculated by the frequency calculation unit.
    Type: Application
    Filed: October 7, 2019
    Publication date: April 30, 2020
    Applicant: FANUC CORPORATION
    Inventors: Yuta NAMIKI, Shoutarou OGURA
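The frequency-then-select logic above can be sketched directly: count, across detection results in different positions and postures, how often each model feature point was matched, and keep the most frequently detected ones as the notable (reliable) feature points. A minimal sketch, with feature points represented by ids:

```python
from collections import Counter

def notable_feature_points(detections, top_k=1):
    """Given detection results (each an iterable of model feature-point
    ids matched in one pose), rank feature points by detection frequency
    and return the top_k most reliably detected ones."""
    freq = Counter(fp for result in detections for fp in result)
    return [fp for fp, _ in freq.most_common(top_k)]
```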
  • Publication number: 20200134866
    Abstract: A position estimation system includes one or more memories and one or more processors configured to acquire a first imaging position measured at a time of imaging a first image among a plurality of images imaged in time series, perform, based on a feature of the first image, calculation of a second imaging position of the first image, and perform, in accordance with a constraint condition that reduces a deviation between the first imaging position and the second imaging position, correction of at least one of the second imaging position or a three-dimensional position of a point included in the first image calculated based on the feature of the first image.
    Type: Application
    Filed: October 24, 2019
    Publication date: April 30, 2020
    Applicant: FUJITSU LIMITED
    Inventors: ASAKO KITAURA, Takushi Fujita
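One simple instance of a constraint that reduces the deviation between a measured and a visually computed imaging position is a weighted least-squares blend: the minimizer of w_m·|x − measured|² + w_c·|x − computed|² is their weighted average, per coordinate. The application does not spell out its correction, so this is only an illustrative stand-in:

```python
def constrained_position(measured, computed, w_measured=1.0, w_computed=1.0):
    """Blend a measured imaging position (e.g. from GNSS) with a visually
    computed one: the weighted average minimizes the weighted sum of
    squared deviations from both, coordinate by coordinate."""
    return tuple((w_measured * m + w_computed * c) / (w_measured + w_computed)
                 for m, c in zip(measured, computed))
```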
  • Publication number: 20200134867
    Abstract: There is provided a method and system for determining if a head-mounted device for extended reality (XR) is correctly positioned on a user, and optionally performing a position correction procedure if the head-mounted device is determined to be incorrectly positioned on the user. Embodiments include: performing eye tracking by estimating, based on a first image of a first eye of the user, a position of a pupil in two dimensions; determining whether the estimated position of the pupil of the first eye is within a predetermined allowable area in the first image; and, if the determined position of the pupil of the first eye is inside the predetermined allowable area, concluding that the head-mounted device is correctly positioned on the user; or, if the determined position of the pupil of the first eye is outside the predetermined allowable area, concluding that the head-mounted device is incorrectly positioned on the user.
    Type: Application
    Filed: October 25, 2019
    Publication date: April 30, 2020
    Applicant: Tobii AB
    Inventors: Joakim Zachrisson, Mikael Rosell, Carlos Pedreira, Mark Ryan, Simon Johansson
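The decision rule above reduces to a containment test: the head-mounted device counts as correctly positioned when the estimated 2-D pupil position falls inside the predetermined allowable area of the eye image. A minimal sketch, assuming a rectangular allowable area in image coordinates:

```python
def device_correctly_positioned(pupil_xy, allowable_area):
    """allowable_area = (x_min, y_min, x_max, y_max) in image coordinates.
    Returns True when the estimated pupil position lies inside it,
    i.e. the head-mounted device is correctly positioned."""
    x, y = pupil_xy
    x0, y0, x1, y1 = allowable_area
    return x0 <= x <= x1 and y0 <= y <= y1
```

A False result would trigger the optional position correction procedure.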