Patent Applications Published on May 9, 2024
-
Publication number: 20240153082
Abstract: Disclosed is a computer-implemented three-dimensional image classification system (CIS) for processing and/or analyzing non-contrast computed tomography (CT) medical imaging data. The CIS is a deep neural network containing multiple Convolutional Block Attention Module (CBAM) blocks, which contain convolutional layers for feature extraction followed by CBAMs. The CBAM applies channel attention to highlight more relevant features and spatial attention to focus on more important regions. Max pooling layers operably link adjacent pairs of CBAM blocks. The output of the final CBAM block is passed to two terminal fully connected layers to generate a diagnosis. This classification system can be used to perform efficient diagnosis of hepatocellular carcinoma using solely non-contrast CT images, with diagnostic performance comparable to that of a radiologist using the current LI-RADS system.
Type: Application
Filed: September 21, 2023
Publication date: May 9, 2024
Inventors: Chengzhi Peng, Leung Ho Philip Yu, Wan Hang Keith Chiu, Xianhua Mao, Man Fung Yuen, Wai Kay Walter Seto
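The CBAM ordering the abstract describes (channel attention to reweight features, then spatial attention to reweight locations) can be sketched in plain Python. This is a structural illustration only: the learned shared MLP and convolution inside a real CBAM are replaced here with fixed sigmoid scores over average- and max-pooled values, so it shows the data flow, not the patented network.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps):
    # feature_maps: C channels, each an H x W grid of activations.
    # Per-channel average- and max-pooling feed a score; a real CBAM
    # passes both pooled vectors through a shared learned MLP.
    out = []
    for ch in feature_maps:
        flat = [v for row in ch for v in row]
        weight = sigmoid(sum(flat) / len(flat) + max(flat))  # illustrative score
        out.append([[v * weight for v in row] for row in ch])
    return out

def spatial_attention(feature_maps):
    # Per-pixel mean and max across channels feed the spatial mask;
    # a real CBAM applies a learned convolution to those two maps.
    C, H, W = len(feature_maps), len(feature_maps[0]), len(feature_maps[0][0])
    mask = [[sigmoid(sum(fm[i][j] for fm in feature_maps) / C
                     + max(fm[i][j] for fm in feature_maps))
             for j in range(W)] for i in range(H)]
    return [[[feature_maps[c][i][j] * mask[i][j] for j in range(W)]
             for i in range(H)] for c in range(C)]

def cbam(feature_maps):
    # Channel attention first, then spatial attention, as in the abstract.
    return spatial_attention(channel_attention(feature_maps))
```

Because every attention weight is a sigmoid in (0, 1), the output keeps the input's shape while attenuating each activation.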
-
Publication number: 20240153083
Abstract: There is provided a cell quality evaluation apparatus that performs a process including estimation processing of estimating a quality of cells in the entirety of a cell-culture container by determining feature quantities from a plurality of images and calculating an average value of the feature quantities; derivation processing of deriving an estimation error of the quality estimated in the estimation processing, based on a variation of the feature quantities in the plurality of images and imaging information related to an area of a plurality of imaging regions; and imaging control processing of causing an imaging apparatus to perform re-imaging on at least one re-imaging region different from the plurality of imaging regions in a case where the estimation error is out of an allowable range.
Type: Application
Filed: December 11, 2023
Publication date: May 9, 2024
Inventor: Yasushi SHIRAISHI
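The estimation and derivation steps above reduce to averaging the feature quantities and deriving an error from their variation. In this minimal sketch the error is modeled as the standard error of the mean, which is an assumption; the patent derives it from both the variation and the imaged-area information.

```python
import math
import statistics

def estimate_quality(feature_values, tolerance):
    # feature_values: one feature quantity per imaged region.
    # Quality is the average; the estimation error is modeled here as the
    # standard error of the mean (assumption -- the patent also uses
    # imaging-area information). Re-imaging is triggered when the error
    # leaves the allowable range.
    quality = statistics.mean(feature_values)
    error = statistics.stdev(feature_values) / math.sqrt(len(feature_values))
    needs_reimaging = error > tolerance
    return quality, error, needs_reimaging
```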
-
Publication number: 20240153084
Abstract: A method for identifying an intestinal disorder in a subject, including: obtaining an image of a small intestinal region of the subject; identifying a villus structure in the image; measuring at least one feature based on the identified villus structure to obtain a metric; and identifying an intestinal disorder based on the metric.
Type: Application
Filed: March 7, 2022
Publication date: May 9, 2024
Inventors: Guillermo J. Tearney, Gududappanavar Nagarajappa Girish, David Odeke Otuya
-
Publication number: 20240153085
Abstract: A method implemented by computer means for training a decision system for segmenting medical images from a training set of annotated medical images, the segments belonging to at least one class, each annotation of the medical images including quantitative information about a number of pixels of the image that belongs to each of the classes, the method using a weakly-supervised algorithm based on a percentage of the pixels of the image belonging to a concerned class.
Type: Application
Filed: March 11, 2022
Publication date: May 9, 2024
Applicants: INSTITUT GUSTAVE ROUSSY, UNIVERSITE PARIS-SACLAY, CENTRALE SUPELEC
Inventors: Marvin LEROUSSEAU, Eric DEUTSCH
-
Publication number: 20240153086
Abstract: An imaging analysis device according to the present invention includes: an analysis execution unit (1) configured to perform analysis by a predetermined analysis method for each of a plurality of measurement points set in a measurement area on a sample and collect imaging data; a reference image acquiring unit (2) configured to acquire a reference image for the measurement area; a regression analysis executer (36) configured to, with respect to the imaging data and the reference image obtained for the same sample, perform predetermined regression analysis calculation with the imaging data as an explanatory variable and data constituting the reference image as a target variable and acquire a regression model; and a predicted image creator (37) configured to apply, to the regression model, imaging data obtained by the analysis execution unit using a sample different from the sample used in the regression analysis executer, and create a predicted image based on a pseudo regression analysis result.
Type: Application
Filed: November 14, 2019
Publication date: May 9, 2024
Applicant: SHIMADZU CORPORATION
Inventor: Shinichi YAMAGUCHI
-
Publication number: 20240153087
Abstract: Automated image analysis is used in vascular state modeling. Coronary vasculature in particular is modeled in some embodiments. Methods of “virtual revascularization” of a presently stenotic vasculature are described; these are useful, for example, as a reference in disease state determinations. The structure and uses of a model which relates records comprising acquired images or other structured data to a vascular tree representation are described.
Type: Application
Filed: October 6, 2023
Publication date: May 9, 2024
Inventors: Guy Lavi, Uri Merhav, Ifat Lavi
-
Publication number: 20240153088
Abstract: In parallel with an operation of browsing a pathological-tissue image, retrieval of similar cases from past cases using image information about the pathological-tissue image is automatically performed. An analysis apparatus of the present disclosure includes a first setting unit configured to set sample regions in an analysis target region of an image obtained by imaging of a biologically-originated sample, on the basis of an algorithm; a processing unit configured to select at least one reference image from a plurality of reference images associated with a plurality of cases, on the basis of images of the sample regions; and an output unit configured to output the selected reference image.
Type: Application
Filed: February 17, 2022
Publication date: May 9, 2024
Inventors: HIROKI DANJO, KAZUKI AISAKA, TOYA TERAMOTO, KENJI YAMANE
-
Publication number: 20240153089
Abstract: Real-time cardiac MRI images may be captured continuously across multiple cardiac phases and multiple slices. Machine learning-based techniques may be used to determine spatial (e.g., slices and/or views) and temporal (e.g., cardiac cycles and/or cardiac phases) properties of the cardiac images such that the images may be arranged into groups based on the spatial and temporal properties of the images and the requirements of a cardiac analysis task. Different groups of the cardiac MRI images may also be aligned with each other based on the timestamps of the images and/or by synthesizing additional images to fill in gaps.
Type: Application
Filed: November 7, 2022
Publication date: May 9, 2024
Applicant: Shanghai United Imaging Intelligence Co., Ltd.
Inventors: Xiao Chen, Zhang Chen, Terrence Chen, Shanhui Sun
-
Publication number: 20240153090
Abstract: The image processing device 1X includes a classification means 31X, an image selection means 33X, and a region extraction means 34X. The classification means 31X classifies each captured image acquired in time series by photographing an inspection target with a photographing unit provided in an endoscope, according to whether or not each captured image includes an attention part to be paid attention to. The image selection means 33X selects a target image to be subjected to extraction of a region of the attention part from each captured image, based on a result of the classification. The region extraction means 34X extracts the region of the attention part from the target image.
Type: Application
Filed: March 1, 2021
Publication date: May 9, 2024
Applicant: NEC Corporation
Inventor: Masahiro SAIKOU
-
Publication number: 20240153091
Abstract: Some methods comprise, for each of one or more lesions of a patient's brain, from data taken at first and second times, determining two or more lesion characteristics including a lesion's volume change and whether the lesion moved toward a center of the brain. Some methods comprise, for each lesion, determining whether one or more criteria are satisfied, including a volume-and-displacement-based criterion that is satisfied when the change in the volume of the lesion is less than zero and the lesion moved toward the brain's center, or the change in the volume of the lesion is greater than zero and the lesion did not move toward the brain's center. Some methods comprise characterizing whether the patient has multiple sclerosis and/or the progression, regression, or stability of multiple sclerosis based at least in part on the assessment of the one or more criteria for each of the lesion(s).
Type: Application
Filed: March 8, 2022
Publication date: May 9, 2024
Applicant: The Board of Regents of The University of Texas System
Inventor: Darin T. OKUDA
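The volume-and-displacement-based criterion above reduces to a small boolean check per lesion; a minimal sketch, with volume change as a signed number and displacement as a flag:

```python
def volume_displacement_criterion(volume_change, moved_toward_center):
    # Satisfied when the lesion shrank AND moved toward the brain's center,
    # or grew AND did not move toward the center.
    return ((volume_change < 0 and moved_toward_center)
            or (volume_change > 0 and not moved_toward_center))
```

Note the criterion is deliberately unsatisfied when the volume change is exactly zero, matching the "less than zero"/"greater than zero" wording.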
-
Publication number: 20240153092
Abstract: In one embodiment, a method includes accessing, by a computing device, a thermal image of a space within a cooking apparatus, wherein the space contains a food item. The method further includes segmenting, by a computing device and based on the thermal image, the food item in the space from the food item's environment.
Type: Application
Filed: November 8, 2022
Publication date: May 9, 2024
Inventors: Pedro Martinez Lopez, Brian Patton, Nigel Clarke, William Augustus Workman, Megan Rowe
-
Publication number: 20240153093
Abstract: An open-vocabulary diffusion-based panoptic segmentation system is not limited to performing segmentation using only object categories seen during training, and instead can also successfully perform segmentation of object categories not seen during training and only seen during testing and inferencing. In contrast with conventional techniques, a text-conditioned diffusion (generative) model is used to perform the segmentation. The text-conditioned diffusion model is pre-trained to generate images from text captions, including computing internal representations that provide spatially well-differentiated object features. The internal representations computed within the diffusion model comprise object masks and a semantic visual representation of the object. The semantic visual representation may be extracted from the diffusion model and used in conjunction with a text representation of a category label to classify the object.
Type: Application
Filed: May 1, 2023
Publication date: May 9, 2024
Inventors: Jiarui Xu, Shalini De Mello, Sifei Liu, Arash Vahdat, Wonmin Byeon
-
Publication number: 20240153094
Abstract: Described herein are systems, methods, and instrumentalities associated with automatically annotating a tubular structure (e.g., such as a blood vessel, a catheter, etc.) in medical images. The automatic annotation may be accomplished using a machine-learning image annotation model and based on a marking of the tubular structure created or confirmed by a user. A user interface may be provided for a user to create, modify, and/or confirm the marking, and the ML model may be trained using a training dataset that comprises marked images of the tubular structure paired with ground truth annotations of the tubular structure.
Type: Application
Filed: November 7, 2022
Publication date: May 9, 2024
Applicant: Shanghai United Imaging Intelligence Co., Ltd.
Inventors: Yikang Liu, Shanhui Sun, Terrence Chen
-
Publication number: 20240153095
Abstract: A side outer extraction method may include receiving, by at least one processor, an image, of a vehicle, preprocessed from three-dimensional (3D) data from a computer-aided design (CAD) module, detecting, using an artificial intelligence model, a classification value and a bounding box for each region, of a plurality of regions, corresponding to one of a plurality of target references of the preprocessed image, transmitting, to the CAD module, a signal indicating the classification value and the bounding box for each region of the plurality of regions, and causing extraction, by the CAD module, of the plurality of target references from the classification value and the bounding box for each region of the plurality of regions, based on the received signal.
Type: Application
Filed: August 7, 2023
Publication date: May 9, 2024
Inventors: SungHyun Park, Sang Hwan Jun, Jee-Hyong Lee, Eun-Ho Lee, Tae-Hyun Kim, Jin Sub Lee
-
Publication number: 20240153096
Abstract: Various embodiments described herein relate to a method, an apparatus, and a non-transitory machine-readable storage medium including one or more of the following: locating a 2D plane correlated with a 3D mesh representing a surface of a room; taking a virtual 2D picture of the 3D mesh along the 2D segment; within the virtual 2D picture, finding a hole; determining the vertical picture floor and the hole width; when the hole intersects the vertical picture floor and is at least as wide as a door width, classifying the hole as a door; else classifying the hole as a window.
Type: Application
Filed: November 7, 2022
Publication date: May 9, 2024
Inventor: Justin Meiners
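The door/window decision rule at the end of the abstract is a two-condition test. A minimal sketch, assuming image coordinates where y increases downward and hypothetical parameter names:

```python
def classify_hole(hole_bottom_y, hole_width, floor_y, door_width):
    # A hole that reaches down to the vertical picture floor and is at
    # least one door-width wide is classified as a door; otherwise it is
    # classified as a window.
    intersects_floor = hole_bottom_y >= floor_y
    if intersects_floor and hole_width >= door_width:
        return "door"
    return "window"
```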
-
Publication number: 20240153097
Abstract: In one aspect, an example method for generating a candidate image for use as backdrop imagery for a graphical user interface is disclosed. The method includes receiving a raw image and determining an edge image from the raw image using edge detection. The method also includes identifying a candidate region of interest (ROI) in the raw image based on the candidate ROI enclosing a portion of the edge image having edge densities exceeding a threshold edge density. The method also includes manipulating the raw image relative to a backdrop imagery canvas for a graphical user interface based on a location of the candidate ROI within the raw image. The method also includes generating, based on the manipulating, a set of candidate backdrop images in which at least a portion of the candidate ROI occupies a preselected area of the backdrop imagery canvas, and storing the set of candidate backdrop images.
Type: Application
Filed: January 18, 2024
Publication date: May 9, 2024
Inventors: Aneesh Vartakavi, Jeffrey Scott
-
Publication number: 20240153098
Abstract: A chessboard corner detection method on a camera image of the DVS-camera unified calibration board is provided. The method detects the multiple corners of the chessboard in the camera image by expanding areas whose color is opposite to that of the binarized blob, thereby eliminating the blob and separating the squares in the image; the outer edge of the chessboard, which has the same color as the binarized spots, needs to be filled with the opposite color in advance. Each corner can then be marked at the middle point of the line connecting the separated adjacent squares.
Type: Application
Filed: March 29, 2021
Publication date: May 9, 2024
Inventor: Rengao Zhou
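The final marking step is a midpoint computation: once the squares are separated, each corner lies halfway between two diagonally adjacent square centers. A minimal sketch:

```python
def mark_corner(square_a_center, square_b_center):
    # The corner is the middle point of the line connecting two
    # separated adjacent squares (given here by their centers).
    (xa, ya), (xb, yb) = square_a_center, square_b_center
    return ((xa + xb) / 2.0, (ya + yb) / 2.0)
```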
-
Publication number: 20240153099
Abstract: The invention discloses a system for image segmentation for detecting an object. More particularly, the system performs two-stage segmentation on the object in the image to generate an enhanced image. The first stage is object detection, followed by a second stage of segmentation. The invention segments the object from the background of the image to create an enhanced image.
Type: Application
Filed: November 1, 2022
Publication date: May 9, 2024
Inventors: Xibeijia Guan, Tiecheng Wu, Bo Li
-
Publication number: 20240153100
Abstract: An image foreground-background segmentation method and system based on sparse decomposition and graph Laplacian regularization are disclosed. First, an image is divided into a plurality of non-overlapping image blocks; then, a foreground-background segmentation model of the image is established according to the image blocks. The image segmentation problem is divided into several sub-problems, which are solved by iteration; after the iteration, the solutions are respectively matrixed and patched together to obtain the foreground image of the whole image. The segmentation method uses a linear combination of graph Fourier basis functions to better represent the smooth background region. In addition, graph Laplacian regularization is used to characterize the connectivity of foreground text and graphics while keeping sharp foreground text and graphics contours.
Type: Application
Filed: December 21, 2022
Publication date: May 9, 2024
Inventors: Junzheng Jiang, Tingfang Tan, Jiang Qian
-
Publication number: 20240153101
Abstract: A method for scene synthesis from human motion is described. The method includes computing three-dimensional (3D) human pose trajectories of human motion in a scene. The method also includes generating contact labels of unseen objects in the scene based on the computing of the 3D human pose trajectories. The method further includes estimating contact points between human body vertices of the 3D human pose trajectories and the contact labels of the unseen objects that are in contact with the human body vertices. The method also includes predicting object placements of the unseen objects in the scene based on the estimated contact points.
Type: Application
Filed: October 25, 2023
Publication date: May 9, 2024
Applicants: TOYOTA RESEARCH INSTITUTE, INC., THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY
Inventors: Sifan YE, Yixing WANG, Jiaman LI, Dennis PARK, C. Karen LIU, Huazhe XU, Jiajun WU
-
Publication number: 20240153102
Abstract: A method of tracking objects detected through light detection and ranging (LiDAR) points can include, when two or more objects are moved in a previous frame and classified as one object in a current frame, clustering LiDAR points in the current frame into a plurality of clusters, finding center points of the plurality of clusters in the current frame, matching center points of the two or more objects in the previous frame with the center points of the plurality of clusters in the current frame, and updating positions of the center points of the two or more objects according to the matching in the current frame.
Type: Application
Filed: November 7, 2023
Publication date: May 9, 2024
Inventors: Chang Hwan CHUN, Sung Oh PARK
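The matching step above pairs previous-frame object centers with current-frame cluster centers. The abstract does not specify the matching algorithm, so the sketch below uses greedy nearest-neighbor assignment as a simple stand-in (real trackers often use Hungarian assignment instead):

```python
def match_centers(prev_centers, curr_centers):
    # Greedily match each previous-frame center (in order) to the
    # closest still-unclaimed current-frame cluster center.
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    matches, used = {}, set()
    for i, p in enumerate(prev_centers):
        best = min((j for j in range(len(curr_centers)) if j not in used),
                   key=lambda j: dist2(p, curr_centers[j]), default=None)
        if best is not None:
            matches[i] = best
            used.add(best)
    return matches
```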
-
Publication number: 20240153103
Abstract: Tracking-based motion deblurring via coded exposure is provided. Fast object tracking is useful for a variety of applications in surveillance, autonomous vehicles, and remote sensing. In particular, there is a need to have these algorithms embedded on specialized hardware, such as field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs), to ensure energy-efficient operation while saving on latency, bandwidth, and memory access/storage. In an exemplary aspect, an object tracker is used to track motion of one or more objects in a scene captured by an image sensor. The object tracker is coupled with coded exposure of the image sensor, which modulates photodiodes in the image sensor with a known exposure function (e.g., based on the object tracking). This allows for motion blur to be encoded in a characteristic manner in image data captured by the image sensor. Then, in post-processing, deblurring is performed using a computational algorithm.
Type: Application
Filed: November 21, 2023
Publication date: May 9, 2024
Applicant: Arizona Board of Regents on Behalf of Arizona State University
Inventors: Suren Jayasuriya, Odrika Iqbal, Andreas Spanias
-
Publication number: 20240153104
Abstract: The disclosed embodiment provides an apparatus and method for measuring eye movement that can accurately measure eye movement by determining the eye position at high speed and high resolution, even with a low-cost camera, by using a phase mask and a rolling-shutter method instead of a lens, and can perform early diagnosis of neurological diseases, etc., based on the measured eye movement.
Type: Application
Filed: November 2, 2022
Publication date: May 9, 2024
Inventors: Seung Ah LEE, Tae Young KIM, Kyung Won LEE, Nak Kyu BAEK, Jae Woo JUNG
-
Publication number: 20240153105
Abstract: A method for sparse optical flow based tracking in a computer vision system is provided that includes detecting feature points in a frame captured by a monocular camera in the computer vision system to generate a plurality of detected feature points, generating a binary image indicating locations of the detected feature points with a bit value of one, wherein all other locations in the binary image have a bit value of zero, generating another binary image indicating neighborhoods of currently tracked points, wherein locations of the neighborhoods in the binary image have a bit value of zero and all other locations in the binary image have a bit value of one, and performing a binary AND of the two binary images to generate another binary image, wherein locations in the binary image having a bit value of one indicate new feature points detected in the frame.
Type: Application
Filed: January 17, 2024
Publication date: May 9, 2024
Inventors: Deepak Kumar PODDAR, Anshu JAIN, Desappan KUMAR, Pramod Kumar SWAMI
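The binary-AND step described above can be sketched directly: the detection mask is 1 at detected features, the tracked-neighborhood mask is 0 near existing tracks, so their AND keeps only features that are not near a currently tracked point.

```python
def new_feature_mask(detected, tracked_neighborhoods):
    # detected[i][j] == 1 where a feature point was found;
    # tracked_neighborhoods[i][j] == 0 inside neighborhoods of points
    # already being tracked, 1 elsewhere. The bitwise AND keeps only
    # newly detected features away from existing tracks.
    return [[d & t for d, t in zip(drow, trow)]
            for drow, trow in zip(detected, tracked_neighborhoods)]
```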
-
Publication number: 20240153106
Abstract: An object of the present invention is to generate a highly accurate trail by correcting detection box information of an object in an object tracking apparatus that generates a trail of an object within a measurement range.
Type: Application
Filed: March 7, 2022
Publication date: May 9, 2024
Applicant: HITACHI ASTEMO, LTD.
Inventors: So SASATANI, Goshi SASAKI, Trongmun JIRALERSPONG
-
Publication number: 20240153107
Abstract: Systems and methods for performing three-dimensional multi-object tracking are disclosed herein. In one example, a method includes the steps of determining a residual based on augmented current frame detection bounding boxes, augmented previous frame detection bounding boxes, augmented current frame shape descriptors, and augmented previous frame shape descriptors and predicting an affinity matrix using the residual. The residual indicates a spatiotemporal and shape similarity between current detections in a current frame point cloud data and previous detections in a previous frame point cloud data. The affinity matrix indicates associations between the previous detections and the current detections, as well as the augmented anchors.
Type: Application
Filed: May 10, 2023
Publication date: May 9, 2024
Applicants: Toyota Research Institute, Inc., The Board of Trustees of the Leland Stanford Junior University, Toyota Jidosha Kabushiki Kaisha
Inventors: Jie Li, Rares A. Ambrus, Taraneh Sadjadpour, Christin Jeannette Bohg
-
Publication number: 20240153108
Abstract: An image processing apparatus includes a tracking target setter that sets a tracking target in a first frame of a video, a first feature tracker that tracks the tracking target in a second frame based on a first feature of the tracking target set in the first frame, a second feature tracker that tracks the tracking target in the second frame based on a second feature of the tracking target set in the first frame, a tracking manager that mixes a tracking result obtained by the first feature tracker with a tracking result obtained by the second feature tracker at a predetermined mixing ratio, and an output unit that outputs a detection position of the tracking target in the second frame based on a mixing result obtained by the tracking manager.
Type: Application
Filed: December 20, 2021
Publication date: May 9, 2024
Inventor: Tatsuki SAWADA
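The tracking manager's mixing step amounts to a weighted blend of the two trackers' position estimates. A minimal sketch, assuming the ratio is the weight given to the first tracker (the abstract does not define the ratio's orientation):

```python
def mix_tracking(pos_a, pos_b, ratio):
    # Blend two trackers' (x, y) estimates at a predetermined mixing
    # ratio; ratio is the weight of tracker A (assumption).
    return tuple(ratio * a + (1.0 - ratio) * b for a, b in zip(pos_a, pos_b))
```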
-
Publication number: 20240153109
Abstract: Disclosed herein are systems and methods for image-based tracking of an object. In some embodiments, a method comprises receiving frames captured with a video camera. In some embodiments, the method comprises identifying, using a model, foreground pixels in a frame captured with the video camera, the foreground pixels corresponding to an identified foreground object. In some embodiments, the method comprises tracking the foreground object using the model.
Type: Application
Filed: March 22, 2022
Publication date: May 9, 2024
Applicant: Angarak, Inc.
Inventors: Sai Akhil Reddy KONAKALLA, Satya Abhiram THELI, Dominique E. MEYER
-
Publication number: 20240153110
Abstract: The present disclosure relates to a target tracking method, apparatus, device and a medium. The target tracking method includes: acquiring a target video, where the target video includes a first image frame and a first subsequent image frame sequence after and adjacent to the first image frame; performing polygon detection on the first image frame to obtain each vertex of a target polygon; and tracking each vertex of the target polygon in the first subsequent image frame sequence according to a first vertex position of each vertex of the target polygon in the first image frame. According to embodiments of the present disclosure, real-time performance and accuracy of tracking the target polygon can be improved.
Type: Application
Filed: March 11, 2022
Publication date: May 9, 2024
Inventors: Hengkai GUO, Sicong DU
-
Publication number: 20240153111
Abstract: A computer-implemented technique for determining a surface registration between a first soft tissue surface defined based on mechanically acquired first surface data and a second soft tissue surface defined based on image data is provided. A method implementation of the technique includes obtaining the first surface data. The first surface data include a first set of points mechanically acquired by contacting the soft tissue with a pointing device. The method also includes applying a correction model on the first surface data to obtain corrected first surface data. The correction model is configured to shift relative positions of two or more points in the first set. Further still, the method includes determining a surface registration between the first and the second soft tissue surfaces based at least in part on the corrected first surface data.
Type: Application
Filed: November 7, 2023
Publication date: May 9, 2024
Applicant: Stryker European Operations Limited
Inventors: Marc Kaeseberg, Christian Winne
-
Publication number: 20240153112
Abstract: A specimen image registration method according to the invention includes selecting at least one third image, different from a first image and a second image which are target images, from the plurality of specimen images; obtaining a registration amount between the first image and the third image and a registration amount between the second image and the third image; and obtaining a registration amount between the first image and the second image based on the registration amount between the first image and the third image and the registration amount between the second image and the third image. It is possible to improve the success rate of registration even if the specimen images, obtained by a plurality of types of staining, are largely different in stained state.
Type: Application
Filed: March 9, 2022
Publication date: May 9, 2024
Inventors: Maki HIRAI, Hiroshi OGI, Tomoyasu FURUTA
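The composition step above, under a simple translational registration model (an assumption; the patent's registration amounts may include rotation or scale), works out to a difference of shifts: if image 1 maps to image 3 by shift r13 and image 2 maps to image 3 by r23, then image 1 maps to image 2 by r13 − r23.

```python
def register_via_reference(shift_1_to_3, shift_2_to_3):
    # x3 = x1 + r13 and x3 = x2 + r23  =>  x2 = x1 + (r13 - r23),
    # so the 1-to-2 registration amount is the componentwise difference.
    return tuple(a - b for a, b in zip(shift_1_to_3, shift_2_to_3))
```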
-
Publication number: 20240153113
Abstract: A medical system comprises an elongate device, an elongate sheath configured to extend within the elongate device, and an imaging probe configured to extend within the elongate sheath. The elongate sheath includes an identification feature. The medical system further comprises a control system configured to receive imaging data from the imaging probe. The imaging data is captured by the imaging probe. The control system is further configured to analyze the imaging data to identify an appearance of the identification feature within the imaging data. The control system is further configured to, based on the appearance of the identification feature, register the imaging data to a reference frame of the elongate device.
Type: Application
Filed: March 9, 2022
Publication date: May 9, 2024
Inventors: Lucas S. Gordon, Julie Walker, Troy K. Adebar, Benjamin G. Cohn, Randall L. Schlesinger, Worth B. Walters
-
Publication number: 20240153114
Abstract: A method is provided of obtaining body depth information for a patient who is lying on a patient support. A patient support depth map of the upper surface of the patient support is obtained without the patient, as well as an image and a patient depth map of the patient on the patient support. Landmark body positions of the patient are extracted from the image so that points in a region of interest can be mapped to points of a template, using the identified landmark body positions. A body thickness is obtained for said points using the template, the depth value for the respective point, and the patient support depth value for the respective point.
Type: Application
Filed: February 23, 2022
Publication date: May 9, 2024
Inventors: LENA CHRISTINA FRERKING, JULIEN THOMAS SENEGAS, DANIEL BYSTROV
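At a single point, the two depth maps give thickness by subtraction: with an overhead depth camera (an assumption of this sketch), the support surface lies farther from the camera than the patient's skin, so the difference of the two depth values approximates body thickness at that point.

```python
def body_thickness(patient_depth, support_depth):
    # Depths measured from an overhead depth camera (assumption):
    # support surface is behind the patient's skin, so the thickness at
    # a point is the support depth minus the patient depth.
    return support_depth - patient_depth
```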
-
Publication number: 20240153115
Abstract: Retrographic sensors described herein may provide smaller sensors capable of high-resolution three-dimensional reconstruction of an object in contact with the sensor. Such sensors may be used by robots for work in narrow environments, fine manipulation tasks, and other applications. To provide a smaller sensor, a reduced number of light sources may be provided in the sensor in some embodiments. For example, three light sources, two light sources, or one light source, may be used in some sensors. When fewer light sources are provided, full color gradient information may not be provided. Instead, the missing gradients in one direction or other information related to a three-dimensional object in contact with the sensor may be determined using gradients in a different direction that were provided by the real data. This may be done using a trained statistical model, such as a neural network, in some embodiments.
Type: Application
Filed: January 6, 2022
Publication date: May 9, 2024
Applicant: Massachusetts Institute of Technology
Inventors: Shaoxiong Wang, Branden Romero, Yu She, Edward Adelson
-
Publication number: 20240153116
Abstract: Provided are computing systems, methods, and platforms for using machine-learned models to generate a depth map. The operations can include projecting, using a dot illuminator, near-infrared (NIR) dots on the scene. The NIR dots can have a uniform pattern. Additionally, the operations can include capturing, using a single NIR camera, the projected NIR dots on the scene. Moreover, the operations can include generating a dot image based on the captured NIR dots on the scene. Furthermore, the operations can include processing the dot image with a machine-learned model to generate a depth map of the scene. Subsequently, the operations can further include evaluating the generated depth map of the scene and a ground truth depth map, and performing an action based on the evaluation.
Type: Application
Filed: November 7, 2022
Publication date: May 9, 2024
Inventors: Kuntal Sengupta, Adarsh Kowdle, Andrey Zhmoginov, Hart Levy
-
Publication number: 20240153117
Abstract: The present description concerns a system for determining a depth image of a scene, configured to project a spot pattern onto the scene and acquire an image of the scene; determining I and Q values of the image pixels; determining, for each pixel, at least one confidence value to form a confidence image; determining the local maximum points of the confidence image having a confidence value greater than a first threshold; selecting, for each local maximum point, pixels around the local maximum point having a confidence value greater than a second threshold; determining a value Imoy equal to the average of the I values of the selected pixels and a value Qmoy equal to the average of the Q values of the selected pixels; and determining the depth of the local maximum point based on values Imoy and Qmoy.
Type: Application
Filed: October 17, 2023
Publication date: May 9, 2024
Inventors: Jeremie Teyssier, Cedric Tubert, Thibault Augey, Valentin Rebiere, Thomas Bouchet
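The final step, recovering depth from the averaged I and Q values, can be sketched with the standard indirect time-of-flight formula: the phase of the (Imoy, Qmoy) pair converts to distance via the modulation frequency. The phase-to-depth conversion is the conventional iToF relation, assumed here since the abstract does not spell it out.

```python
import math

def depth_from_iq(i_vals, q_vals, mod_freq_hz):
    # Average I and Q over the selected pixels (Imoy, Qmoy), recover the
    # demodulation phase, and convert phase to distance with the standard
    # iToF relation d = c * phase / (4 * pi * f_mod) (assumption).
    c = 299_792_458.0
    i_moy = sum(i_vals) / len(i_vals)
    q_moy = sum(q_vals) / len(q_vals)
    phase = math.atan2(q_moy, i_moy) % (2 * math.pi)
    return c * phase / (4 * math.pi * mod_freq_hz)
```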
-
Publication number: 20240153118
Abstract: A method for estimating a depth map associated with a hologram representing a scene, the method including the steps of: reconstruction of images of the scene, each image being associated with a depth; decomposition of each image into a plurality of thumbnails adjacent to each other, each thumbnail being associated with the depth and including a plurality of pixels; determination, for each thumbnail, of a focus map by supplying, at the input of a neural network, values associated with the pixels of the thumbnail, to obtain, at the output of the network, the focus map including a focus level associated with each pixel concerned; and determination of a depth value, for each point of a depth map, as a function of the focus levels obtained. The invention also relates to an estimation device and an associated computer program.
Type: Application
Filed: November 1, 2023
Publication date: May 9, 2024
Inventors: Nabil MADALI, Antonin GILLES, Patrick GIOIA, Luce MORIN
-
Publication number: 20240153119Abstract: Provided is a system configured to obtain a set of images via a camera of the computing device, input the set of images into a neural network, and detect a target physical object with the neural network. The system may determine a contour of the target physical object and a first three-dimensional reconstruction of the target physical object. The system may generate a virtual representation and a virtual object based on attributes of the virtual representation, where a first attribute of the set of attributes includes the first three-dimensional reconstruction. The system may associate the virtual object with the virtual representation and display the virtual object at pixel coordinates of a display that at least partially occlude at least part of the target physical object, where a position of the virtual object is computed based on the contour.Type: ApplicationFiled: January 12, 2024Publication date: May 9, 2024Inventor: Andrew Thomas Busey
-
Publication number: 20240153120Abstract: The present invention relates to a method to determine the depth of a scene (I) from at least one digital image (R, T) of said scene (I), comprising the following steps: A. acquiring said at least one digital image (R, T) of said scene (I); B. calculating a first disparity map (DM1) from said at least one digital image (R, T), wherein said first disparity map (DM1) consists of a matrix of pixels pij with i=1, …, M and j=1, …, N, where i and j indicate respectively the line and column index of said first disparity map (DM1), and M and N are positive integers; C. calculating a second disparity map (DM2), by a neural network, from said at least one digital image (R, T); D. selecting a plurality of sparse depth data Sij relative to the respective pixel pij of said first disparity map (DM1); E. extracting said plurality of sparse depth data Sij from said first disparity map (DM1); and F.Type: ApplicationFiled: November 19, 2020Publication date: May 9, 2024Applicant: Alma Mater Studiorum - Università di BolognaInventors: Matteo POGGI, Fabio TOSI, Stefano MATTOCCIA, Luigi DI STEFANO, Alessio TONIONI
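Steps D and E, selecting and extracting sparse depth data Sij from DM1, can be sketched as below. The regular-grid selection criterion is an assumption for illustration only; the abstract does not state how the sparse pixels are chosen.

```python
import numpy as np

def select_sparse_depth(dm1, stride=4):
    """Return {(i, j): dm1[i, j]} for a sparse regular grid of pixels of DM1.

    dm1: the first (e.g. classical stereo) disparity map, an HxW array.
    stride: assumed grid spacing controlling how sparse the selection is.
    """
    H, W = dm1.shape
    return {(i, j): float(dm1[i, j])
            for i in range(0, H, stride)
            for j in range(0, W, stride)}
```

In a fuller pipeline these sparse values would typically serve as anchors against the neural disparity map DM2, but the abstract is truncated before that step.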
-
Publication number: 20240153121Abstract: According to an aspect, a real-time active stereo system includes a capture system configured to capture stereo image data, the stereo image data including reference images and secondary images, and a depth sensing computing system configured to generate a depth map, the depth sensing computing system configured to: compute descriptors based on the reference images and the secondary images; compute a stability penalty based on pixel change information and disparity change information; and evaluate a plurality of plane hypotheses for a group of pixels using the descriptors, including computing a matching cost between the descriptors associated with each plane hypothesis, updating the matching cost with the stability penalty, and selecting a plane hypothesis from the plurality of plane hypotheses for the group of pixels based on the updated matching cost.Type: ApplicationFiled: March 3, 2021Publication date: May 9, 2024Inventors: Harris Nover, Supreeth Achar, Kira Prabhu, Vineet Bhatawadekar
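The cost update described above can be illustrated with a toy scalar version. The L1 matching cost, the penalty form, and the weight `w` are all assumptions here, not the patented formulas: the idea is that the penalty discourages disparity changes at pixels whose appearance has not changed.

```python
def updated_cost(desc_ref, desc_sec, pixel_change, disparity_change, w=1.0):
    """Toy sketch: matching cost between two descriptors under a plane
    hypothesis, biased by a stability penalty when the disparity changes
    but the pixel does not (a hypothesized form of the penalty)."""
    matching = sum(abs(a - b) for a, b in zip(desc_ref, desc_sec))
    penalty = w * abs(disparity_change) if pixel_change < 1e-3 else 0.0
    return matching + penalty
```

The hypothesis with the lowest updated cost would then be selected for the pixel group.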
-
Publication number: 20240153122Abstract: A binocular vision-based environment sensing method and apparatus, is applied to an unmanned aerial vehicle. The unmanned aerial vehicle is provided with five binocular cameras. The first binocular camera is disposed at the front portion of the fuselage of the unmanned aerial vehicle. The second binocular camera is inclined upward and disposed between the left side of the fuselage and the upper portion of the fuselage of the unmanned aerial vehicle. The third binocular camera is inclined upward and disposed between the right side of the fuselage and the upper portion of the fuselage of the unmanned aerial vehicle. The fourth binocular camera is disposed at the lower portion of the fuselage of the unmanned aerial vehicle. The fifth binocular camera is disposed at the rear portion of the fuselage of the unmanned aerial vehicle. The method can simplify an omni-directional sensing system while reducing the sensing blind area.Type: ApplicationFiled: March 3, 2023Publication date: May 9, 2024Inventor: Xin ZHENG
-
Publication number: 20240153123Abstract: The present invention discloses an isogeometric analysis method based on a geometric reconstruction model, comprising steps of: dividing boundaries of a CAD model into triangular patches, or proceeding directly based on point cloud data; generating a closed regular embedded domain; dividing the embedded domain into regular sub-domains; dividing the elements into trimmed elements and untrimmed elements according to a positional relationship between the boundaries of the triangular patches/points and the elements; calculating a minimum directional distance from each vertex of a trimmed element to a triangular patch near the trimmed element; using the minimum directional distance to divide the untrimmed elements into real and virtual elements; using the minimum directional distance as a level set function value, and using a marching tetrahedra algorithm to reconstruct a displayed geometric model based on the regular embedded domain; after obtaining the level set function value, calculating the level set function value of a GType: ApplicationFiled: November 24, 2020Publication date: May 9, 2024Inventors: Yingjun WANG, Jinghui LI, Zhencong LI, Jiancheng ZHANG, Nan WANG
-
Publication number: 20240153124Abstract: Provided are a method and an apparatus for measuring an object quantity using a 2D image. A method for measuring an object quantity using a 2D image according to one embodiment of the present disclosure comprises learning an object region model in a first object image which is a pre-learning target, learning an object quantity model for a first object region in the first object image using a feature map extracted from the first object image, and measuring an object quantity of at least one of a second object region and a background region in a second object image which is a measurement target using the learned object region model and object quantity model.Type: ApplicationFiled: December 28, 2022Publication date: May 9, 2024Applicant: NUVI LABS CO., LTD.Inventors: Dae Hoon KIM, Jey Yoon RU, Seung Woo JI
-
Publication number: 20240153125Abstract: A system and method for positioning a viewing device relative to a display by leveraging the capability of a user device, such as a smartphone. An image of the display is captured using a camera and a positioning UI is superimposed over the image to indicate positioning.Type: ApplicationFiled: November 9, 2022Publication date: May 9, 2024Inventors: Joseph McCraw, Francis Bato
-
Publication number: 20240153126Abstract: In some implementations, a device may receive the image of the document, the image of the document depicting the reference feature, and the reference feature being associated with one or more location parameters for a document type associated with the document. The device may detect a location of the reference feature as depicted in the image, the location being defined by bounds of the reference feature as depicted in the image. The device may detect a border of the document as depicted in the image based on identifying one or more edges of the document based on the bounds of the reference feature and the one or more location parameters. The device may modify the image of the document based on the border of the document to obtain a cropped image. The device may transmit, to a server device, the cropped image.Type: ApplicationFiled: November 8, 2022Publication date: May 9, 2024Inventors: Jason PRIBBLE, Swapnil PATIL, Alexandria MCDONALD
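A hedged sketch of the border-detection-and-crop step: the detected feature bounds plus the document type's location parameters (per-edge offsets, here expressed in multiples of the feature's height/width, an assumed encoding) give the document border, and the image is cropped to that border.

```python
import numpy as np

def crop_document(image, feature_box, params):
    """image: HxW array; feature_box: (top, left, bottom, right) bounds of the
    detected reference feature; params: assumed per-edge offsets, in multiples
    of the feature's size, from the feature bounds to the document edges."""
    t, l, b, r = feature_box
    fh, fw = b - t, r - l                    # feature height and width
    top    = max(0, int(t - params["up"] * fh))
    left   = max(0, int(l - params["left"] * fw))
    bottom = min(image.shape[0], int(b + params["down"] * fh))
    right  = min(image.shape[1], int(r + params["right"] * fw))
    return image[top:bottom, left:right]     # the cropped document image
```

Clamping to the image bounds keeps the crop valid when the document runs off the edge of the photo.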
-
Publication number: 20240153127Abstract: A pose tracking system for continuously determining a pose of a digital video camera while filming a scene at a set of a film or TV production, wherein the system comprises a pose tracking device that comprises 2D cameras configured to provide 2D image data of an environment and at least one time-of-flight camera comprising a sensor array and one or more laser emitters, wherein the pose tracking device is attached to or configured to be attached to the digital video camera so that the ToF camera is oriented to capture 3D point-cloud data of the scene filmed by the digital video camera, wherein the pose tracking device comprises a localization unit.Type: ApplicationFiled: November 7, 2023Publication date: May 9, 2024Applicant: HEXAGON GEOSYSTEMS SERVICES AGInventors: Ralph HARTI, Matthias WIESER, Lukas HEINZLE, Roman STEFFEN, Burkhard BÖCKEM, Axel MURGUET, Garance BRUNEAU, Pascal STRUPLER
-
Publication number: 20240153128Abstract: A method of detecting a collision of objects, an electronic device and a storage medium, which may be applied to a field of electronic technology, in particular to a field of intelligent transportation and map navigation technology. The method includes: acquiring attribute information of each of two objects to be detected, including a rotation position of the object to be detected relative to a predetermined three-dimensional space and size information of the object to be detected projected onto a predetermined two-dimensional plane; determining size information of a bounding box for each object according to the size information of each object, where a center of the bounding box is located at the rotation position of the object to be detected; and determining a collision result according to the rotation positions of the two objects and the size information of the bounding boxes for the two objects.Type: ApplicationFiled: November 11, 2021Publication date: May 9, 2024Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.Inventor: Da QU
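The final determination step reduces to a box-overlap test. A minimal axis-aligned version is sketched below; the patent's boxes are centred at each object's rotation position, and axis alignment is a simplifying assumption here.

```python
def boxes_collide(center_a, size_a, center_b, size_b):
    """Axis-aligned overlap test: two boxes collide when, on every axis,
    the distance between centres is less than half the summed sizes.

    center_*: (x, y) box centre; size_*: (width, height) of the box."""
    return all(
        abs(ca - cb) * 2 < sa + sb
        for ca, cb, sa, sb in zip(center_a, center_b, size_a, size_b)
    )
```

The test generalizes to three dimensions unchanged by passing (x, y, z) centres and (w, h, d) sizes.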
-
Publication number: 20240153129Abstract: Disclosed are a vehicle and a vehicle position calculation method. The vehicle includes an image sensor.Type: ApplicationFiled: June 27, 2023Publication date: May 9, 2024Applicants: Hyundai Motor Company, KIA CORPORATIONInventors: Jihee Han, Junsik An, Jungphil Kwon
-
Publication number: 20240153130Abstract: An apparatus includes one or more processors configured to generate a plurality of feature maps having respective different resolutions based on an input image; and update, for each of a plurality of transformer layers, respective position estimation information comprising first position information of a respective bounding box corresponding to one object query and second position information of respective key points corresponding to the one object query, wherein each of the plurality of transformer layers includes a self-attention model configured to generate respective intermediate data by performing self-attention on respective content information on a feature of the input image; and a cross-attention model configured to generate respective output data by performing cross-attention on respective one or more feature maps among the plurality of feature maps and the respective generated intermediate data.Type: ApplicationFiled: June 28, 2023Publication date: May 9, 2024Applicant: Samsung Electronics Co., Ltd.Inventors: Hyeongseok SON, Seungin PARK, Byung In YOO, Dongwook LEE, Solae LEE, Sangil JUNG
-
Publication number: 20240153131Abstract: Techniques for aligning images generated by an integrated camera physically mounted to an HMD with images generated by a detached camera physically unmounted from the HMD are disclosed. A 3D feature map is generated and shared with the detached camera. Both the integrated camera and the detached camera use the 3D feature map to relocalize themselves and to determine their respective 6 DOF poses. The HMD receives the detached camera's image of the environment and the 6 DOF pose of the detached camera. A depth map of the environment is accessed. An overlaid image is generated by reprojecting a perspective of the detached camera's image to align with a perspective of the integrated camera and by overlaying the reprojected detached camera's image onto the integrated camera's image.Type: ApplicationFiled: January 17, 2024Publication date: May 9, 2024Inventors: Raymond Kirk PRICE, Michael BLEYER, Christopher Douglas EDMONDS
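The reprojection step can be sketched per pixel with a pinhole model. The 3x3 intrinsics `K_*` and 4x4 camera-to-world pose matrices are assumed inputs; the patent reprojects whole images using the shared depth map, but the per-pixel geometry is the same.

```python
import numpy as np

def reproject_pixel(px, K_det, pose_det, K_hmd, pose_hmd, depth):
    """Map a pixel from the detached camera's image into the integrated
    (HMD) camera's image, given the depth along the detached camera's ray.

    px: (u, v) pixel in the detached image; pose_*: 4x4 camera-to-world
    matrices; K_*: 3x3 intrinsics; depth: z-distance in the detached frame."""
    u, v = px
    ray = np.linalg.inv(K_det) @ np.array([u, v, 1.0])  # back-project pixel
    pt_cam = ray * depth                                # 3D point, detached frame
    pt_world = pose_det @ np.append(pt_cam, 1.0)        # to world frame
    pt_hmd = np.linalg.inv(pose_hmd) @ pt_world         # to HMD camera frame
    uvw = K_hmd @ pt_hmd[:3]                            # project into HMD image
    return uvw[:2] / uvw[2]
```

With both cameras relocalized against the same 3D feature map, their poses live in a common world frame, which is what makes this warp well defined.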