Patent Applications Published on September 12, 2024
-
Publication number: 20240303808
Abstract: Method for automatic identification of segmented regions of a heart, the method being executed by a control unit and including the steps of: acquiring a heart mesh that is a 3D graphical representation of the heart, including a left ventricle, a right ventricle, a heart apex and a heart base; determining a heart base plane corresponding to the heart base; determining, based on the heart base and the heart apex, a left ventricular axis extending across the left ventricle, from the heart apex to the heart base; using the heart base plane and the left ventricular axis to identify segmented regions indicative of the left ventricle and the right ventricle, each segmented region being a respective portion of the heart mesh satisfying a respective first criterion about a distance range from the heart base plane and a respective second criterion about a circumferential angular range about the left ventricular axis.
Type: Application
Filed: January 30, 2024
Publication date: September 12, 2024
Applicant: XSPLINE S.P.A.
Inventors: Werner Rainer, Anastasiia Bazhutina, Mikhail Chmelevsky Petrovich, Stepan Zubarev, Margarita Budanova
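The two criteria in this abstract (distance from the base plane, circumferential angle about the LV axis) can be sketched as a vertex classifier. This is a minimal numpy illustration under assumed inputs (vertex array, a point and normal defining the base plane, the apex position), not the patent's actual implementation:

```python
import numpy as np

def classify_vertices(vertices, base_point, base_normal, apex,
                      dist_range, angle_range):
    """Label mesh vertices satisfying (1) a distance-from-base-plane
    criterion and (2) a circumferential-angle criterion about the LV axis.

    vertices:    (N, 3) mesh vertex coordinates
    base_point:  a point on the heart-base plane
    base_normal: normal of the base plane
    apex:        heart apex; the LV axis runs from apex to base
    dist_range:  (lo, hi) admissible distance from the base plane
    angle_range: (lo, hi) admissible azimuth in radians, within [0, 2*pi]
    """
    base_normal = base_normal / np.linalg.norm(base_normal)
    axis = base_point - apex
    axis = axis / np.linalg.norm(axis)

    # First criterion: distance from the heart base plane.
    dist = np.abs((vertices - base_point) @ base_normal)
    in_dist = (dist >= dist_range[0]) & (dist <= dist_range[1])

    # Second criterion: circumferential angle about the LV axis.
    rel = vertices - apex
    rel_perp = rel - np.outer(rel @ axis, axis)      # project off the axis
    u = np.cross(axis, [1.0, 0.0, 0.0])              # frame perpendicular to axis
    if np.linalg.norm(u) < 1e-8:                     # axis nearly parallel to x
        u = np.cross(axis, [0.0, 1.0, 0.0])
    u = u / np.linalg.norm(u)
    v = np.cross(axis, u)
    angle = np.arctan2(rel_perp @ v, rel_perp @ u) % (2 * np.pi)
    in_angle = (angle >= angle_range[0]) & (angle <= angle_range[1])

    return in_dist & in_angle
```

Each named segmented region of the mesh would correspond to one `(dist_range, angle_range)` pair.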
-
Publication number: 20240303809
Abstract: The present teachings relate to a method for transforming a digital 3D dental model to a 2D dental image, the method including: providing at least two selected images from the 3D dental model; converting each of the selected images to a respective color component of a color model; and obtaining, by combining the respective color components, the 2D image in a computer-readable format of the color model. The present teachings also relate to a method of classification of the 2D image, a dental procedure assisting system, uses and computer software products.
Type: Application
Filed: May 17, 2022
Publication date: September 12, 2024
Applicant: DENTSPLY SIRONA Inc.
Inventors: Hans-Christian SCHNEIDER, David SCHADER, Helmut SEIBERT
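The combining step amounts to packing single-channel renderings into the components of a color model. A minimal sketch assuming an RGB color model and grayscale uint8 views (both assumptions; the abstract leaves the color model open):

```python
import numpy as np

def fuse_to_rgb(*views):
    """Combine selected single-channel renderings of a 3D dental model
    into one 2D color image by assigning each rendering to a color
    component (R, G, B). Missing channels are zero-filled, so passing
    two selected images also works, per the 'at least two' wording."""
    h, w = views[0].shape
    channels = [np.asarray(v, dtype=np.uint8) for v in views[:3]]
    while len(channels) < 3:
        channels.append(np.zeros((h, w), dtype=np.uint8))
    return np.stack(channels, axis=-1)   # (H, W, 3) computer-readable image
```

The resulting array can be fed to any image classifier that expects ordinary color input, which is the point of the conversion.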
-
Publication number: 20240303810
Abstract: An information processing apparatus including at least one processor, wherein the processor is configured to: acquire a first medical image of an examinee associated with a first reference position specified based on a physical feature of the examinee included in a first optical image obtained by optically imaging the examinee; acquire a second medical image of the examinee associated with the first reference position or a second reference position indicating a position substantially the same as the first reference position; and output a result of associating the first medical image with the second medical image based on the first reference position and the second reference position.
Type: Application
Filed: March 5, 2024
Publication date: September 12, 2024
Applicant: FUJIFILM Corporation
Inventors: Hisatsugu HORIUCHI, Seiki MORITA, Takashi MIZOGUCHI, Sachie WADA
-
Publication number: 20240303811
Abstract: First, a biological sample is imaged, and a photographic image in which intensity values are distributed is acquired. After that, a localization region corresponding to an unusual part is extracted from the photographic image. At that time, a region of the photographic image whose intensity values satisfy a predetermined requirement is extracted as the localization region. Alternatively, the photographic image is input to a trained model created in advance, and a localization region output from the trained model is obtained. In this manner, the localization region corresponding to the unusual part can be extracted from the photographic image of the biological sample. This enables noninvasive observation of the unusual part of the biological sample without processing a cell by staining or the like.
Type: Application
Filed: March 7, 2024
Publication date: September 12, 2024
Inventor: Ryo HASEBE
-
Publication number: 20240303812
Abstract: According to an evaluation method, first, an image of a biological sample is captured, and an image in which intensity values are distributed is acquired. After that, a localization region corresponding to a fibrotic region is extracted from the captured image. At that time, a region of the captured image whose intensity values satisfy a predetermined requirement is extracted as the localization region. Alternatively, the captured image is input to a trained model created in advance, and a localization region output from the trained model is obtained. This makes it possible to noninvasively observe the fibrotic region of the biological sample and to evaluate the condition of the biological sample without processing cells by staining or the like.
Type: Application
Filed: March 8, 2024
Publication date: September 12, 2024
Inventors: Ryo HASEBE, Toshihiko Maekawa, Ayu Inoue
-
Publication number: 20240303813
Abstract: In an evaluation method, first, spheroids obtained by three-dimensional culture of multiple kinds of liver-derived cells are imaged by optical coherence photography, a localization region is extracted from the photographic image, the localization region is analyzed (analysis step), and the condition of the spheroid is evaluated. The analysis step includes a first calculation step of calculating the area of the entire spheroid in the photographic image, a second calculation step of calculating the area of the localization region, a third calculation step of calculating the ratio of the localization region on the basis of the two areas, and a fourth calculation step of calculating an evaluation parameter on the basis of the ratio.
Type: Application
Filed: March 8, 2024
Publication date: September 12, 2024
Inventors: Ryo HASEBE, Toshihiko Maekawa, Ayu Inoue
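The four calculation steps of the analysis reduce to simple mask arithmetic. A sketch assuming binary masks as input and, for the fourth step, a percentage as the evaluation parameter (that last choice is an assumption; the abstract does not define the parameter):

```python
import numpy as np

def evaluate_spheroid(spheroid_mask, localization_mask):
    """Run the four analysis steps from the abstract on boolean masks."""
    area_total = int(np.count_nonzero(spheroid_mask))        # step 1: whole spheroid area
    area_local = int(np.count_nonzero(localization_mask))    # step 2: localization region area
    ratio = area_local / area_total if area_total else 0.0   # step 3: ratio of the two areas
    score = 100.0 * ratio                                    # step 4: evaluation parameter (assumed form)
    return area_total, area_local, ratio, score
```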
-
Publication number: 20240303814
Abstract: Disclosed and described herein are systems and methods of performing computer-aided detection (CAD)/diagnosis (CADx) in medical images and comparing the results of the detection. Such detection can be used for treatment plans and verification of claims produced by healthcare providers, for the purpose of identifying discrepancies between the two. In particular, embodiments disclosed herein are applied to identifying dental caries (“caries”) in radiographs and comparing them against progress notes, treatment plans, and insurance claims.
Type: Application
Filed: April 22, 2024
Publication date: September 12, 2024
Inventors: Harris Bergman, Mark Blomquist, Michael Wimmer
-
Publication number: 20240303815
Abstract: The present invention relates to a method for predicting a state of an object on the basis of dynamic image data and a computing device performing same, the method enabling initial dynamic image data and delay image data to be predicted by performing learning on the basis of dynamic image data captured at a time point when both blood flow image information and disease-specific biological information are included, and furthermore, enabling blood flow image information and disease-specific biological information of the object to be provided.
Type: Application
Filed: May 17, 2024
Publication date: September 12, 2024
Applicants: THE ASAN FOUNDATION, UNIVERSITY OF ULSAN FOUNDATION FOR INDUSTRY COOPERATION
Inventors: Jungsu OH, Jae Seung KIM, Minyoung OH, Dong Yun LEE, Seung Jun OH, Sang Ju LEE
-
Publication number: 20240303816
Abstract: A method of segmenting a bowel includes receiving patient imaging comprising one or more voxels; determining a lumen indicator based on the patient imaging; representing the one or more voxels within a distance of the lumen indicator as one or more feature vectors; generating, based on the one or more feature vectors, a cluster comprising at least one of the one or more voxels; binarizing the cluster into one or more groups based on a threshold value; and generating a bowel segment model based at least on the cluster.
Type: Application
Filed: March 8, 2024
Publication date: September 12, 2024
Inventors: Andrew Bard, Benjamin Barrow, Alexander Menys
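The claimed pipeline (features near the lumen, clustering, threshold binarization) can be sketched in a toy form. Everything here is an assumption made for illustration: a two-feature representation (intensity, distance to lumen), a minimal 2-means clusterer, and a 0.5 membership threshold:

```python
import numpy as np

def segment_bowel(intensities, dist_to_lumen, max_dist=5.0, thresh=0.5, iters=10):
    """Toy version of the abstract's steps: keep voxels within `max_dist`
    of the lumen indicator, describe each by a 2-D feature vector
    (intensity, distance), cluster with 2-means, and binarize cluster
    membership against `thresh`."""
    near = dist_to_lumen <= max_dist
    feats = np.stack([intensities[near], dist_to_lumen[near]], axis=1)

    # Minimal 2-means on the feature vectors (first/last point init).
    centers = feats[[0, -1]].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(feats[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if (labels == k).any():
                centers[k] = feats[labels == k].mean(axis=0)

    # Binarize: soft membership to cluster 0, thresholded.
    d = np.linalg.norm(feats[:, None, :] - centers[None], axis=2)
    member0 = d[:, 1] / (d[:, 0] + d[:, 1] + 1e-12)
    segment = np.zeros(intensities.shape, dtype=bool)
    segment[near] = member0 >= thresh
    return segment
```

A real implementation would operate on 3D voxel grids and a learned feature representation; the structure of the steps is what this sketch mirrors.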
-
Publication number: 20240303817
Abstract: Techniques relate to analyzing image data to determine an annuloplasty ring to implant for an annuloplasty procedure. For example, image data can be received that depicts a heart valve. The image data can be analyzed to identify one or more image features that represent one or more anatomical features of the heart valve. Based on the one or more image features, heart data can be generated that indicates a measurement and/or another characteristic of the heart valve. An annuloplasty ring can be determined based on the heart valve data and/or annuloplasty ring data indicating characteristics of one or more annuloplasty rings. User interface data can then be generated that indicates the annuloplasty ring.
Type: Application
Filed: May 7, 2024
Publication date: September 12, 2024
Inventors: Stephen Epstein, Wesley Paul Nicholson
-
Publication number: 20240303818
Abstract: A voxel-based technique is provided for performing quantitative imaging and analysis of tissue image data. Serial image data is collected for tissue of interest at different states of the tissue. The collected image data may be deformably registered, after which the registered image data is analyzed on a voxel-by-voxel basis, thereby retaining spatial information for the analysis. Various thresholds are applied to the registered tissue data to identify a tissue condition or state, such as classifying chronic obstructive pulmonary disease by disease phenotype in lung tissue, for example.
Type: Application
Filed: May 9, 2024
Publication date: September 12, 2024
Inventors: Brian D. Ross, Craig Galban
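The voxel-wise thresholding step can be illustrated with the threshold values conventionally reported for parametric response mapping of lung CT (−950 HU inspiratory, −856 HU expiratory); treating those specific numbers as the patent's values is an assumption, since the abstract only says "various thresholds":

```python
import numpy as np

# Conventional lung-CT thresholds, stated here as an assumption.
INSP_EMPH = -950      # HU, inspiratory scan
EXP_AIR_TRAP = -856   # HU, deformably registered expiratory scan

def prm_classify(insp_hu, exp_hu):
    """Voxel-by-voxel phenotype map over registered serial scans:
    2 = emphysema, 1 = functional small-airways disease, 0 = normal."""
    out = np.zeros(insp_hu.shape, dtype=np.uint8)
    fsad = (insp_hu >= INSP_EMPH) & (exp_hu < EXP_AIR_TRAP)
    emph = (insp_hu < INSP_EMPH) & (exp_hu < EXP_AIR_TRAP)
    out[fsad] = 1
    out[emph] = 2
    return out
```

Because classification happens per voxel after registration, the output map retains the spatial information the abstract emphasizes.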
-
Publication number: 20240303819
Abstract: An image processing method includes: a step of creating, by a blood vessel mask creation unit, a blood vessel mask image by removing a punctate low signal intensity, removing a linear low signal intensity, extracting the linear low signal intensity, removing a punctate high signal intensity, removing a linear high signal intensity, and extracting the linear high signal intensity from an image obtained from a magnetic resonance signal intensity in which a region corresponding to a blood vessel is enhanced with respect to an MRI image; and a step of generating, by a mask processing unit, an image in which a blood vessel structure is removed using the blood vessel mask image from a phase difference enhanced image created from the MRI image.
Type: Application
Filed: February 24, 2022
Publication date: September 12, 2024
Inventors: Yasuko TATEWAKI, Tetsuya YONEDA, Akira ARAI
-
Publication number: 20240303820
Abstract: An image processing apparatus including: a first mask generation unit that generates a first mask based on a difference between a first frame image and a second frame image, or a difference between a first output feature map that is output from a first convolutional layer for processing the first frame image and a second output feature map that is output from a first convolutional layer of a second convolutional neural network for processing the second frame image; a second mask generation unit that generates a second mask for each of resolutions used in convolutional layers of the second convolutional neural network, based on the first mask and each of the resolutions; and a second mask distribution unit that distributes the second mask to the convolutional layers of the second convolutional neural network, based on the resolutions used in the convolutional layers of the second convolutional neural network.
Type: Application
Filed: February 26, 2024
Publication date: September 12, 2024
Applicant: NEC Corporation
Inventor: Youki SADA
-
Publication number: 20240303821
Abstract: A segmentation model learning method according to an embodiment includes learning that, based on a loss function value, includes performing supervised learning of the voxels in medical image data according to the region to which the voxels belong.
Type: Application
Filed: March 8, 2024
Publication date: September 12, 2024
Applicant: CANON MEDICAL SYSTEMS CORPORATION
Inventors: Xiao XUE, Gengwan LI, Bing HAN
-
Publication number: 20240303822
Abstract: In one embodiment, a method includes accessing a first scan image from a set of computed tomography (CT) scan images with each CT scan image being at a first resolution, generating a first downscaled image of the first scan image by resampling the first scan image to a second resolution that is lower than the first resolution, determining coarse segmentations corresponding to organs portrayed in the first scan image by first machine-learning models based on the first downscaled image, extracting segments of the first scan image based on the coarse segmentations with each extracted segment being at the first resolution, determining fine segmentations corresponding to the respective organs portrayed in the extracted segments by second machine-learning models based on the extracted segments, and generating a segmented image of the first scan image based on the fine segmentations, wherein the segmented image comprises confirmed segmentations corresponding to the organs.
Type: Application
Filed: May 20, 2024
Publication date: September 12, 2024
Inventors: Mohamed Skander JEMAA, Yury Anatolievich PETROV, Xiaoyong WANG, Nils Gustav Thomas BENGTSSON, Richard Alan Duray CARANO
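The coarse-to-fine flow (downscale, coarse segmentation, full-resolution crop, fine segmentation) can be sketched in 2D. The two "models" below are plain intensity thresholds standing in for the patent's machine-learning models, and integer-stride slicing stands in for resampling; both are simplifying assumptions:

```python
import numpy as np

def coarse_to_fine(scan, factor=2, coarse_fn=None, fine_fn=None):
    """Two-stage segmentation sketch: segment a downscaled copy, crop
    the full-resolution region it selects, then re-segment the crop."""
    coarse_fn = coarse_fn or (lambda im: im > 0.5)   # stand-in for first ML model
    fine_fn = fine_fn or (lambda im: im > 0.6)       # stand-in for second ML model

    small = scan[::factor, ::factor]                 # resample to lower resolution
    coarse = coarse_fn(small)                        # coarse segmentation
    ys, xs = np.nonzero(coarse)
    if ys.size == 0:
        return np.zeros_like(scan, dtype=bool)

    # Extract the corresponding full-resolution segment.
    y0, y1 = ys.min() * factor, (ys.max() + 1) * factor
    x0, x1 = xs.min() * factor, (xs.max() + 1) * factor
    fine = np.zeros_like(scan, dtype=bool)
    fine[y0:y1, x0:x1] = fine_fn(scan[y0:y1, x0:x1])  # fine segmentation on the crop
    return fine
```

The design point the abstract makes is that the expensive fine model only ever sees small full-resolution crops, not the whole scan.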
-
Publication number: 20240303823
Abstract: A subject extraction device (10) according to the present disclosure includes: a resolution reducing unit (111) that reduces resolution of an image to generate a low resolution image; a subject possibility extraction unit (112) that extracts possibilities of a subject portion from the low resolution image; a resolution increasing unit (113) that increases resolution of a boundary portion between the subject portion and a non-subject portion, judges whether the boundary portion is the non-subject portion pixel by pixel, and decides the subject portion and the non-subject portion on the basis of a judgement result; and a calculation resource assignment unit (114) that determines a value of the calculation resource to be assigned to the subject possibility extraction unit (112) as a first value and determines a value of the calculation resource to be assigned to the resolution increasing unit (113) as a second value according to the number of subject portions, and assigns the calculation resource to the subject
Type: Application
Filed: July 6, 2021
Publication date: September 12, 2024
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventor: Makoto MUTO
-
Publication number: 20240303824
Abstract: Provided is a contour probability prediction method of probabilistically predicting a contour, the contour probability prediction method including acquiring a plurality of contour images for an image of a wafer on which a process has been performed according to a design image, calculating a contour average and a contour standard deviation from the plurality of contour images, generating a probability distribution image calculated with a predetermined probability distribution, on the basis of the contour average and the contour standard deviation, and deep-learning-training a probability prediction model by inputting the design image and the probability distribution image into the probability prediction model.
Type: Application
Filed: March 4, 2024
Publication date: September 12, 2024
Applicant: Samsung Electronics Co., Ltd.
Inventors: Hyeok LEE, Jaewon YANG, Gun HUH
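The average/standard-deviation/probability-image steps can be sketched in 1D per image row. The representation (one edge x-position per row) and the choice of a Gaussian as the "predetermined probability distribution" are assumptions for illustration:

```python
import numpy as np

def contour_probability_image(edge_positions, width):
    """Given per-row contour x-positions measured from several wafer
    images (rows of `edge_positions`: one image each; columns: image
    rows), compute the contour average and standard deviation per row
    and render a Gaussian probability-distribution image (rows, width)."""
    mu = edge_positions.mean(axis=0)             # contour average
    sigma = edge_positions.std(axis=0) + 1e-6    # contour standard deviation
    x = np.arange(width)[None, :]
    pdf = np.exp(-0.5 * ((x - mu[:, None]) / sigma[:, None]) ** 2)
    pdf /= sigma[:, None] * np.sqrt(2 * np.pi)
    return mu, sigma, pdf
```

The `(design image, pdf image)` pair would then form one training sample for the probability prediction model.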
-
Publication number: 20240303825
Abstract: Systems and methods for three-dimensional object category modeling can utilize figure-ground neural radiance fields for unsupervised training and inference. For example, the systems and methods can include a foreground model and a background model that can generate an object output based at least in part on one or more learned embeddings. The foreground model and background model may process position data and view direction data in order to output color data and volume density data for a respective position and view direction. Moreover, the object category model may be trained to generate an object output, which may include an instance interpolation, a view synthesis, or a segmentation.
Type: Application
Filed: May 21, 2024
Publication date: September 12, 2024
Inventors: Matthew Alun Brown, Ricardo Martin-Brualla, Keunhong Park, Christopher Derming Xie
-
Publication number: 20240303826
Abstract: An image processing apparatus includes a hardware processor that acquires a plurality of frame images constituting a radiographed dynamic image of a subject, sets an analysis portion in a reference frame image among the plurality of frame images, selects one tracking algorithm from a plurality of tracking algorithms, tracks the analysis portion in a time direction on the basis of the one tracking algorithm, and outputs a tracking result of the tracking.
Type: Application
Filed: February 21, 2024
Publication date: September 12, 2024
Inventors: Hiromu OHARA, Takuya YAMAMURA
-
Publication number: 20240303827
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for tracking objects in an environment across time.
Type: Application
Filed: March 8, 2024
Publication date: September 12, 2024
Inventors: Longlong Jing, Ruichi Yu, Xu Chen, Zhengli Zhao, Shiwei Sheng
-
Publication number: 20240303828
Abstract: A method for selecting a crop score threshold for enhancing tracking of objects in a scene captured in a video sequence is disclosed. A respective track is obtained for two different objects, each track comprising crops of object instances of the objects in a video sequence, each crop having a crop score and a feature vector. Each track is split into two or more respective tracklets, thereby forming four or more tracklets in total. For each candidate crop score threshold, a respective difference between each tracklet and each other tracklet is determined based on differences between the feature vectors of those crops, in each tracklet and each other tracklet, having a crop score above the candidate crop score threshold. A crop score threshold is selected from the set of crop score thresholds that results in a maximum difference between the differences between tracklets of different tracks and the differences between tracklets of the same track.
Type: Application
Filed: February 12, 2024
Publication date: September 12, 2024
Applicant: Axis AB
Inventors: Niclas Danielsson, Markus Skans, Anton Öhrn
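The selection criterion can be sketched directly: for each candidate threshold, summarize each tracklet by the mean feature of its above-threshold crops, then pick the threshold that best separates inter-track from intra-track differences. Representing each tracklet by its mean feature and using a `min(inter) − max(intra)` margin are assumptions made for this sketch:

```python
import numpy as np

def select_crop_score_threshold(tracklets, track_ids, candidates):
    """Pick the crop-score threshold that best separates tracklets of
    different tracks from tracklets of the same track.

    tracklets:  list of (scores, feats) pairs; scores (n,), feats (n, d)
    track_ids:  which original track each tracklet came from
    candidates: iterable of candidate crop-score thresholds
    """
    best_t, best_margin = None, -np.inf
    for t in candidates:
        means = []
        for scores, feats in tracklets:
            keep = scores > t
            if not keep.any():          # threshold emptied a tracklet: skip t
                break
            means.append(feats[keep].mean(axis=0))
        else:
            intra, inter = [], []
            for i in range(len(means)):
                for j in range(i + 1, len(means)):
                    d = np.linalg.norm(means[i] - means[j])
                    (intra if track_ids[i] == track_ids[j] else inter).append(d)
            margin = min(inter) - max(intra)   # separation of the two kinds
            if margin > best_margin:
                best_t, best_margin = t, margin
    return best_t
```

Intuitively, a good threshold discards low-quality crops whose features blur the distinction between objects while keeping enough crops to represent each tracklet.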
-
Publication number: 20240303829
Abstract: An object motion measurement apparatus includes an imaging device which images an object; and at least one memory and at least one processor which function as: a body motion acquisition unit configured to acquire information related to a motion of the object from an image imaged by the imaging device; an index acquisition unit configured to acquire from the image imaged by the imaging device an index which reflects vibration of an imaging apparatus, which is different from the imaging device and images the object; and an output unit configured to output the index to the imaging apparatus.
Type: Application
Filed: February 16, 2024
Publication date: September 12, 2024
Inventors: Ryuichi NANAUMI, Kazuhiko FUKUTANI, Kazuya OKAMOTO
-
Publication number: 20240303830
Abstract: An apparatus for generating a speech synthesis image according to a disclosed embodiment is an apparatus for generating a speech synthesis image based on machine learning, the apparatus including a first global geometric transformation predictor configured to be trained to receive each of a source image and a target image including the same person, and predict a global geometric transformation for a global motion of the person between the source image and the target image based on the source image and the target image, a local feature tensor predictor configured to be trained to predict a feature tensor for a local motion of the person based on input target image-related information, and an image generator configured to be trained to reconstruct the target image based on the global geometric transformation, the source image, and the feature tensor for the local motion.
Type: Application
Filed: March 15, 2022
Publication date: September 12, 2024
Inventors: Gyeong Su CHAE, Guem Buel HWANG
-
Publication number: 20240303831
Abstract: In some examples, systems and methods for user-assisted object detection are provided. For example, a method includes: receiving a first image frame of a sequence of image frames, performing object detection using an object tracker to identify an object of interest in the first image frame, based upon one or more templates associated with the object of interest in a template repository, outputting a first indicator associated with a first image portion corresponding to the identified object of interest, and receiving a user input associated with the object of interest. In some examples, the user input indicates an identified image portion in the image frame. In some examples, the method further includes generating a retargeted template, based at least in part on the identified image portion, and determining a second image portion associated with the object of interest in a second image frame of the sequence of image frames using the object tracker, based at least in part on the retargeted template.
Type: Application
Filed: February 21, 2024
Publication date: September 12, 2024
Inventors: Aleksandr Patsekin, Ben Radford, Cameron Derwin, Daniel Marasco, Di Wang, Dimitrios Lymperopoulos, Elliot Kang, Keun Jae Kim, Matthew Betten, Matthew Fedderly, Michel Goraczko, Peng Lei, Prasanna Srikhanta, Rodney LaLonde, Steven Fackler, Tong Shen, Xin Li, Yue Wu
-
Publication number: 20240303832
Abstract: The motion estimation of an anatomical structure may be performed using a machine-learned (ML) model trained based on medical training images of the anatomical structure and corresponding segmentation masks for the anatomical structure. During the training of the ML model, the model may be used to predict a motion field that may indicate a change between a first training image and a second training image, and to transform the first training image and a corresponding first segmentation mask based on the motion field. The parameters of the ML model may then be adjusted to maintain a correspondence between the transformed first training image and the second training image, and between the transformed first segmentation mask and a second segmentation mask associated with the second training image. The correspondence may be assessed based on at least a boundary region shared by the anatomical structure and one or more other anatomical structures.
Type: Application
Filed: March 9, 2023
Publication date: September 12, 2024
Applicant: Shanghai United Imaging Intelligence Co., Ltd.
Inventors: Xiao Chen, Kun Han, Zhang Chen, Yikang Liu, Shanhui Sun, Terrence Chen
-
Publication number: 20240303833
Abstract: Techniques are described for automated analysis and use of data acquired about a moving object of interest, such as from one or more physically mounted cameras that have at least partial coverage of the object exterior, to automatically generate a computer model of the object from visual data in images and to use the computer model to automatically estimate values for one or more object attributes. For example, the described techniques may include, for a pile of material being moved past one or more stationary cameras by one or more transporting vehicles, using acquired images that provide visual coverage of only a subset of the pile's exterior to measure the volume of the pile. The images from such devices may be acquired at various times (e.g., when triggered by object movement), and may be used to monitor movement trajectories and other attributes of one or more such objects.
Type: Application
Filed: May 11, 2024
Publication date: September 12, 2024
Inventors: David B. Boardman, Jared S. Heinly, Srinivas Kapaganty, Brooke N. Steele
-
Publication number: 20240303834
Abstract: A method, performed by a health management server, for evaluating a health condition by using a skeleton model. In detail, the method comprises the steps of: providing, to a user apparatus, a standard motion image of a trainer performing a standard motion; receiving, from the user apparatus, a user motion image of a user motion performed by a user following the standard motion; calculating a similarity between the standard motion and the user motion by comparing the standard motion image with the user motion image; and evaluating a health condition of the user on the basis of the calculated similarity.
Type: Application
Filed: May 15, 2024
Publication date: September 12, 2024
Inventors: Seung Hyun HAN, Young Uk PARK, Mun Cheong CHOI, So Young MOON, Seong Hye CHOI, Hong Sun SONG
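The similarity step, once both motions are reduced to skeleton keypoints, can be sketched as a cosine similarity between centered poses. Both the centering and the cosine measure are assumptions; the abstract does not specify how similarity is computed:

```python
import numpy as np

def pose_similarity(trainer_joints, user_joints):
    """Cosine similarity between flattened skeleton keypoints (J, 2),
    after centering each pose on its own centroid so that global
    position does not dominate the comparison."""
    a = trainer_joints - trainer_joints.mean(axis=0)
    b = user_joints - user_joints.mean(axis=0)
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

A per-frame similarity like this, averaged over the motion, would give a single score to map onto a health-condition evaluation.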
-
Publication number: 20240303835
Abstract: A system comprises a primary camera and secondary cameras configured to capture images of a scene having a target, and a control unit connected to and receiving parameters from the cameras and configured to automatically select and activate the cameras and to allocate the target to the cameras. When the primary camera is allocated the target, the primary camera locks on and tracks the target. When the target is in a field of view of a first secondary camera, the control unit activates and controls the first secondary camera to track the target. When the target moves out of the field of view of the first secondary camera and into a field of view of a second secondary camera, the control unit deactivates the first secondary camera and activates and controls the second secondary camera to track the target. The cameras are configured to capture different views of the target.
Type: Application
Filed: March 10, 2023
Publication date: September 12, 2024
Inventors: Cevat Yerli, Jesús Manzanera Lidón
-
Publication number: 20240303836
Abstract: In various examples, image areas may be extracted from a batch of one or more images and may be scaled, in batch, to one or more template sizes. Where the image areas include search regions used for localization of objects, the scaled search regions may be loaded into Graphics Processing Unit (GPU) memory and processed in parallel for localization. Similarly, where image areas are used for filter updates, the scaled image areas may be loaded into GPU memory and processed in parallel for filter updates. The image areas may be batched from any number of images and/or from any number of single- and/or multi-object trackers. Further aspects of the disclosure provide approaches for associating locations using correlation response values, for learning correlation filters in object tracking based at least on focused windowing, and for learning correlation filters in object tracking based at least on occlusion maps.
Type: Application
Filed: May 21, 2024
Publication date: September 12, 2024
Inventors: Joonhwa Shin, Zheng Liu, Kaustubh Purandare
-
Publication number: 20240303837
Abstract: A neural network apparatus includes one or more processors comprising: a controller configured to determine a shared operand to be shared in parallelized operations as being either one of a pixel value among pixel values of an input feature map and a weight value among weight values of a kernel, based on either one or both of a feature of the input feature map and a feature of the kernel; and one or more processing units configured to perform the parallelized operations based on the determined shared operand.
Type: Application
Filed: May 14, 2024
Publication date: September 12, 2024
Applicant: Samsung Electronics Co., Ltd.
Inventor: Sehwan LEE
-
Publication number: 20240303838
Abstract: The present disclosure provides architectures and techniques for absolute depth estimation from a single (e.g., monocular) image, using online depth scale transfer. For instance, estimation of up-to-scale depth maps from monocular images may be decoupled from estimation of the depth scale (e.g., such that additional online measurements, additional calibrations, etc. are not required). One or more aspects of the present disclosure include fine-tuning or training from scratch an absolute depth estimator using collected monocular images, as well as existing images and absolute depth measurements (e.g., from additional setups, such as LiDAR/stereo sensors). Collected monocular images may be used to create up-to-scale depth maps, and existing images and absolute depth measurements may be used to estimate the scale of a scene from the up-to-scale depth map. Scale transfer may thus be achieved between source images with known ground truth depth information and a new target domain of collected monocular images.
Type: Application
Filed: March 8, 2023
Publication date: September 12, 2024
Inventors: Alexandra Dana, Amit Shomer, Nadav Carmel, Tomer Peleg, Assaf Tzabari
-
Publication number: 20240303839
Abstract: A method for evaluating provided data accuracy for an agricultural machine. The method includes obtaining image data from a camera on the agricultural machine, processing the image data for a time interval and identifying inadequate images, determining the number of inadequate images compared to the total number of images for the time interval, generating a camera confidence based on the ratio of inadequate images to total images for the time interval, and providing a feedback indicating the camera confidence.
Type: Application
Filed: January 18, 2024
Publication date: September 12, 2024
Inventors: Carolyn R. Herman, Daniel B. Quinn, Franklin Lucas Sturgeon, Colin D. Engel, Matthew Orth, Hanna J. Wickman, Tucker Creger, Nicholas E. Vickers
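The confidence computation itself is a simple ratio. A minimal sketch; the 0.8 warning cutoff and the feedback strings are assumptions, since the abstract only says feedback indicates the confidence:

```python
def camera_feedback(inadequate, total, warn=0.8):
    """Camera confidence for a time interval = fraction of adequate
    images, plus a coarse feedback status derived from it."""
    conf = 0.0 if total == 0 else 1.0 - inadequate / total
    status = "ok" if conf >= warn else "check camera"
    return conf, status
```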
-
Publication number: 20240303840
Abstract: The disclosed method for generating a first depth map for a first frame of a video includes performing one or more operations to generate a first intermediate depth map based on the first frame and a second frame preceding the first frame within the video, performing one or more operations to generate a second intermediate depth map based on the first frame, and performing one or more operations to combine the first intermediate depth map and the second intermediate depth map to generate the first depth map.
Type: Application
Filed: November 13, 2023
Publication date: September 12, 2024
Inventors: Chao LIU, Benjamin ECKART, Jan KAUTZ
-
Publication number: 20240303841
Abstract: Disclosed are systems and techniques for capturing images (e.g., using a monocular image sensor) and detecting depth information. According to some aspects, a computing system or device can generate a feature representation of a current image and update accumulated feature information for storage in a memory based on a feature representation of a previous image and optical flow information of the previous image. The accumulated feature information can include accumulated image feature information associated with a plurality of previous images and accumulated optical flow information associated with the plurality of previous images. The computing system or device can obtain information associated with relative motion of the current image based on the accumulated feature information and the feature representation of the current image.
Type: Application
Filed: December 13, 2023
Publication date: September 12, 2024
Inventors: Rajeev YASARLA, Hong CAI, Jisoo JEONG, Risheek GARREPALLI, Yunxiao SHI, Fatih Murat PORIKLI
-
Publication number: 20240303842
Abstract: Provided is a technique advantageous for acquiring information based on a depth position of an observation target site in a biological tissue. A medical image processing apparatus includes: an image acquisition unit that acquires a fluorescence image obtained by imaging a biological tissue including a phosphor while irradiating the biological tissue with excitation light; and a depth position information acquisition unit that acquires depth position information related to a depth position of the phosphor on the basis of the fluorescence image. The depth position information acquisition unit acquires spread information indicating an image intensity distribution of the phosphor in the fluorescence image by analyzing the fluorescence image, and acquires the depth position information by collating the spread information with a spread function representing an image intensity distribution in the biological tissue.
Type: Application
Filed: February 2, 2022
Publication date: September 12, 2024
Applicant: Sony Group Corporation
Inventors: Minori TAKAHASHI, Kentaro FUKAZAWA, Daisuke KIKUCHI
-
Publication number: 20240303843
Abstract: A system for generating extended reality effects using image data of hands and a depth estimation model. The depth estimation model is trained using pairings of synthetic 2D image data with sets of depths and segmentation masks. An extended reality system captures image data of hands in a real-world scene and uses the image data and the depth estimation model to generate the extended reality effects. The extended reality effects are provided to a user during an extended reality experience.
Type: Application
Filed: March 7, 2023
Publication date: September 12, 2024
Inventors: Riza Alp Guler, Dominik Kulon, Himmy Tam, Haoyang Wang
-
Publication number: 20240303844
Abstract: A depth completion method for a sparse depth map includes: acquiring a grayscale image and a sparse depth map corresponding to the grayscale image; obtaining a nearest neighbor interpolation (NNI) image and a Euclidean distance transform (EDT) image based on the sparse depth map; inputting the grayscale image, the NNI image, and the EDT image into a neural network model, thereby outputting a predicted residual map; and generating a predicted dense depth map according to the predicted residual map and the NNI image.
Type: Application
Filed: March 7, 2023
Publication date: September 12, 2024
Inventors: Hong-Yu CHIU, Yi-Nung LIU
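The NNI and EDT inputs the abstract names can both be derived from the sparse depth map alone. A minimal NumPy sketch (brute-force nearest neighbor for clarity; production code would use a fast distance transform, and all names here are ours, not the patent's):

```python
import numpy as np

def nni_and_edt(sparse_depth):
    """From a sparse depth map (0 = no measurement), compute the
    nearest-neighbor interpolation (NNI) image, where each pixel copies
    the depth of the closest valid sample, and the Euclidean distance
    transform (EDT) image, the distance to that sample."""
    h, w = sparse_depth.shape
    ys, xs = np.nonzero(sparse_depth)                  # valid depth samples
    pts = np.stack([ys, xs], axis=1).astype(float)
    gy, gx = np.mgrid[0:h, 0:w]
    grid = np.stack([gy.ravel(), gx.ravel()], axis=1).astype(float)
    d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=2)
    nearest = np.argmin(d, axis=1)                     # index of closest sample
    nni = sparse_depth[ys[nearest], xs[nearest]].reshape(h, w)
    edt = np.min(d, axis=1).reshape(h, w)
    return nni, edt

sparse = np.zeros((4, 4))
sparse[0, 0] = 2.0
sparse[3, 3] = 6.0
nni, edt = nni_and_edt(sparse)
# pixels nearer (0,0) inherit depth 2.0; pixels nearer (3,3) get 6.0
print(nni[0, 1], nni[3, 2], edt[0, 0])
```

The dense prediction would then be `nni + residual`, with the residual map produced by the network from the grayscale, NNI, and EDT inputs.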
-
Publication number: 20240303845
Abstract: A depth acquisition device includes a memory and a processor. The processor performs: acquiring timing information indicating a timing at which a light source irradiates a subject with infrared light; acquiring, from the memory, an infrared light image generated by imaging a scene including the subject with the infrared light according to the timing indicated by the timing information; acquiring, from the memory, a visible light image generated by imaging a substantially same scene as the scene of the infrared light image, with visible light from a substantially same viewpoint as a viewpoint of imaging the infrared light image at a substantially same time as a time of imaging the infrared light image; detecting a flare region from the infrared light image; and estimating a depth of the flare region based on the infrared light image, the visible light image, and the flare region.
Type: Application
Filed: May 20, 2024
Publication date: September 12, 2024
Inventors: Satoshi SATO, Takeo AZUMA, Nobuhiko WAKAI, Kohsuke YOSHIOKA, Noritaka SHIMIZU, Yoshinao KAWAI, Takaaki AMADA, Yoko KAWAI, Takeshi MURAI, Hiroki TAKEUCHI
-
Publication number: 20240303846
Abstract: A processor-implemented method includes obtaining a visual association feature indicating an association between a first image frame and a second image frame and a visual appearance feature indicating the same object appearance in the first image frame and the second image frame, constructing a visual reprojection constraint based on the visual association feature, constructing a visual feature metric constraint based on the visual appearance feature, and performing localization and mapping based on the visual reprojection constraint and the visual feature metric constraint.
Type: Application
Filed: March 6, 2024
Publication date: September 12, 2024
Applicant: Samsung Electronics Co., Ltd.
Inventors: Xiongfeng PENG, Zhihua LIU, Kyungboo JUNG, Myungjae JEON, Qiang WANG, Young Hun SUNG
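A visual reprojection constraint in its usual SLAM form penalizes the pixel distance between a matched observation and the projection of the corresponding 3D point into the second frame. A generic sketch consistent with the abstract (not the patent's exact formulation; intrinsics and pose values are illustrative):

```python
import numpy as np

def reprojection_error(point_3d, observed_px, K, R, t):
    """Project a 3D point into a frame with intrinsics K and pose (R, t),
    then return the pixel distance to the matched 2D observation. This
    residual is what the reprojection constraint minimizes."""
    p_cam = R @ point_3d + t          # world -> camera coordinates
    p_img = K @ p_cam                 # camera -> homogeneous image coordinates
    proj = p_img[:2] / p_img[2]       # perspective divide
    return np.linalg.norm(proj - observed_px)

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pt = np.array([0.0, 0.0, 5.0])        # point 5 m straight ahead
print(reprojection_error(pt, np.array([320.0, 240.0]), K, R, t))  # → 0.0
```

The feature metric constraint the abstract pairs with this one compares learned appearance features at the projected locations instead of raw pixel coordinates.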
-
Publication number: 20240303847
Abstract: An example method includes: obtaining, by a depth sensor, depth data representing a distance to a target object and an environment of the target object; selecting a subset of the depth data representing a reference surface in the environment; comparing the subset of the depth data to a flatness condition; when the subset of the depth data does not meet the flatness condition: determining a faulty condition of the depth sensor, and outputting an indicator of the faulty condition of the depth sensor.
Type: Application
Filed: March 8, 2023
Publication date: September 12, 2024
Inventors: Sumudu B. Abeysekara, Michael Wijayantha Medagama
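One common way to realize a "flatness condition" is to fit a plane to the reference-surface points and threshold the residual. A sketch under that assumption (the tolerance and function names are ours):

```python
import numpy as np

def flatness_fault(points, tol=0.01):
    """Fit a plane z = a*x + b*y + c to points sampled from a reference
    surface; if the RMS residual exceeds `tol`, flag the depth sensor as
    faulty. Returns (is_faulty, rms_residual)."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    rms = np.sqrt(np.mean((A @ coef - points[:, 2]) ** 2))
    return rms > tol, rms

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1, size=(200, 2))
# a genuinely flat (tilted) surface with millimeter-scale sensor noise
flat = np.column_stack([xy, 0.5 * xy[:, 0] + 2.0 + 0.001 * rng.standard_normal(200)])
# a sensor reporting a warped surface where a flat one is expected
warped = np.column_stack([xy, 0.2 * np.sin(6 * xy[:, 0])])
print(flatness_fault(flat)[0], flatness_fault(warped)[0])
```

The tilted plane passes because plane fitting absorbs tilt; only non-planar distortion (a typical symptom of a miscalibrated depth sensor) trips the fault.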
-
Publication number: 20240303848
Abstract: Electronic device for estimating a height of a human, the electronic device comprising: a processor configured to: obtain an image including at least a part of a representation of the human and reference information; input the image to a first neural network and obtain as output from the first neural network first information, the first information related to a plurality of keypoints in the body of the human; input the image to a second neural network and obtain as output from the second neural network second information, the second information related to the reference information; and estimate the height of the human based on the first information and the second information; and an output unit configured to output the estimated height.
Type: Application
Filed: March 18, 2024
Publication date: September 12, 2024
Applicant: N.V. Nutricia
Inventors: Agathe Camille Foussat, Reyhan Suryaditama, Meenakshisundaram Palaniappan, Ruben Zado Hekster, Laurens Alexander Drapers
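The final estimation step can be pictured as converting a keypoint span from pixels to real units using a reference object of known size. A toy sketch of that arithmetic (keypoint names, the reference-card idea, and the API are our assumptions, not the patent's):

```python
def estimate_height(keypoints_px, ref_px_length, ref_real_length_cm):
    """Scale the head-to-heel pixel span by the cm-per-pixel ratio derived
    from a reference object of known physical size in the same image.
    `keypoints_px` maps landmark names to (x, y) pixel coordinates."""
    cm_per_px = ref_real_length_cm / ref_px_length
    span_px = abs(keypoints_px["heel"][1] - keypoints_px["head_top"][1])
    return span_px * cm_per_px

kp = {"head_top": (100, 40), "heel": (102, 640)}
# hypothetical reference: a 30 cm card spanning 100 px in the same image
print(estimate_height(kp, ref_px_length=100, ref_real_length_cm=30))  # → 180.0
```

In the patent, the two neural networks supply the keypoints and the reference information respectively; this sketch only shows how the two are combined.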
-
Publication number: 20240303849
Abstract: A method and system for creating a high-fidelity depiction of a scene including an object within the scene are provided. The method includes uniformly illuminating a target surface of the object with light to obtain reflected, backscattered illumination. The method also includes sensing, via a volumetric sensor, the brightness of the surface due to a diffuse component of the backscattered illumination to obtain brightness information. Backscattered illumination from the target surface is inspected to obtain geometric measurements which include sensor noise. Rotationally and positionally invariant measured surface albedo including albedo noise of the object is computed based on the brightness and the geometric measurements. A machine-learning model such as a diffusion sensor model is applied to the geometric measurements and the measured surface albedo to remove the sensor noise and the albedo noise, respectively, to obtain a prediction of actual geometry and actual albedo, respectively, of the object.
Type: Application
Filed: April 18, 2024
Publication date: September 12, 2024
Applicant: Liberty Robotics Inc.
Inventors: G. Neil HAVEN, Fansheng MENG
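A rotation- and position-invariant albedo is typically obtained by dividing out viewing geometry from the measured brightness. A standard Lambertian sketch of that computation (the inverse-square and cosine model is textbook shading, not necessarily the patent's exact formula; `k` and all names are ours):

```python
import math

def measured_albedo(brightness, distance_m, incidence_deg, k=1.0):
    """Undo the inverse-square falloff and the Lambertian cosine of the
    incidence angle so the same surface patch yields the same albedo
    regardless of pose. `k` lumps source power and sensor gain."""
    cos_i = max(math.cos(math.radians(incidence_deg)), 1e-6)  # avoid div by 0
    return k * brightness * distance_m ** 2 / cos_i

# the same patch observed twice: farther away and tilted, same albedo
a1 = measured_albedo(brightness=0.8, distance_m=1.0, incidence_deg=0.0)
a2 = measured_albedo(brightness=0.1, distance_m=2.0, incidence_deg=60.0)
print(round(a1, 3), round(a2, 3))
```

Invariance is exactly what lets the patent's learning model treat albedo noise independently of where the scanner happened to be.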
-
Publication number: 20240303850
Abstract: The invention relates to a method for determining a height (H) of a trailer (12) of a motor vehicle (10), in which at least one image of a ground area (22) of the motor vehicle (10) is recorded by means of at least one first optical recording device (14a, 14b) and in which the image is evaluated by means of an electronic computing device (20) of the motor vehicle (10), wherein a ground area (22a, 22b) to the side of the motor vehicle (10) is recorded, and a shadow boundary (G) of a shadow (24) of the trailer (12) on the ground area (22a, 22b) to the side is determined in the recording, and the height (H) is determined by means of the electronic computing device (20) as a function of a shadow length (L) of the shadow (24) relative to the motor vehicle (10). The invention also relates to an electronic computing device (20).
Type: Application
Filed: February 13, 2024
Publication date: September 12, 2024
Inventors: Tobias Aurand, Markus Zimmer
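The underlying geometry is simple shadow trigonometry: with the sun's elevation known (e.g. from GNSS position and time of day), the trailer height follows from the shadow length on flat ground. A sketch of that relation only; the patent's contribution is extracting the shadow boundary and length from camera images:

```python
import math

def trailer_height(shadow_length_m, sun_elevation_deg):
    """Height of an object from the length of its shadow on level ground:
    H = L * tan(sun elevation). Assumes a point-like shadow boundary and
    flat terrain; both are idealizations for illustration."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# a 3 m shadow under a 45-degree sun implies a trailer about 3 m tall
print(trailer_height(3.0, 45.0))
```

At low sun elevations the shadow is long and the estimate is more sensitive to errors in the measured length, which is why the shadow boundary must be located precisely.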
-
Publication number: 20240303851
Abstract: A method includes: receiving, by a processor set, context data from one or more Internet of Things (IoT) sensors; identifying, by the processor set, one or more objects in a frame of a video stream, thereby determining identified objects; classifying, by the processor set, the identified objects, thereby determining classified objects; prioritizing, by the processor set, the classified objects using the context data, thereby determining prioritized objects; selecting, by the processor set, an object from the prioritized objects; enhancing, by the processor set, the frame of the video stream based on the selected object; and rendering, by the processor set, the enhanced frame on a display of a visual enhancement device.
Type: Application
Filed: March 6, 2023
Publication date: September 12, 2024
Inventors: Peng Hui JIANG, Yang LIANG, Terry James HOFFMAN, Su LIU
-
Publication number: 20240303852
Abstract: A method of detecting vessels is described. A vessel is detected if a match exists between an object detection from each of two or more detectors 100, based on data obtained by the two or more detectors 100, which are selected from: an RF antenna 102; a light sensor 104; a radar detector 106; and an Automatic Identification System receiver 108. A method for detecting the presence of vessels at night is also described. The method comprises detecting a plurality of blobs within image data, applying an image mask to the image data to remove regions of the image data known to contain static objects and to retain the remaining portions of the image data, identifying one or more clusters of the detected plurality of blobs within the retained image data, and determining the presence of one or more vessels based on the one or more identified clusters.
Type: Application
Filed: June 29, 2022
Publication date: September 12, 2024
Applicant: SIRIUS CONSTELLATION LTD
Inventors: Malcolm GLAISTER, Michael DOHLER
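The night-detection pipeline (threshold blobs, mask out static regions, cluster what remains) can be sketched in a few lines. A toy version with illustrative thresholds and a deliberately naive greedy clustering; a real system would use connected components and a proper clustering algorithm:

```python
import numpy as np

def detect_vessels(image, static_mask, bright_thresh=200, cluster_dist=3.0):
    """Threshold bright pixels into blob points, drop those under the
    static-object mask, then greedily cluster the survivors by proximity;
    each resulting cluster is one vessel candidate."""
    ys, xs = np.nonzero((image >= bright_thresh) & ~static_mask)
    clusters = []
    for p in zip(ys.tolist(), xs.tolist()):
        for c in clusters:  # join the first cluster within cluster_dist
            if any((p[0]-q[0])**2 + (p[1]-q[1])**2 <= cluster_dist**2 for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

img = np.zeros((10, 10))
img[1, 1] = img[1, 2] = 255        # two lights of one vessel
img[8, 8] = 255                    # a known shore light
mask = np.zeros((10, 10), dtype=bool)
mask[7:, 7:] = True                # region known to contain static objects
print(len(detect_vessels(img, mask)))  # → 1
```

The mask is what makes the method usable at night: fixed lights (buoys, shore infrastructure) would otherwise dominate the blob detections.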
-
Publication number: 20240303853
Abstract: The application provides a method, apparatus, device, storage medium and program product for target positioning. The method applied to a smart terminal includes: obtaining image data of a surrounding environment; generating a spatial image of the surrounding environment based on the image data; determining and obtaining the spatial image corresponding to a positioning target; generating anchor point information corresponding to the positioning target based on the corresponding spatial image; and determining a specific location of the positioning target using the spatial image and the anchor point information.
Type: Application
Filed: March 8, 2024
Publication date: September 12, 2024
Inventors: Jia Guo, Chi Fang
-
Publication number: 20240303854
Abstract: A target monitoring system includes: a camera, mounted in a ship; a detecting apparatus, mounted in the ship and detecting an actual position of a target present around the ship; an image recognizing unit, detecting an in-image position of the target included in an image imaged by the camera; a distance estimating unit, estimating a range of a distance from the ship to the target based on the in-image position of the target; and a target identifying unit, identifying the target detected from the image and the target detected by the detecting apparatus based on the range of the distance that is estimated and the actual position that is detected.Type: Application
Filed: May 16, 2024
Publication date: September 12, 2024
Applicant: FURUNO ELECTRIC CO., LTD.
Inventor: Yuta TAKAHASHI
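One plausible way to estimate distance from an in-image position at sea is the flat-water pinhole model: a target whose waterline appears below the horizon line lies at a range inversely proportional to that pixel offset. A sketch under that assumption (the patent does not specify this exact model, and all parameter values are illustrative):

```python
def distance_from_image_row(y_px, horizon_y_px, cam_height_m, focal_px):
    """Flat-sea pinhole model: range = f * h / (y - y_horizon), where y is
    the image row of the target's waterline, y_horizon the horizon row,
    h the camera height above water, and f the focal length in pixels."""
    dy = y_px - horizon_y_px
    if dy <= 0:
        raise ValueError("target must appear below the horizon")
    return focal_px * cam_height_m / dy

# camera 10 m above water, 1000 px focal length, waterline 20 px below horizon
print(distance_from_image_row(520, 500, 10.0, 1000.0))  # → 500.0 m
```

Because a one-pixel error at small offsets shifts the range substantially, the system sensibly estimates a distance *range* rather than a point value, then matches it against the radar/AIS position.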
-
Publication number: 20240303855
Abstract: The posture estimation apparatus includes: a joint point detection unit that detects joint points of a person in an image; a reference point specifying unit that specifies a preset reference point for each person; an attribution determination unit that uses a learning model, trained on the relationship between pixel data and the unit vector pointing from a pixel to the reference point, to relate each detected joint point to the reference point of each person in the image, to calculate a score indicating the likelihood that the joint point belongs to the person, and to determine, using the score, the person in the image to whom the joint point belongs; and a posture estimation unit that estimates the posture of the person based on the determination result of the attribution determination unit.
Type: Application
Filed: January 15, 2021
Publication date: September 12, 2024
Applicant: NEC CORPORATION
Inventor: Yadong PAN
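The attribution score can be understood as a direction agreement: how well the unit vector predicted at a joint pixel points toward a candidate person's reference point. A minimal sketch of one such score, the cosine between prediction and true direction (our realization, not necessarily the patent's exact formula):

```python
import math

def attribution_score(joint_xy, ref_xy, predicted_unit_vec):
    """Cosine between the predicted unit vector at a joint pixel and the
    actual direction from that joint to a candidate reference point;
    1.0 means the model points straight at this person's reference."""
    dx, dy = ref_xy[0] - joint_xy[0], ref_xy[1] - joint_xy[1]
    n = math.hypot(dx, dy)
    return (dx * predicted_unit_vec[0] + dy * predicted_unit_vec[1]) / n

joint = (10, 10)
person_a_ref, person_b_ref = (20, 10), (10, 0)
vec = (1.0, 0.0)   # hypothetical network output: "reference is to the right"
print(attribution_score(joint, person_a_ref, vec),
      attribution_score(joint, person_b_ref, vec))
```

Here the joint scores 1.0 against person A and 0.0 against person B, so it is attributed to A; running this per joint resolves multi-person scenes before pose estimation.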
-
Publication number: 20240303856
Abstract: A method for determining an ego pose of a mobile system and creating a surfel map of a surrounding area of the mobile system via an optimization problem represented by a factor graph includes the steps of: receiving environment sensor data generated by an environment sensor attached to the mobile system, wherein the environment sensor surveys the surrounding area of the mobile system, and wherein the environment sensor data represent the surrounding area of the mobile system as a point cloud; generating surfels by converting the point cloud of the received environment sensor data into surfel data; identifying new surfels and known surfels in the generated surfels by comparing the surfel data with the surfel map; and adding a surfel factor for the known surfels to the factor graph and/or adding a surfel node and a surfel factor for the new surfels to the factor graph.
Type: Application
Filed: March 5, 2024
Publication date: September 12, 2024
Inventors: Tobias BIESTER, Boris NEUBERT, Veith ROETHLINGSHOEFER
-
Publication number: 20240303857
Abstract: A method is provided for advising placement for a speaker set in a room. A computerized device stores rules of speaker arrangement, acquires interior information of the room based on images of an interior of the room, and determines a seating location and a reference location in the room based on the interior information, so as to generate a speaker placement recommendation with respect to the placement of the speaker set in the room. A wearable display obtains the speaker placement recommendation from the computerized device and displays the speaker placement recommendation in augmented reality or virtual reality.
Type: Application
Filed: March 20, 2023
Publication date: September 12, 2024
Inventor: Sunil M