Patent Applications Published on February 1, 2024
-
Publication number: 20240037748
Abstract: Disclosed are systems and methods for classifying brain lesions based on single point in time imaging, methods for training a machine learning model for classifying brain lesions, and a method of predicting formation of brain lesions based on single point in time imaging. A method of classifying brain lesions based on single point in time imaging can include: accessing patient image data from a single point in time; providing the patient image data as an input to a brain lesion classification model; generating a classification for each of one or more lesions identified in the patient image data; and providing the classification for each of the one or more lesions for display on one or more display devices; wherein the brain lesion classification model is trained using subject image data for a plurality of subjects, the subject image data being captured at two or more points in time.
Type: Application
Filed: October 10, 2023
Publication date: February 1, 2024
Applicant: BIOGEN MA INC.
Inventors: Bastien CABA, Dawei LIU, Aurélien LOMBARD, Alexandre CAFARO, Elizabeth FISHER, Arie Rudolf GAFSON, Nikos PARAGIOS, Shibeshih Mitiku BELACHEW, Xiaotang Phoebe JIANG
-
Publication number: 20240037749
Abstract: Systems and methods for detecting plants in a sequence of images are provided. A plant is predicted to be in a detection region in an image and the plant is tracked across multiple images. A tracker retains a memory of the plant's past position and updates a tracking region for each subsequent image based on the memory and the new images, thus using temporal information to augment detection performance. The plant can be substantially stationary and exhibit growth between images. Tracking substantially stationary plants can improve detection of the plant between images relative to detection alone. The tracking region can be updated based on the substantially stationary position of the plant, for instance by combining the tracking region with further predictions of plant position in subsequent images. Combining can involve determining a union.
Type: Application
Filed: December 7, 2021
Publication date: February 1, 2024
Inventors: Maryamsadat RASOULIDANESH, Hoda AGHAEI, Travis Godwin GOOD
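As an illustration of the region-combining step, here is a minimal Python sketch (not the applicants' implementation) in which the remembered tracking region is combined with a new predicted detection region by taking their union; the box convention and helper names are assumptions for illustration.

```python
# Boxes are (x_min, y_min, x_max, y_max); purely illustrative.

def box_union(a, b):
    """Smallest axis-aligned box containing both input boxes."""
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))

def update_tracking_region(tracked_region, detected_region):
    """Combine the remembered tracking region with a new prediction; for a
    substantially stationary, growing plant the region can only expand."""
    if detected_region is None:
        return tracked_region  # no detection this frame: keep the memory
    return box_union(tracked_region, detected_region)
```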
-
Publication number: 20240037750
Abstract: This disclosure describes one or more implementations of a panoptic segmentation system that generates panoptic segmented digital images that classify both known and unknown instances of digital images. For example, the panoptic segmentation system builds and utilizes a panoptic segmentation neural network to discover, cluster, and segment new unknown object subclasses for previously unknown object instances. In addition, the panoptic segmentation system can determine additional unknown object instances from additional digital images. Moreover, in some implementations, the panoptic segmentation system utilizes the newly generated unknown object subclasses to refine and tune the panoptic segmentation neural network to improve the detection of unknown object instances in input digital images.
Type: Application
Filed: October 16, 2023
Publication date: February 1, 2024
Inventors: Jaedong Hwang, Seoung Wug Oh, Joon-Young Lee
-
Publication number: 20240037751
Abstract: Three-dimensional (3D) depth imaging systems and methods are disclosed for dynamic container auto-configuration. A 3D-depth camera captures 3D image data of a shipping container located in a predefined search space during a shipping container loading session. An auto-configuration application determines a representative container point cloud and (a) loads an initial pre-configuration file that defines a digital bounding box having dimensions representative of the predefined search space and an initial front board area; (b) applies the digital bounding box to the container point cloud to remove front board interference data from the container point cloud based on the initial front board area; (c) generates a refined front board area based on the shipping container type; (d) generates an adjusted digital bounding box based on the refined front board area; and (e) generates an auto-configuration result comprising the adjusted digital bounding box containing at least a portion of the container point cloud.
Type: Application
Filed: July 10, 2020
Publication date: February 1, 2024
Applicant: Zebra Technologies Corporation
Inventors: Lichen Wang, Yan Zhang, Kevin J. O'Connell
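Applying a digital bounding box to a point cloud amounts to cropping the points to an axis-aligned box. A minimal NumPy sketch with illustrative names (the patent's refinement of the front board area is not modeled here):

```python
import numpy as np

def apply_digital_bounding_box(points, box_min, box_max):
    """Remove points outside an axis-aligned digital bounding box.
    points: (N, 3) float array; box_min, box_max: length-3 arrays."""
    box_min = np.asarray(box_min)
    box_max = np.asarray(box_max)
    mask = np.all((points >= box_min) & (points <= box_max), axis=1)
    return points[mask]
```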
-
Publication number: 20240037752
Abstract: Object driven event detection is disclosed for nodes in an environment. Video frames of interest are identified from the video streams of cameras in the environment. The video frames of interest are input, along with node positions for nodes in the area of coverage of the cameras, into a detection model. The output of the detection model, combined with the output of an event model, is used by a decision pipeline to make decisions and perform actions in the environment.
Type: Application
Filed: July 27, 2022
Publication date: February 1, 2024
Inventors: Vinicius Michel Gottin, Pablo Nascimento da Silva, Paulo Abelha Ferreira
-
Publication number: 20240037753
Abstract: A method for enabling a user to paint an image uploaded to a device is provided. The method includes uploading an image to the device, performing a depth estimation on the image to generate a depth map, and performing an edge detection on the image to generate a first edge map. The method also includes performing the edge detection on the depth map to generate a second edge map, performing a skeletonization function on the first edge map to generate a third edge map, and performing the skeletonization function on the depth map to generate a fourth edge map. The method also includes generating a final edge map using the third edge map and the fourth edge map, generating a colorized image by applying color to the final edge map to paint at least one wall of a scene, and displaying the colorized image on a display of the device.
Type: Application
Filed: July 25, 2023
Publication date: February 1, 2024
Applicant: Behr Process Corporation
Inventors: Douglas MILSOM, Un Ho CHUNG, Michael ASKEW, Kiki TAKAKURA-MERQUISE
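A loose sketch of the described pipeline using OpenCV and scikit-image. One caveat: the abstract applies skeletonization to the depth map itself, while this sketch skeletonizes the depth map's edge map, since `skeletonize` expects a binary input; inputs are assumed 8-bit.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

def final_edge_map(image_gray, depth_map):
    # First edge map: edge detection on the uploaded image (8-bit grayscale).
    first = cv2.Canny(image_gray, 50, 150)
    # Second edge map: the same edge detection applied to the depth map.
    second = cv2.Canny(depth_map.astype(np.uint8), 50, 150)
    # Third edge map: skeletonization of the first edge map.
    third = skeletonize(first > 0)
    # Fourth edge map: skeletonization derived from the depth map (here via
    # its edge map, as skeletonize needs a binary image).
    fourth = skeletonize(second > 0)
    # Final edge map combines the two skeletonized maps.
    return np.logical_or(third, fourth)
```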
-
Publication number: 20240037754
Abstract: A method for identifying a material boundary within volumetric image data is based on use of a model boundary transition function, which models the expected progression of voxel values across the material boundary, as a function of distance. Each voxel is taken in turn, and voxel values within a subregion surrounding the voxel are fitted to the model function, and the corresponding fitting parameters are derived, in addition to a parameter relating to quality of the model fit. Based on these parameters for each voxel, for each of at least a subset of the voxels, a candidate spatial point is identified, estimated to lie on the material boundary within the 3-D image dataset. The result is a cloud of candidate spatial points which spatially correspond to the outline of the boundary wall. Based on these, a representation of the boundary wall can be generated, for example a surface mesh.
Type: Application
Filed: December 7, 2021
Publication date: February 1, 2024
Inventors: JÖRG SABCZYNSKI, RAFAEL WIEMKER, TOBIAS KLINDER
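A sigmoid step is one plausible choice for the model boundary transition function (the abstract does not fix the functional form). A SciPy sketch of the per-voxel fit, using an RMS residual to stand in for the fit-quality parameter:

```python
import numpy as np
from scipy.optimize import curve_fit

def boundary_model(d, v_in, v_out, d0, w):
    """Expected voxel value as a function of signed distance d across the
    boundary: a smooth step from v_in to v_out centered at d0 with width w."""
    return v_in + (v_out - v_in) / (1.0 + np.exp(-(d - d0) / w))

def fit_subregion(distances, values):
    """Fit the subregion's voxel values to the model; return the fitting
    parameters plus an RMS residual as the fit-quality parameter."""
    p0 = [values.min(), values.max(), 0.0, 1.0]  # crude initial guess
    params, _ = curve_fit(boundary_model, distances, values, p0=p0, maxfev=5000)
    rms = float(np.sqrt(np.mean((boundary_model(distances, *params) - values) ** 2)))
    return params, rms
```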
-
Publication number: 20240037755
Abstract: An imaging device for imaging a sample includes an excitation unit configured to emit excitation light for exciting a first fluorophore attached to a first feature of the sample and at least a second feature of the sample, and a detection unit configured to receive fluorescence light from the first fluorophore, and generate at least one fluorescence image from the received fluorescence light. The imaging device further includes a controller configured to determine, based on an image segmentation, a first image region of the fluorescence image corresponding to the first feature and a second image region of the fluorescence image corresponding to the second feature, generate a first image based on the first image region and a second image based on the second image region, and/or generate a composite image comprising at least the first image region and the second image region.
Type: Application
Filed: July 5, 2023
Publication date: February 1, 2024
Inventor: Falk Schlaudraff
-
Publication number: 20240037756
Abstract: Apparatuses, systems, and techniques to track one or more objects in one or more frames of a video. In at least one embodiment, one or more objects in one or more frames of a video are tracked based on, for example, one or more sets of embeddings.
Type: Application
Filed: May 5, 2023
Publication date: February 1, 2024
Inventors: De-An Huang, Zhiding Yu, Anima Anandkumar
-
Publication number: 20240037757
Abstract: The present disclosure relates to a method, device and storage medium for post-processing in multi-target tracking. According to an embodiment of the present disclosure, the method comprises making attempts to split a tracklet indicative of a trajectory of a single target by performing operations of: determining a re-identification feature set of an image patch sequence by determining a re-identification feature of each image patch in the image patch sequence of the tracklet; determining whether a candidate identification switch image patch is present in the tracklet based on feature similarities of a plurality of re-identification feature pairs in the re-identification feature set; in a case where a determination result is "yes", verifying whether it is credible that identification-switch has occurred at the candidate identification switch image patch; and in a case where a verification result is "credible", splitting the tracklet into two tracklets based on the candidate identification switch image patch.
Type: Application
Filed: July 11, 2023
Publication date: February 1, 2024
Applicant: Fujitsu Limited
Inventors: Ping WANG, Liuan WANG, Jun SUN
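A minimal sketch of the candidate-detection step, scoring consecutive re-ID feature pairs with cosine similarity; the pairing scheme and threshold are assumptions, not the patent's values, and the verification step is omitted.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def candidate_switch_index(reid_features, threshold=0.5):
    """Return the index of a candidate identification-switch image patch,
    i.e. where similarity between consecutive re-ID features drops below
    the threshold; None if the tracklet looks consistent."""
    for i in range(len(reid_features) - 1):
        if cosine_similarity(reid_features[i], reid_features[i + 1]) < threshold:
            return i + 1  # the tracklet could be split into two here
    return None
```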
-
Publication number: 20240037758
Abstract: A target tracking method, a computer-readable storage medium, and a computer device. The method comprises: matching a target tracking box with target candidate boxes to determine the target candidate box best matching the target tracking box; matching one or more remaining target candidate boxes, except for the best matching target candidate box, with a second target candidate box detected previously to determine a corresponding matching relationship; according to the best matching target candidate box and the corresponding matching relationship, obtaining distances and overlapping relationships respectively between the best matching target candidate box and the one or more remaining target candidate boxes and between the best matching target candidate box and the second target candidate box, so as to determine a shielding relationship between a target and other objects in the current video frame; and determining, according to the shielding relationship, whether to restart target tracking.
Type: Application
Filed: December 16, 2021
Publication date: February 1, 2024
Inventors: Wenjie Jiang, Rui Xu
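The overlapping relationships between boxes can be scored with intersection-over-union; the abstract does not name its overlap metric, so this is a sketch under that assumption.

```python
def iou(a, b):
    """Intersection-over-union of boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def best_match(tracking_box, candidate_boxes):
    """Pick the candidate box best matching the target tracking box."""
    return max(candidate_boxes, key=lambda c: iou(tracking_box, c))
```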
-
Publication number: 20240037759
Abstract: A target tracking method, device, a movable platform and a computer-readable storage medium are provided. The method includes: obtaining a first image containing a target to be tracked, and tracking the target to be tracked based on the first image; if the target to be tracked is lost, obtaining motion information of the target to be tracked when it is lost; based on the motion information, matching a target road area where the target to be tracked is located when it is lost in a vector map; and based on the motion information and the target road area, searching for the lost target to be tracked. The method improves the accuracy of target tracking.
Type: Application
Filed: October 8, 2023
Publication date: February 1, 2024
Applicant: SZ DJI TECHNOLOGY CO., LTD.
Inventor: Xuyang FENG
-
Publication number: 20240037760
Abstract: In one aspect, a method is provided, comprising: receiving an image of a target subject; determining a direction in response to the receipt of the image, the direction being one in which the target subject was likely to move during a time period in the past or is likely to move during a time period in the future; determining a target area within which another image of the target subject can be expected to appear based on the determined direction; and determining if a portion of a subsequent image is outside the determined target area to identify if the subsequent image is one relating to the target subject, wherein the subsequent image is one taken during the time period in the past or during the time period in the future.
Type: Application
Filed: October 11, 2023
Publication date: February 1, 2024
Applicant: NEC Corporation
Inventors: Hui Lam ONG, Hong Yen ONG, Xinlai JIANG
-
Publication number: 20240037761
Abstract: In multimedia object tracking and merging of tracked objects, an object is tracked through frames of multimedia content until a frame appears in which the tracked object is not detected. A first track is designated as one or more consecutive frames in which the tracked object is detected, the first track ending at the first frame. Tracking continues to try to detect the tracked object in a second frame subsequent to the first frame. If the tracked object is not again detected, information about the first track is output. If the tracked object is detected subsequently, a second track of consecutive tracked object detection is designated. The tracked objects in the two tracks are then compared with the aid of trained data models, and a matching score is determined to reflect the degree of match. If the matching score meets or exceeds a first threshold, the compared tracks are merged using the same identifier assigned to both tracks.
Type: Application
Filed: July 29, 2022
Publication date: February 1, 2024
Inventors: Muhammad ADEEL, Thomas GUZIK
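The merging decision itself reduces to a threshold test on the model-derived matching score. A schematic sketch; the track record layout and the threshold value are illustrative assumptions.

```python
def merge_if_matching(track_a, track_b, matching_score, threshold=0.8):
    """If the matching score from the trained data models meets or exceeds
    the threshold, merge the two tracks under one identifier; otherwise
    return None and leave them separate."""
    if matching_score >= threshold:
        return {"id": track_a["id"],
                "frames": track_a["frames"] + track_b["frames"]}
    return None
```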
-
Publication number: 20240037762
Abstract: The present disclosure is related to systems and methods for image processing. The method includes obtaining, from a first image set of a subject acquired at a first time point, a first image; obtaining, from a second image set of the subject acquired at a second time point, a second image, wherein the first image and the second image correspond to a same region of the subject; determining a displacement vector field based on the first image and the second image; and generating a target image based on the displacement vector field, the first image set and the second image set.
Type: Application
Filed: October 8, 2023
Publication date: February 1, 2024
Applicant: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
Inventors: Zhongliang ZHANG, Guotao QUAN, Yuan BAO, Jianwei FU
-
Publication number: 20240037763
Abstract: Systems and methods for dynamically tracking objects, and projecting rendered 3D content onto said objects in real-time. The methods described herein further include image data capture performed by various image-capturing devices, wherein said data is segmented into various components to identify one or more projectors for rendering and projecting 3D content components onto one or more objects.
Type: Application
Filed: October 6, 2023
Publication date: February 1, 2024
Inventors: Robert G. LeGaye, Ian Coyle, Ian LeBlanc, Ben Purdy, Thomas Wester
-
Publication number: 20240037764
Abstract: Systems and techniques are described herein for processing video data. In some examples, a process is described that can include obtaining a plurality of frames, determining a scene cut in the plurality of frames, and determining a smoothed histogram based on the determined scene cut. For instance, the process can include determining a first characteristic of at least a first frame of the plurality of frames and a second characteristic of at least a second frame of the plurality of frames, determining whether a difference between the first characteristic and the second characteristic is greater than a threshold difference, and determining the scene cut based on a determination that the difference between the first characteristic and the second characteristic is greater than the threshold difference.
Type: Application
Filed: September 15, 2021
Publication date: February 1, 2024
Inventors: Shang-Chih CHUANG, Zhongshan WANG, Yi-Chun LU
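A normalized intensity histogram is one plausible per-frame "characteristic"; a sketch of the threshold test, where both the metric and the threshold are assumptions rather than the patent's values:

```python
import numpy as np

def frame_characteristic(frame, bins=64):
    """One plausible per-frame characteristic: a normalized intensity histogram."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def is_scene_cut(first_frame, second_frame, threshold=0.4):
    """Declare a scene cut when the characteristic difference exceeds the threshold."""
    diff = np.abs(frame_characteristic(first_frame)
                  - frame_characteristic(second_frame)).sum()
    return diff > threshold
```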
-
Publication number: 20240037765
Abstract: Disclosed is a high-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection. The method comprises: firstly, by means of the fringe projection technology based on a stereoscopic phase unwrapping method, and with the assistance of an adaptive dynamic depth constraint mechanism, acquiring high-precision three-dimensional (3D) data of an object in real time without any additional auxiliary fringe pattern; and then, after two-dimensional (2D) matching points optimized by means of the corresponding 3D information are rapidly acquired, carrying out, by means of a two-thread parallel mechanism, coarse registration based on Simultaneous Localization and Mapping (SLAM) technology and fine registration based on Iterative Closest Point (ICP) technology.
Type: Application
Filed: August 27, 2020
Publication date: February 1, 2024
Applicant: NANJING UNIVERSITY OF SCIENCE AND TECHNOLOGY
Inventors: Chao ZUO, Jiaming QIAN, Qian CHEN, Shijie FENG, Tianyang TAO, Yan HU, Wei YIN, Liang ZHANG, Kai LIU, Shuaijie WU, Mingzhu XU, Jiaye WANG
-
Publication number: 20240037766
Abstract: Systems and methods for video georegistration are provided. An example method includes: receiving an input image; generating a plurality of templates from the input image; and generating a template queue based at least in part on the plurality of template scores. The plurality of templates are associated with a plurality of template scores. The template queue includes a set of selected templates. The method further includes receiving one or more reference images; determining a set of match scores for the set of selected templates by applying a matching algorithm to the set of selected templates and at least one reference image of the one or more reference images; evaluating the set of match scores to select a collection of templates, and generating an image transform based at least in part on the collection of templates. Each template of the collection of templates meets one or more selection criteria.
Type: Application
Filed: July 26, 2023
Publication date: February 1, 2024
Applicant: Palantir Technologies Inc.
Inventors: Joseph Driscoll, Aleksandr Patsekin, Ethan Van Andel, Mary Cameron, Miles Sackler, Robert Imig, Stephen Ramsey
-
Publication number: 20240037767
Abstract: A 3D sensing system including a liquid crystal lens, a projector, an image sensor and a circuit. The projector provides a light beam to the liquid crystal lens which applies a pattern to the light beam to generate a structured light. The image sensor captures an image corresponding to the structured light. The circuit calculates first depth information according to the pattern and the image, and determines if the image satisfies a quality requirement. If the image does not satisfy the quality requirement, the pattern is modified and another image is captured for calculating second depth information. The first and second depth information are combined to generate a depth map.
Type: Application
Filed: July 31, 2022
Publication date: February 1, 2024
Inventors: Min-Chian WU, Pen-Hsin CHEN, Ching-Wen WANG, Cheng-Che TSAI, Ting-Sheng HSU
-
Publication number: 20240037768
Abstract: A computer-based system may quantify, based on the plurality of instances of a feature indicated by image data, an attribute (e.g., a color, a shape, a material, a texture, etc.) of the plurality of instances of the feature. The system may also quantify an attribute of an instance of the feature of the plurality of instances of the feature. The system may modify the image data to indicate the instance of the feature if/when a value of the quantified attribute of the instance of the feature exceeds a value of the quantified attribute of the plurality of instances of the feature by a threshold. Functionality (e.g., defective, non-defective, potentially defective, etc.) of the unit may be classified based on the modified image data.
Type: Application
Filed: August 1, 2022
Publication date: February 1, 2024
Applicant: LG INNOTEK CO., LTD.
Inventors: Frederick Seng, Harold HWANG, Brian PICCIONE, Kuen-Ting SHIU
-
Publication number: 20240037769
Abstract: Systems and methods for predicting body part measurements of a user from depth images are disclosed. The method first receives a plurality of depth images of a body part of the user from an image-capturing device. Next, the method generates a plurality of individual point clouds based on the plurality of depth images. Next, the method stitches the plurality of individual point clouds into a stitched point cloud, and determines a measurement location based on the stitched point cloud. Finally, the method projects the measurement location to the stitched point cloud, and generates the body part measurement based on the projected measurement location. To determine the measurement location, one embodiment uses a morphed base 3D model, whereas another embodiment uses a 3D keypoint detection algorithm on the stitched point cloud. The method may be implemented on a mobile computing device with a depth sensor.
Type: Application
Filed: December 22, 2021
Publication date: February 1, 2024
Applicant: Visualize K.K.
Inventors: Bryan Hobson Atwood, Chong Jin Koh, Kyohei Kamiyama
-
Publication number: 20240037770
Abstract: Embodiments of this application provide a measurement method and a measurement apparatus. The measurement method includes: acquiring a first image and a second image of a target object, where the first image is acquired by a camera located on a non-backlight side of the target object, and the second image is acquired by a camera located on a backlight side of the target object; and measuring the target object for size information according to the first image and the second image. The technical solution of this application can improve accuracy and precision of inspection while improving production efficiency.
Type: Application
Filed: April 5, 2023
Publication date: February 1, 2024
Inventors: Weilin ZHUANG, Guannan JIANG, Annan SHU
-
Publication number: 20240037771
Abstract: A method and a system for determining a weight estimate of food items include receiving an image of an outer surface of a food item. The image has a plurality of pixels, each having a measure. Based on the measure of the pixels, a surface content of a first tissue, which is distinct from a second tissue, is identified. The surface content is translated into a volume content and a density parameter is recorded. An estimate of the weight is determined based on the volume content, the density parameter, and volume data identifying the volume of the food item.
Type: Application
Filed: December 17, 2022
Publication date: February 1, 2024
Inventors: Anders KJAER, Martin ANDERSEN
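The final step combines volume content and density. A minimal sketch assuming two tissues with known densities; the translation from surface content to volume content, which the patent covers, is not modeled here.

```python
def estimate_weight(total_volume, first_tissue_volume,
                    first_density, second_density):
    """Weight estimate as a density-weighted sum of two tissue volumes.
    Assumed units: volumes in cm^3, densities in g/cm^3, result in grams."""
    second_tissue_volume = total_volume - first_tissue_volume
    return (first_tissue_volume * first_density
            + second_tissue_volume * second_density)
```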
-
Publication number: 20240037772
Abstract: A robot can be moved in a structure that includes a plurality of downward-facing cameras, and, as the robot moves, upward images can be captured with an upward-facing camera mounted to the robot. Downward images can be captured with the respective downward-facing cameras. Upward-facing camera poses can be determined at respective times based on the upward images. Further, respective poses of the downward-facing cameras can be determined based on (a) describing motion of the robot from the downward images, and (b) the upward-facing camera poses determined from the upward images.
Type: Application
Filed: July 27, 2022
Publication date: February 1, 2024
Applicant: Ford Global Technologies, LLC
Inventors: Subodh Mishra, Punarjay Chakravarty, Sagar Manglani, Sushruth Nagesh, Gaurav Pandey
-
Publication number: 20240037773
Abstract: A technique is described herein. The technique includes receiving endpoint location data in a geometry shader; processing the endpoint location data to calculate Bezier curve control points; and based on the Bezier curve control points, determining estimated electrode positions for the catheter.
Type: Application
Filed: July 26, 2022
Publication date: February 1, 2024
Applicant: Biosense Webster (Israel) Ltd.
Inventor: Assaf Govari
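For context, electrode positions can be sampled along the cubic Bezier curve defined by the computed control points. A NumPy sketch of the curve evaluation only; the control-point computation is the patent's geometry-shader step and is not reproduced.

```python
import numpy as np

def cubic_bezier_points(p0, p1, p2, p3, n=16):
    """Sample n points along the cubic Bezier curve with endpoints p0, p3
    and control points p1, p2 (each a length-2 or length-3 array)."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)
```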
-
Publication number: 20240037774
Abstract: A technique for determining the three-dimensional position of radiation interaction in a scintillator is disclosed. The method comprises detecting a scintillation event within a scintillator to produce a measured detector response, by using a photodetector that has a planar surface optically coupled to the scintillator and that has a plurality of pixels defined on the planar surface. The method further comprises calculating a spatial distribution of photons, resulting from the scintillation event, across the planar surface of the detector, and determining an angle-dependent quantum efficiency of the photodetector, associated with the scintillation event.
Type: Application
Filed: July 29, 2022
Publication date: February 1, 2024
Inventor: Kenneth Edward Gregorich
-
Publication number: 20240037775
Abstract: A method of creating training data for model training is provided. The method includes: receiving image data including at least one fashion item; performing location box labeling on an item location box which indicates a location of an item included in the image data by using an item location detection model; calculating a location box labeling result value and a location box labeling confidence value; receiving a user's location box review value for the location box labeling result value; determining a location noise value of the location box review value by using the item location detection model; and determining the location box review value as location box training data if the location noise value meets a predetermined first criterion.
Type: Application
Filed: June 22, 2023
Publication date: February 1, 2024
Applicant: Omnious Co., Ltd.
Inventors: Saad IMRAN, Hyung Won CHOI
-
Publication number: 20240037776
Abstract: An analysis system includes: a first apparatus that detects a position of a person in a space in which an object is located; a second apparatus that is different from the first apparatus and detects a position of the object; and an analysis apparatus that analyzes a behavior of the person, wherein the analysis apparatus analyzes the behavior of the person in response to the object based on a relationship between the position of the person acquired from the first apparatus and the position of the object acquired from the second apparatus.
Type: Application
Filed: August 23, 2021
Publication date: February 1, 2024
Inventors: Noriyasu YAMADA, Yasuhiro MISU, Shogo HARA
-
Publication number: 20240037777
Abstract: An information processing apparatus includes an interface configured to acquire an image of a first user, information regarding posture or motion of the first user, and information regarding posture or motion of a second user, and a controller configured to generate difference information regarding a difference between the posture or motion of the first user and the posture or motion of the second user. The controller is configured to notify the second user of the difference information while controlling a display device to display the image of the first user or an image of the second user.
Type: Application
Filed: July 20, 2023
Publication date: February 1, 2024
Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventor: Tatsuro HORI
-
Publication number: 20240037778
Abstract: Systems and methods are provided for increasing accuracy of video analytics tasks in real-time by acquiring a video using video cameras, and identifying fluctuations in the accuracy of video analytics applications across consecutive frames of the video. The identified fluctuations are quantified based on an average relative difference of true-positive detection counts across consecutive frames. Fluctuations in accuracy are reduced by applying transfer learning to a deep learning model initially trained using images, and retraining the deep learning model using video frames. A quality of object detections is determined based on an amount of track-ids assigned by a tracker across different video frames. Optimization of the reduction of fluctuations includes iteratively repeating the identifying, the quantifying, the reducing, and the determining the quality of object detections until a threshold is reached. Model predictions for each frame in the video are generated using the retrained deep learning model.
Type: Application
Filed: July 28, 2023
Publication date: February 1, 2024
Inventors: Kunal Rao, Giuseppe Coviello, Murugan Sankaradas, Oliver Po, Srimat Chakradhar, Sibendu Paul
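The quantification step has a direct reading: the average relative difference of true-positive detection counts across consecutive frames. A sketch of that metric (the zero-count guard is an added assumption):

```python
import numpy as np

def accuracy_fluctuation(true_positive_counts):
    """Average relative difference of true-positive detection counts across
    consecutive frames, as the abstract describes."""
    tp = np.asarray(true_positive_counts, dtype=float)
    rel = np.abs(np.diff(tp)) / np.maximum(tp[:-1], 1.0)  # guard zero counts
    return float(rel.mean())
```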
-
Publication number: 20240037779
Abstract: A position detection device includes processing circuitry: to receive an image captured by a monitoring camera, to execute a process for detecting a person in the image, and to output two-dimensional camera coordinates indicating a position of the detected person; to transform the two-dimensional camera coordinates to three-dimensional coordinates; to recognize a character string on a nameplate of a device in a wearable camera image captured by a wearable camera; to search a layout chart of the device for the recognized character string; to determine two-dimensional map coordinates based on a position where the character string is found when the recognized character string is found in the layout chart, and to calculate the two-dimensional map coordinates based on the three-dimensional coordinates when the recognized character string is not found in the layout chart; and to output image data in which position information is superimposed on a map.
Type: Application
Filed: October 5, 2023
Publication date: February 1, 2024
Applicant: Mitsubishi Electric Corporation
Inventors: Takeo KAWAURA, Takahiro KASHIMA, Sohei OSAWA
-
Publication number: 20240037780
Abstract: This application discloses an object recognition method performed by an electronic device. The method includes: simultaneously acquiring an infrared image and a visible image for a target object; obtaining depth information of reference pixel points in the infrared image relative to the target object; obtaining depth information of other pixel points in the infrared image relative to the target object according to position information of the reference pixel points in the infrared image and the depth information of the reference pixel points in the infrared image; aligning the pixel points in the infrared image with pixel points in the visible image based on the depth information of the pixel points in the infrared image; and performing object recognition on the target object based on the aligned infrared image and visible image, to obtain an object recognition result of the target object.
Type: Application
Filed: October 10, 2023
Publication date: February 1, 2024
Inventors: Zheming HONG, Shaoming Wang, Runzeng Guo
-
Publication number: 20240037781
Abstract: An electronic device according to an embodiment may include a camera and a processor. The processor may be configured to obtain, from an image obtained using the camera, two-dimensional coordinate values representing vertices of a portion related to an external object. The processor may be configured to identify a first line extending along a reference direction on a reference plane included in the image and overlapping a first vertex of the vertices, and a second line connecting a reference point within the image and a second vertex of the vertices. The processor may be configured to, based on an intersection of the first line and the second line, obtain three-dimensional coordinate values representing each of vertices of a three-dimensional external space corresponding to the external object.
Type: Application
Filed: July 27, 2023
Publication date: February 1, 2024
Applicant: THINKWARE CORPORATION
Inventor: Sukpil KO
-
Publication number: 20240037782
Abstract: Systems, devices, media, and methods are presented for object modeling using augmented reality. An object modeling mode for generating three-dimensional models of objects is initiated by one or more processors of a device. The processors of the device detect an object within a field of view. Based on a position of the object, the processors select a set of movements forming a path for the device relative to the object and cause presentation of at least one of the movements. The processors detect a set of object surfaces as portions of the object are positioned in the field of view. In response to detecting at least a portion of the object surface, the processors modify a graphical depiction of a portion of the object. The processors then construct a three-dimensional model of the object from the set of images, depth measurements, and IMU readings collected during the reconstruction process.
Type: Application
Filed: August 14, 2023
Publication date: February 1, 2024
Inventors: Ivan Kolesov, Alex Villanueva, Liangjia Zhu
-
Publication number: 20240037783
Abstract: An information processing system includes: an acquisition unit that obtains a both-eye image, which is an image of a face containing both eyes, from a target; and a detection unit that detects an eye position of the target in the both-eye image on the basis of a result of learning that uses a one-eye image containing only one of the eyes and the both-eye image. According to such an information processing system, the eye position of the target contained in the both-eye image can be detected with high accuracy.
Type: Application
Filed: July 30, 2021
Publication date: February 1, 2024
Inventor: Yuka OGINO
-
Publication number: 20240037784
Abstract: A method is provided for calibrating a structured light system which comprises a projector, a camera and at least one processor, wherein the projector emits light in an unknown pattern. The method comprises projecting by the projector an unknown pattern at at least two different distances relative to the camera's location, capturing by the camera the patterns projected at the different distances, determining vertical disparity between the captured images and estimating a relative orientation between the camera and the projector, thereby enabling calibration of the structured light system.
Type: Application
Filed: July 29, 2022
Publication date: February 1, 2024
Inventor: Eliyahu HAYAT LEVI
-
Publication number: 20240037785
Abstract: Provided is a method, which is performed by one or more processors, and includes receiving a first image captured at a specific location by a first monocular camera mounted on a robot, receiving a second image captured at the specific location by a second monocular camera mounted on the robot, receiving information on the specific location, detecting one or more location codes based on at least one of the first image or the second image, and estimating information on location of each of the one or more location codes based on the first image, the second image, and the information on the specific location.
Type: Application
Filed: July 27, 2023
Publication date: February 1, 2024
Inventors: Seokhoon Jeong, Hoyeon Yu, Sucheol Lee, Seung Hoon Lee
-
Publication number: 20240037786
Abstract: Search points in a search space may be projected onto images from cameras imaging different parts of the search space. Subimages, corresponding to the projected search points, may be selected and processed to determine if a target object has been detected. Based on subimages in which target objects are detected, as well as orientation data from cameras capturing images from which the subimages were selected, positions of the target objects in the search space may be determined.
Type: Application
Filed: August 24, 2023
Publication date: February 1, 2024
Applicant: Science Applications International Corporation
Inventors: Stephen Eric Bramlett, Michael Harris Rodgers
-
Publication number: 20240037787
Abstract: A corneal reflected image identifying device and gaze detection device enabling detection of a gaze with a high precision even when a person being monitored is wearing an object covering the eyes, in particular a corneal reflected image identifying device comprising a light projecting part 110 for projecting light from two light sources 110a, 110b which are spaced apart from each other toward an eye, a camera 120 for capturing an image of the eye on which light is projected, a distance calculating part 152d for using the captured image to calculate a distance between two reflected images for each combination of two reflected images among a plurality of reflected images obtained by light projected from the light sources 110a, 110b being reflected in a vicinity of the eye, and a corneal reflected image identifying part 152g for excluding a reflected image of light projected from the light sources 110a, 110b being reflected by an object covering the eye based on predetermined features and identifying a corneal reflected image.
Type: Application
Filed: July 25, 2023
Publication date: February 1, 2024
Inventor: Seiya Ishiguro
-
Publication number: 20240037788
Abstract: An autoencoder may be trained to predict 3D pose labels using simulation data extracted from a simulated environment, which may be configured to represent an environment in which the 3D pose estimator is to be deployed. Assets may be used to mimic the deployment environment such as 3D models or textures and parameters used to define deployment scenarios and/or conditions that the 3D pose estimator will operate under in the environment. The autoencoder may be trained to predict a segmentation image from an input image that is invariant to occlusions. Further, the autoencoder may be trained to exclude areas of the input image from the object that correspond to one or more appendages of the object. The 3D pose estimator may be adapted to unlabeled real-world data using a GAN, which predicts whether output of the 3D pose estimator was generated from real-world data or simulated data.
Type: Application
Filed: October 16, 2023
Publication date: February 1, 2024
Inventors: Sravya Nimmagadda, David Weikersdorfer
-
Publication number: 20240037789
Abstract: According to one embodiment, an estimation device includes an estimation unit, a control unit, and a SLAM unit. The estimation unit estimates whether motion of a moving object is pure rotation from a plurality of first monocular images acquired by a monocular camera mounted on the moving object. The control unit determines the translation amount for translating the monocular camera in a case in which the motion of the moving object is pure rotation. The SLAM unit estimates at least one of a location or an orientation of the monocular camera, and three-dimensional points of a subject of the monocular camera by monocular simultaneous localization and mapping (SLAM), from the first monocular images and a second monocular image acquired by the monocular camera that is translated based on the translation amount.
Type: Application
Filed: March 9, 2023
Publication date: February 1, 2024
Applicant: Kabushiki Kaisha Toshiba
Inventor: Misato Nakamura
-
Publication number: 20240037790
Abstract: A method comprises: receiving first output values from a first sensor of a vehicle, the first output values reflecting a position of an object external to the vehicle; receiving second output values from a second sensor of the vehicle, the second output values reflecting the position of the object; determining a measurement difference regarding the position of the object based on the first and second output values; and performing an action regarding at least one of the first or second sensors based on determining the measurement difference.
Type: Application
Filed: June 29, 2023
Publication date: February 1, 2024
Inventor: Bei Yan
-
Publication number: 20240037791
Abstract: Provided are an apparatus and method for generating a depth map by using a volumetric feature. The method may generate a single feature map for a base image included in a surround-view image by performing encoding and postprocessing on the base image, and generate a volumetric feature by encoding the single feature map with depth information and then projecting a result of the encoding into a three-dimensional space. In this method, a depth map of a surround-view image may be generated by using a depth decoder to decode a volumetric feature.
Type: Application
Filed: July 20, 2023
Publication date: February 1, 2024
Inventors: Seong Gyun JEONG, Jung Hee KIM, Phuoc Tien Nguyen, Jun Hwa HUR
-
Publication number: 20240037792
Abstract: Optical center is determined on a column-by-column and row-by-row basis by identifying brightest pixels in respective columns and rows. The brightest pixels in each column are identified and a line is fit to those pixels. Similarly, brightest pixels in each row are identified and a second line is fit to those pixels. The intersection of the two lines is the optical center.
Type: Application
Filed: October 9, 2023
Publication date: February 1, 2024
Inventors: Hugh Phu Nguyen, Paul Kalapathy
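This abstract maps directly onto two least-squares line fits and an intersection. A NumPy sketch; real sensor data would also need outlier rejection, which is omitted here.

```python
import numpy as np

def optical_center(image):
    """image: 2-D array of pixel intensities. Returns (x, y) of the optical
    center as the intersection of two fitted lines."""
    rows, cols = image.shape
    # Brightest pixel in each column -> fit the line y = a1 * x + b1.
    a1, b1 = np.polyfit(np.arange(cols), image.argmax(axis=0), 1)
    # Brightest pixel in each row -> fit the line x = a2 * y + b2.
    a2, b2 = np.polyfit(np.arange(rows), image.argmax(axis=1), 1)
    # Intersection: substitute y = a1*x + b1 into x = a2*y + b2.
    x = (a2 * b1 + b2) / (1.0 - a1 * a2)
    y = a1 * x + b1
    return x, y
```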
-
Publication number: 20240037793
Abstract: Systems, apparatus, and methods for piggyback camera calibration. Existing piggybacked capture techniques use a "beauty camera" and an "action camera" to capture raw footage. The user directly applies the EIS stabilization track of a piggybacked action camera to the cinematic footage to create desired stable footage. Unfortunately, since the action camera may have been slightly offset from the cinematic video camera, the EIS stabilization data will only roughly approximate the necessary corrections. In other words, the user must manually fine tune the corrections. The disclosed embodiments use a calibration sequence to estimate a physical offset between the beauty camera and the action camera. Then, the estimated physical offset can be used to calculate an offset camera orientation for stabilizing the beauty camera. The foregoing process can be performed in-the-field before actual capture. This allows the user to check their set-up and fix any issues before capturing the desired footage.
Type: Application
Filed: July 27, 2022
Publication date: February 1, 2024
Applicant: GoPro, Inc.
Inventors: Andrew Russell, Robert McIntosh
-
Publication number: 20240037794
Abstract: Certain embodiments provide a method of generating a color-corrected ophthalmic image. The method comprises obtaining an ophthalmic image of an eye with a nuclear sclerotic cataract or a tinted intraocular lens ("IOL"). The method further comprises determining color-shift information associated with the ophthalmic image based on at least one of user input and by processing the obtained image, the color-shift information indicative of the extent to which color in the obtained image is to be corrected to at least partially remedy the effect of the nuclear sclerotic cataract or the tinted IOL on the color of the obtained image. The method further comprises color-correcting the ophthalmic image based on the color-shift information. The method further comprises generating the color-corrected ophthalmic image based on the color-correcting.
Type: Application
Filed: July 25, 2023
Publication date: February 1, 2024
Inventor: Steven T. Charles
-
Publication number: 20240037795
Abstract: A color calibration method includes acquiring first pictures in a first color space, a first picture being generated by one pure color; converting brightness of the first pictures from the first color space to a second color space to obtain second pictures in the second color space to display through a display device; acquiring photographed pictures by photographing the second pictures, the photographed pictures corresponding to a third color space; converting the photographed pictures from the third color space to the first color space to obtain photographed pictures in the first color space, and determining photographing color information corresponding to the photographed pictures in the first color space; and determining a difference between standard color information of the first color space and the photographing color information, and determining color calibration information according to the difference, the color calibration information being used for performing color calibration on a picture.
Type: Application
Filed: October 14, 2023
Publication date: February 1, 2024
Inventor: Rui LI
-
Publication number: 20240037796
Abstract: A method for coding includes: obtaining original point cloud data; creating an adaptive prediction list of the attribute information of the point cloud; selecting a prediction mode from the adaptive prediction list and predicting the attribute information of the point cloud, to obtain a predicted residual; and coding the prediction mode and the predicted residual, to obtain codestream information.
Type: Application
Filed: May 18, 2022
Publication date: February 1, 2024
Inventors: Wei Zhang, Fuzheng Yang, Zexing Sun, Junyan Huo
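Schematically, mode selection picks the entry of the adaptive prediction list that minimizes the residual; a sketch with an illustrative candidate list (the patent's construction of the list from neighboring points is not reproduced):

```python
import numpy as np

def choose_prediction_mode(prediction_list, actual_attribute):
    """Select the mode (index into the adaptive prediction list) whose
    candidate value minimizes the predicted residual; return the mode and
    the residual, both of which would then be coded into the codestream."""
    candidates = np.asarray(prediction_list, dtype=float)
    mode = int(np.argmin(np.abs(candidates - actual_attribute)))
    return mode, float(actual_attribute - candidates[mode])
```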
-
Publication number: 20240037797
Abstract: An image decoding device that: receives, from an image encoding device, a bitstream including encoded data of a plurality of feature maps for an image; decodes the plurality of feature maps using the bitstream; selects a first feature map from the plurality of decoded feature maps; outputs the first feature map to a first task processing device that executes a first task process based on the first feature map; selects a second feature map from the plurality of decoded feature maps; and outputs the second feature map to a second task processing device that executes a second task process based on the second feature map.
Type: Application
Filed: October 16, 2023
Publication date: February 1, 2024
Inventors: Han Boon TEO, Chong Soon LIM, Chu Tong WANG, Kiyofumi ABE