Patents Issued on December 24, 2020
-
Publication number: 20200402237
Abstract: Embodiments of the disclosure provide systems and methods for generating a diagnosis report based on a medical image of a patient. The system includes a communication interface configured to receive the medical image acquired by an image acquisition device. The system further includes at least one processor. The at least one processor is configured to detect a medical condition based on the medical image and automatically generate text information describing the medical condition. The at least one processor is further configured to construct the diagnosis report, where the diagnosis report includes at least one image view showing the medical condition and a report view including the text information describing the medical condition. The system also includes a display configured to display the diagnosis report.
Type: Application
Filed: September 6, 2020
Publication date: December 24, 2020
Inventors: Qi Song, Hanbo Chen, Zheng Te, Youbing Yin, Junjie Bai, Shanhui Sun
-
Publication number: 20200402238
Abstract: Provided is a medical image processing apparatus that generates a color image by using one type of specific color image obtained by imaging a subject with specific monochromatic light. The medical image processing apparatus (10) includes an image acquisition unit (medical image acquisition unit (11)) that acquires a specific color image (56) obtained by imaging a subject with specific monochromatic light, and a color image generation unit (12) that generates a color image from the specific color image by assigning the specific color image (56) to a plurality of color channels and adjusting a balance of each of the color channels.
Type: Application
Filed: September 8, 2020
Publication date: December 24, 2020
Applicant: FUJIFILM Corporation
Inventor: Tatsuya AOYAMA
-
Publication number: 20200402239
Abstract: The disclosure relates to systems and methods for evaluating a blood vessel. The method includes receiving image data of the blood vessel acquired by an image acquisition device, and predicting, by a processor, blood vessel condition parameters of the blood vessel by applying a deep learning model to the acquired image data of the blood vessel. The deep learning model maps a sequence of image patches on the blood vessel to blood vessel condition parameters on the blood vessel, where, in the mapping, the entire sequence of image patches contributes to the blood vessel condition parameters. The method further includes providing the blood vessel condition parameters of the blood vessel for evaluating the blood vessel.
Type: Application
Filed: September 8, 2020
Publication date: December 24, 2020
Applicant: SHENZHEN KEYA MEDICAL TECHNOLOGY CORPORATION
Inventors: Xin Wang, Youbing Yin, Kunlin Cao, Yuwei Li, Junjie Bai, Xiaoyang Xu
-
Publication number: 20200402240
Abstract: A scanning window is used to scan an image frame of a sensor when doing object detection. In one approach, positions within the image frame are stored in memory. Each position corresponds to an object detection at that position for a prior frame of data. A first area of the image frame is determined based on the stored positions. When starting to analyze a new frame of data, the first area is scanned to detect at least one object. After scanning within the first area, at least one other area of the new image frame is scanned.
Type: Application
Filed: June 21, 2019
Publication date: December 24, 2020
Inventor: Gil Golov
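The prioritized scan order described above is easy to picture in code. Below is a minimal sketch, assuming a caller-supplied detect(window) classifier and stored prior-hit positions; it illustrates the idea of scanning the prior-detection area first, and is not the patented implementation.

```python
import numpy as np

def scan_frame(frame, prior_hits, detect, win=64, stride=32):
    """Scan windows near stored prior-hit positions first, then the rest."""
    h, w = frame.shape[:2]
    positions = [(y, x) for y in range(0, h - win + 1, stride)
                        for x in range(0, w - win + 1, stride)]

    def near_prior(pos, radius=win):
        # "First area": any window within one window-size of a prior detection.
        return any(abs(pos[0] - py) <= radius and abs(pos[1] - px) <= radius
                   for py, px in prior_hits)

    first = [p for p in positions if near_prior(p)]
    rest = [p for p in positions if not near_prior(p)]
    hits = []
    for y, x in first + rest:          # prioritized scan order
        if detect(frame[y:y+win, x:x+win]):
            hits.append((y, x))
    return hits
```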
-
Publication number: 20200402241
Abstract: Systems and methods are disclosed for anatomic structure segmentation in image analysis, using a computer system. One method includes: receiving an annotation and a plurality of keypoints for an anatomic structure in one or more images; computing distances from the plurality of keypoints to a boundary of the anatomic structure; training a model, using data in the one or more images and the computed distances, for predicting a boundary in the anatomic structure in an image of a patient's anatomy; receiving the image of the patient's anatomy including the anatomic structure; estimating a segmentation boundary in the anatomic structure in the image of the patient's anatomy; and predicting, using the trained model, a boundary location in the anatomic structure in the image of the patient's anatomy by generating a regression of distances from keypoints in the anatomic structure in the image of the patient's anatomy to the estimated boundary.
Type: Application
Filed: September 8, 2020
Publication date: December 24, 2020
Applicant: Heartflow, Inc.
Inventors: Leo GRADY, Peter Kersten PETERSEN, Michiel SCHAAP, David LESAGE
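The final regression step can be illustrated with a toy sketch. Here each keypoint is assumed to carry a unit ray direction toward the boundary, so the regressed distance places one boundary sample per keypoint; this is a simplification for illustration, not Heartflow's implementation.

```python
import numpy as np

def boundary_from_regression(keypoints, ray_dirs, predicted_distances):
    """keypoints: (N, 2); ray_dirs: (N, 2) unit vectors; distances: (N,)."""
    keypoints = np.asarray(keypoints, float)
    ray_dirs = np.asarray(ray_dirs, float)
    d = np.asarray(predicted_distances, float)[:, None]
    return keypoints + d * ray_dirs   # one boundary sample per keypoint
```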
-
Publication number: 20200402242
Abstract: Provided are a method and apparatus for analyzing an image, an electronic device, and a readable storage medium. The method includes: obtaining an image to be analyzed, the image including a target object; segmenting the image based on a pre-configured full convolution network to obtain multiple regions of the target object; obtaining a minimum circumscribed geometric frame of each region; extracting a feature of a corresponding region of each minimum circumscribed geometric frame based on a pre-configured convolution neural network and connecting the features of the corresponding regions of the minimum circumscribed geometric frames to obtain a target object feature of the target object; and comparing the target object feature against an image feature of each image in a pre-stored image library and outputting an image analysis result for the image to be analyzed according to a comparison result.
Type: Application
Filed: August 13, 2018
Publication date: December 24, 2020
Inventor: Lei Zhang
-
Publication number: 20200402243
Abstract: Techniques related to video background estimation inclusive of generating a final background picture absent foreground objects based on input video are discussed. Such techniques include generating first and second estimated background pictures using temporal and spatial background picture modeling, respectively, and fusing the first and second estimated background pictures based on first and second confidence maps corresponding to the first and second estimated background pictures to generate the final estimated background picture.
Type: Application
Filed: September 3, 2020
Publication date: December 24, 2020
Applicant: Intel Corporation
Inventors: Itay Benou, Yevgeny Priziment, Tzachi Herskovich
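The fusion step lends itself to a small sketch. Assuming per-pixel confidence maps in [0, 1] for the temporal and spatial estimates, a confidence-weighted blend is one plausible reading of the abstract; the actual fusion rule is not specified here.

```python
import numpy as np

def fuse_backgrounds(bg_temporal, bg_spatial, conf_t, conf_s, eps=1e-6):
    """Blend two background estimates by their per-pixel confidences."""
    w_t = conf_t / (conf_t + conf_s + eps)   # normalized temporal weight
    return w_t * bg_temporal + (1.0 - w_t) * bg_spatial
```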
-
Publication number: 20200402244
Abstract: A method of registering three-dimensional (3D) point clouds may include obtaining a first 3D point cloud acquired at a first location; obtaining a second 3D point cloud acquired at a second location; calculating a first normal vector for each point of the first 3D point cloud to create a plurality of normal vectors; calculating, for each point of the first 3D point cloud, a normal deviation amount of the corresponding normal vector to other normal vectors in a predetermined neighborhood of the point; selecting, from the first 3D point cloud, a first registration region based on whether the normal deviation amount of each point meets a deviation threshold; and registering the first 3D point cloud and the second 3D point cloud to create the composite 3D point cloud, the registration utilizing the first registration region in place of the first 3D point cloud.
Type: Application
Filed: June 19, 2019
Publication date: December 24, 2020
Inventor: Daniel Flohr
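A rough sketch of the region-selection step, assuming normals are already computed per point; whether low- or high-deviation points are preferred is left to the threshold, and here locally consistent (low-deviation) points are kept. The registration itself (e.g., ICP) is omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def select_registration_region(points, normals, radius=0.05, max_dev_deg=15.0):
    points = np.asarray(points, float)
    normals = np.asarray(normals, float)
    tree = cKDTree(points)
    keep = np.zeros(len(points), dtype=bool)
    for i, (p, n) in enumerate(zip(points, normals)):
        idx = tree.query_ball_point(p, radius)   # predetermined neighborhood
        if len(idx) < 2:
            continue
        cos = np.clip(normals[idx] @ n, -1.0, 1.0)
        dev = np.degrees(np.arccos(cos)).mean()  # mean normal deviation
        keep[i] = dev <= max_dev_deg             # deviation threshold
    return points[keep]   # region used in place of the full first cloud
```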
-
Publication number: 20200402245
Abstract: The present disclosure relates to a patient motion tracking system for automatic generation of a region of interest on a 3D surface of a patient positioned in a radiotherapy treatment room. More particularly, the disclosure relates to an assistive approach of a motion tracking system, by which a region of interest (ROI) is automatically generated on a generated 3D surface of the patient. Furthermore, a method for automatically generating a ROI on the 3D surface of the patient is described. In particular, all the embodiments refer to systems integrating methods for automatic ROI generation in a radiotherapy treatment setup.
Type: Application
Filed: June 23, 2020
Publication date: December 24, 2020
Applicant: Vision RT Limited
Inventor: Kevin KERAUDREN
-
METHOD AND APPARATUS FOR PREDICTING DEPTH COMPLETION ERROR-MAP FOR HIGH-CONFIDENCE DENSE POINT-CLOUD
Publication number: 20200402246
Abstract: Methods and systems may be used for obtaining a high-confidence point-cloud. The method includes obtaining three-dimensional sensor data. The three-dimensional sensor data may be raw data. The method includes projecting the raw three-dimensional sensor data to a two-dimensional image space. The method includes obtaining sparse depth data of the two-dimensional image. The method includes obtaining a predicted depth map. The predicted depth map may be based on the sparse depth data. The method includes obtaining a predicted error-map. The predicted error map may be based on the sparse depth data. The method includes outputting a high-confidence point-cloud. The high-confidence point-cloud may be based on the predicted depth map and the predicted error-map.
Type: Application
Filed: June 24, 2019
Publication date: December 24, 2020
Applicant: Great Wall Motor Company Limited
Inventors: Hamid Hekmatian, Samir Al-Stouhi, Jingfu Jin
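The masking step that turns a predicted depth map plus a predicted error map into a high-confidence point cloud can be sketched directly, assuming a pinhole camera with intrinsics fx, fy, cx, cy (not stated in the abstract):

```python
import numpy as np

def high_confidence_cloud(depth, error, fx, fy, cx, cy, max_err=0.5):
    v, u = np.nonzero(error < max_err)   # keep high-confidence pixels only
    z = depth[v, u]
    x = (u - cx) * z / fx                # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)   # (N, 3) high-confidence point cloud
```
-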
Publication number: 20200402247
Abstract: One variation of the method includes: serving setup frames to a projector facing a scene; at a peripheral control module comprising a camera facing the scene, recording a set of images during projection of corresponding setup frames onto the scene by the projector and a baseline image depicting the scene in the field of view of the camera; calculating a pixel correspondence map based on the set of images and the setup frames; transforming the baseline image into a corrected color image—depicting the scene in the field of view of the camera—based on the pixel correspondence map; linking visual assets to discrete regions in the corrected color image; generating augmented reality frames depicting the visual assets aligned with these discrete regions; and serving the augmented reality frames to the projector to cast depictions of the visual assets onto surfaces, in the scene, corresponding to these discrete regions.
Type: Application
Filed: May 11, 2020
Publication date: December 24, 2020
Inventors: Kevin Karsch, Rajinder Sodhi, Brett Jones, Pulkit Budhiraja, Phil Reyneri, Douglas Rieck, Andrew Kilkenny, Ehsan Noursalehi, Derek Nedelman, Laura LaPerche, Brittany Factura
-
Publication number: 20200402248
Abstract: Embodiments generally relate to a machine-implemented method of automatically adjusting the range of a depth data recording executed by at least one processing device. The method comprises determining, by the at least one processing device, at least one position of a subject to be recorded; determining, by the at least one processing device, at least one spatial range based on the position of the subject; receiving depth information; and constructing, by the at least one processing device, a depth data recording based on the received depth information limited by the at least one spatial range.
Type: Application
Filed: May 29, 2020
Publication date: December 24, 2020
Inventors: Glen Siver, David Gregory Jones
-
Publication number: 20200402249
Abstract: An object distance measurement apparatus may include: a camera to capture an image of an area around a vehicle; a distance sensor to detect a distance from an object by scanning around the vehicle; and a distance measurement unit that detects a vehicle moving distance using vehicle information generated by operation of the vehicle, and measures the distance from the object in response to each of frames between scan periods of the distance sensor, among frames of the image, based on the vehicle moving distance and the location pixel coordinates of the object within the images before and after the vehicle moves.
Type: Application
Filed: June 22, 2020
Publication date: December 24, 2020
Applicant: HYUNDAI AUTRON CO., LTD.
Inventor: Kee-Beom KIM
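Under a strong simplifying assumption (purely lateral camera translation equal to the vehicle moving distance), the before/after pixel coordinates behave like a stereo pair, which gives a toy version of the distance computation; the patented method is more general than this.

```python
def object_distance(x_before, x_after, moving_distance_m, focal_px):
    """Motion-stereo toy: depth from pixel shift across a known baseline."""
    disparity = abs(x_after - x_before)   # pixel shift between the two frames
    if disparity == 0:
        raise ValueError("object did not shift; distance unobservable")
    return moving_distance_m * focal_px / disparity   # depth in metres
```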
-
Publication number: 20200402250
Abstract: A system includes a neural network implemented by one or more computers, in which the neural network includes an image depth prediction neural network and a camera motion estimation neural network. The neural network is configured to receive a sequence of images. The neural network is configured to process each image in the sequence of images using the image depth prediction neural network to generate, for each image, a respective depth output that characterizes a depth of the image, and to process a subset of images in the sequence of images using the camera motion estimation neural network to generate a camera motion output that characterizes the motion of a camera between the images in the subset. The image depth prediction neural network and the camera motion estimation neural network have been jointly trained using an unsupervised learning technique.
Type: Application
Filed: September 3, 2020
Publication date: December 24, 2020
Inventors: Anelia Angelova, Martin Wicke, Reza Mahjourian
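For context, joint unsupervised training of depth and ego-motion networks is typically driven by a view-synthesis (photometric) loss; the following is a generic sketch of that objective, not necessarily the exact loss claimed here. D_t is the predicted depth, T_{t→t+1} the predicted camera motion, K the camera intrinsics, π the perspective projection, and p̃ the homogeneous pixel coordinate.

```latex
% Generic photometric view-synthesis loss for unsupervised depth/ego-motion:
L_{\text{photo}} = \sum_{p} \bigl\lVert I_t(p)
  - I_{t+1}\bigl(\pi\bigl(K \, T_{t\to t+1} \, D_t(p) \, K^{-1} \tilde{p}\bigr)\bigr) \bigr\rVert_1
```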
-
Publication number: 20200402251
Abstract: Disclosed is an electronic device including a learning model trained according to an artificial intelligence algorithm. An electronic device according to the present disclosure may comprise: an input unit; and a processor which, when a two-dimensional image including at least one object is received through the input unit, acquires first depth information relating to at least one object by applying the two-dimensional image to a first learning model, acquires second depth information relating to the at least one object by applying the first depth information and actually measured depth data of the at least one object to a second learning model, and acquires three-dimensional information relating to the two-dimensional image on the basis of the second depth information, wherein the first depth information is implemented to include depth data according to a type of the at least one object.
Type: Application
Filed: January 3, 2019
Publication date: December 24, 2020
Inventors: Daehyun BAN, Woojin PARK, Seongwon HAN
-
Publication number: 20200402252
Abstract: Using data about the geometry of the wafer, the geometry of the wafer is measured along at least three diameters originating at different points along a circumference of the wafer. A characterization of the geometry of the wafer is determined using the three diameters. A probability of wafer clamping failure for the wafer can be determined based on the characterization.
Type: Application
Filed: June 9, 2020
Publication date: December 24, 2020
Inventors: Shivam Agarwal, Priyank Jain, Yuan Zhong, Chiou Shoei Chee
-
Publication number: 20200402253
Abstract: A method and apparatus for estimating a user's head pose relative to a sensing device. The sensing device detects a face of the user in an image. The sensing device further identifies a plurality of points in the image corresponding to respective features of the detected face. The plurality of points includes at least a first point corresponding to a location of a first facial feature. The sensing device determines a position of the face relative to the sensing device based at least in part on a distance between the first point in the image and one or more of the remaining points. For example, the sensing device may determine a pitch, yaw, distance, or location of the user's face relative to the sensing device.
Type: Application
Filed: June 24, 2019
Publication date: December 24, 2020
Inventors: Boyan IVANOV BONEV, Utkarsh GAUR
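One of the landmark-distance cues the abstract alludes to can be illustrated with back-of-envelope geometry: yaw estimated from the asymmetry of the nose-to-eye distances. This is a rough illustrative heuristic, not the patented estimator.

```python
import math

def estimate_yaw(left_eye_x, right_eye_x, nose_x):
    """Rough yaw in degrees from nose-to-eye distance asymmetry."""
    d_left = abs(nose_x - left_eye_x)     # nose-to-left-eye pixel distance
    d_right = abs(right_eye_x - nose_x)   # nose-to-right-eye pixel distance
    ratio = (d_left - d_right) / max(d_left + d_right, 1e-6)
    return math.degrees(math.asin(max(-1.0, min(1.0, ratio))))
```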
-
Publication number: 20200402254
Abstract: A system is provided for measuring the deformations of at least one element of at least one examined construction, with at least one marker (1) fixedly attached to an element of the examined construction, at least one sensor (2) configured and programmed to record data related to the position of the marker (1) in the form of digital data, and a processing unit (4) configured and programmed to process the data related to the position of the marker (1), connected communicatively to the sensor (2), preferably via a receiving unit (3), characterised in that the marker (1) comprises at least ten light-emitting characteristic points. A method is also provided for measuring the deformations of the examined construction implemented in such a system.
Type: Application
Filed: October 14, 2019
Publication date: December 24, 2020
Inventor: Monika Karolina Murawska
-
Publication number: 20200402255
Abstract: Medical imaging systems and methods for representing a 3D volume containing at least one foreign object introduced into a tissue. Imaging methods may include provision of a 3D volume containing voxels of at least one foreign object and voxels of tissue surrounding the at least one foreign object, identification of the voxels of the at least one foreign object by application of a processing rule, segmentation of the voxels of the at least one foreign object from the voxels of the tissue surrounding the at least one foreign object while maintaining the 3D volume, generation of a synthetic volume from a residual volume and the volume of the at least one foreign object, and representation of the synthetic volume on a display device using a windowing system.
Type: Application
Filed: May 5, 2020
Publication date: December 24, 2020
Inventors: Thomas König, Klaus Hörndler, Eva-Maria Ilg, Christof Fleischmann, Lars Hillebrand
-
Publication number: 20200402256
Abstract: The present disclosure relates to a control device and a control method, a program, and a mobile object that enable distinction among positions and accurate estimation of a self-position even in an environment where different positions include many similar feature amounts in surroundings and are thus likely to be erroneously detected as being the same position. In accordance with a place corresponding to the self-position, an image feature amount is extracted from an image of surroundings to which a mask has been added on the basis of a place-related non-feature portion representing an area, in the image of the surroundings, that is not useful for identifying the self-position, and the image feature amount and positional information regarding the self-position are associated with each other and registered in a position/image feature amount database (DB).
Type: Application
Filed: December 14, 2018
Publication date: December 24, 2020
Inventors: DAI KOBAYASHI, RYO WATANABE
-
Publication number: 20200402257
Abstract: The present invention relates to a device for calculating a vehicle trailer pose using a camera, the device comprising: a camera which is arranged offset from a tow bar position of the vehicle and which is configured to capture an image of the trailer; a memory which is configured to provide data: i) at least one intrinsic parameter of the camera; ii) at least one extrinsic parameter of the camera; iii) at least one predefined tow bar position; iv) a first image taken from the camera showing the vehicle trailer at a first pose, wherein the camera is configured to capture a second image showing the trailer at a pose to be determined; and a processor which is configured to provide image analysis and determine at least one feature correspondence between the first image and the second image and calculate a change in the trailer pose between a first pose of the first image and a second pose of the second image based on the determined correspondence.
Type: Application
Filed: September 8, 2020
Publication date: December 24, 2020
Applicant: CONTINENTAL AUTOMOTIVE GMBH
Inventor: Yonggang Jin
-
Publication number: 20200402258
Abstract: A seed camera disposed at a first location is manually calibrated. A second camera, disposed at a second location, detects a physical marker based on predefined characteristics of the physical marker. The physical marker is located within an overlapping field of view between the seed camera and the second camera. The second camera is calibrated based on a combination of the physical location of the physical marker, the first location of the seed camera, the second location of the second camera, a first image of the physical marker generated with the seed camera, and a second image of the physical marker generated with the second camera.
Type: Application
Filed: June 21, 2019
Publication date: December 24, 2020
Inventor: Chandan Gope
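The chaining idea can be sketched with OpenCV: each camera solves the marker's pose from its own image, and the second camera's world pose follows by composing transforms through the marker's known physical location. Marker corner coordinates, detected image points, and intrinsics are assumed given; this is a generic sketch, not the patented calibration procedure.

```python
import cv2
import numpy as np

def camera_pose_from_marker(marker_corners_3d, image_points, K, dist=None):
    """Solve the marker -> camera transform from one image of the marker.

    marker_corners_3d: (N, 3) float32 corner coordinates in the marker frame.
    image_points: (N, 2) float32 detected corner pixels in this camera.
    """
    ok, rvec, tvec = cv2.solvePnP(marker_corners_3d, image_points, K, dist)
    assert ok, "PnP failed"
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T   # maps marker coordinates into this camera's frame

# With T_m2c2 = camera_pose_from_marker(...) for the second camera and the
# marker's known world pose T_m2w (marker -> world), the world -> camera-2
# transform is T_m2c2 @ np.linalg.inv(T_m2w).
```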
-
Publication number: 20200402259
Abstract: An image calibrating method is applied to a first monitoring image and a second monitoring image partly overlapped with each other. The image calibrating method includes detecting a plurality of first marking points and second marking points about a target object on the first monitoring image and the second monitoring image, computing a first trace and a second trace formed by the first marking points and the second marking points, setting a plurality of first estimating points and second estimating points on stretching sections on the first trace and the second trace, respectively, within the second monitoring image and the first monitoring image, and utilizing the first marking points and the second estimating points and/or the first estimating points and the second marking points to compute a shift between the first monitoring image and the second monitoring image.
Type: Application
Filed: November 1, 2019
Publication date: December 24, 2020
Inventor: Cheng-Chieh Liu
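The closing shift computation reduces, in the simplest reading, to averaging the offsets between paired points (marking points in one image against estimating points in the other); a tiny sketch under that assumption:

```python
import numpy as np

def compute_shift(points_a, points_b):
    """points_a, points_b: (N, 2) paired coordinates; returns the mean (dy, dx)."""
    return np.mean(np.asarray(points_b, float) - np.asarray(points_a, float), axis=0)
```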
-
Publication number: 20200402260
Abstract: An apparatus that calibrates a parametric mapping that maps between object points and image points. The apparatus captures an image of a calibration pattern including features defining object points. The apparatus determines, from the image, measured image points that correspond to the object points. The apparatus determines, from the mapping, putative image points that correspond to the object points. The apparatus minimizes a cumulative cost function dependent upon differences between the measured image points and putative image points to determine parameters of the parametric mapping. The mapping uses a parametric function to specify points where light rays travelling from object points to image points cross the optical axis.
Type: Application
Filed: March 14, 2018
Publication date: December 24, 2020
Inventors: Martin SCHRADER, Radu Ciprian BILCU, Adrian BURIAN
-
Publication number: 20200402261
Abstract: Techniques for compressing level of detail (LOD) data involve generating a codec that can perform progressive refinement on a single rate decoded LOD. Nevertheless, by generating a small amount of extra information in a single rate decoded LOD, a progressive refiner can use the information provided in the single rate decoded LOD to refine the LOD. For example, in some implementations, the extra information is a corner of a face of a mesh; the progressive decoder may then begin traversal of the mesh from that corner for refinement. It is noted that the single rate decoded LODs are able to be refined by the same refinement information as the progressively decoded LODs.
Type: Application
Filed: June 24, 2019
Publication date: December 24, 2020
Inventor: Michael Hemmer
-
Publication number: 20200402262
Abstract: A device and method image cylindrical fluid conduits, such as pipes, wellbores and tubulars, with ultrasound transducers, then compress that data for storage or visualization. The compressed images may be stored on the tool and/or transmitted over telemetry, enabling the device to inspect and record long pipes or wells in high resolution on a single trip. This allows the ultrasound imaging tool to record much longer wells in higher resolution than would otherwise be possible. An outward-facing radial array of ultrasound transducers captures cross-sectional slices of the conduit to create frames from scan lines. The frames are compressed by applying a demodulation process and spatial conversion process to the scan lines. Video compression is applied to the demodulated, spatially converted ultrasound images to return compressed images.
Type: Application
Filed: June 24, 2020
Publication date: December 24, 2020
Applicant: DarkVision Technologies Inc.
Inventor: Steven Wrinch
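The demodulation step can be sketched with standard tools: envelope detection of the raw RF scan lines via the analytic signal, followed by log compression. Scan conversion and the video codec stage are omitted; this is a generic illustration, not DarkVision's pipeline.

```python
import numpy as np
from scipy.signal import hilbert

def demodulate_frame(rf_frame):
    """rf_frame: (n_scanlines, n_samples) raw ultrasound RF data."""
    envelope = np.abs(hilbert(rf_frame, axis=1))   # per-line envelope detection
    envelope /= envelope.max() + 1e-12             # normalize before log scaling
    return 20 * np.log10(envelope + 1e-6)          # dB-scale image for compression
```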
-
Publication number: 20200402263
Abstract: An example image device includes a compressor to compress image data from a row-and-column format into non-overlapping tiles including blocks of pixels, a processor to write the blocks of pixels one tile at a time in a column-wise manner across an image strip to create image data, and an on-chip memory to store the image data.
Type: Application
Filed: March 19, 2018
Publication date: December 24, 2020
Inventors: Bradley R Larson, John Harris, Eugene A Roylance, Mary T Prenn, Paul N Ballard, Trace A Griffiths
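The rearrangement from row-and-column order into non-overlapping tiles written column-wise across a strip can be sketched in a few lines; the compression itself is omitted.

```python
import numpy as np

def to_tiles(image, tile=16):
    """Split an image into fixed-size blocks, emitted column-wise."""
    h, w = image.shape[:2]
    assert h % tile == 0 and w % tile == 0, "pad image to a tile multiple"
    tiles = []
    for x in range(0, w, tile):     # column-wise across the strip
        for y in range(0, h, tile):
            tiles.append(image[y:y+tile, x:x+tile].copy())
    return tiles                    # blocks ready for per-tile coding
```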
-
Publication number: 20200402264
Abstract: According to one implementation, a system for validating media content includes a computing platform having a hardware processor and a system memory storing a media content validation software code. The hardware processor is configured to execute the media content validation software code to search the media content for a geometrically encoded metadata structure. When the geometrically encoded metadata structure is detected, the hardware processor is further configured to execute the media content validation software code to identify an original three-dimensional (3D) geometry of the detected geometrically encoded metadata structure, to extract metadata from the detected geometrically encoded metadata structure, decode the metadata extracted from the detected geometrically encoded metadata structure based on the identified original 3D geometry, and obtain a validation status of the media content based on the decoded metadata.
Type: Application
Filed: June 21, 2019
Publication date: December 24, 2020
Inventors: Steven M. Chapman, Todd P. Swanson, Mehul Patel, Joseph Popp, Ty Popko
-
Publication number: 20200402265
Abstract: Disclosed herein is an image processing apparatus including a hand recognizing section configured to recognize a state of a hand of a user, an item image superimposing section configured to superimpose images of items as selection targets attached to the fingers of the hand that are assigned to the items on either an image of the hand being displayed or an image representing the hand being displayed, and a selecting operation detecting section configured to detect that one of the items is selected on the basis of a hand motion performed on the images of the items before performing processing corresponding to the detected selection.
Type: Application
Filed: June 12, 2020
Publication date: December 24, 2020
Applicant: Sony Interactive Entertainment Inc.
Inventor: Masashi NAKATA
-
Publication number: 20200402266
Abstract: An information processing system includes a terminal device and an information processing device. The terminal device captures an image of at least a portion of a printed board and transmits the captured image of the at least a portion of the printed board to the information processing device. Based on the captured image and design information items about a plurality of elements included in the printed board, the information processing device extracts design information items about one or more elements constituting the at least a portion of the printed board, and generates an image in which images based on the design information items about the one or more elements are superimposed on the captured image. The information processing device transmits the generated image to the terminal device. The terminal device displays the generated image, received from the information processing device, on a display of the terminal device.
Type: Application
Filed: March 14, 2018
Publication date: December 24, 2020
Applicant: Mitsubishi Electric Corporation
Inventor: Kimihiko KAWAMOTO
-
Publication number: 20200402267
Abstract: A driving support device includes processing circuitry to judge a target object that is a real object existing in the vicinity of the vehicle and should be paid attention to by the driver; to generate a visual attraction stimulation image that appears to move from a position farther than the target object towards a position where the target object exists; to cause a display device to display the visual attraction stimulation image; and to receive body information from a body information detector acquiring the body information on the driver, to calculate a value indicating body reaction to visual stimulation sensed by the driver, and to correct a display parameter that determines display condition of the visual attraction stimulation image based on the value indicating the body reaction so as to change a degree of the visual stimulation given to the driver by the visual attraction stimulation image.
Type: Application
Filed: September 4, 2020
Publication date: December 24, 2020
Applicant: Mitsubishi Electric Corporation
Inventor: Jumpei HATO
-
Publication number: 20200402268
Abstract: A driving support device for supporting driving performed by a driver of a vehicle, includes processing circuitry to judge a target object that is a real object existing in a vicinity of the vehicle and should be paid attention to by the driver, based on vicinity information acquired by a vicinity detector that captures an image of or detects a real object existing in the vicinity of the vehicle; to generate a visual attraction stimulation image that appears to move from a position farther than the target object towards a position where the target object exists; and to cause a display device that displays an image in superimposition on the real object to display the visual attraction stimulation image.
Type: Application
Filed: September 4, 2020
Publication date: December 24, 2020
Applicant: Mitsubishi Electric Corporation
Inventor: Jumpei HATO
-
Publication number: 20200402269
Abstract: A camouflage pattern is provided that appears to have infinite focus and depth of field even at 100 percent size for the elements in the camouflage pattern. Generally, three-dimensional (3D) models of elements to be used in the camouflage pattern are captured or generated. The models are then arranged in a scene with a background (e.g., an infinite background) via 3D graphics editing programs such as is used to render computer generated graphics in video games and movies. A two-dimensional (2D) capture of the scene thus shows all visible surfaces of the elements in the scene in focus at all depths of field. The elements may or may not be shaded by one another from the perspective of the image capture location in the 3D environment.
Type: Application
Filed: June 24, 2019
Publication date: December 24, 2020
Inventor: J. Patrick Epling
-
Publication number: 20200402270
Abstract: Systems, apparatuses and methods may provide for technology that determines a stencil value and uses the stencil value to control, via a stencil buffer, a coarse pixel size of a graphics pipeline. Additionally, the stencil value may include a first range of bits defining a first dimension of the coarse pixel size and a second range of bits defining a second dimension of the coarse pixel size. In one example, the coarse pixel size is controlled for a plurality of pixels on a per pixel basis.
Type: Application
Filed: July 2, 2020
Publication date: December 24, 2020
Inventors: Karthik Vaidyanathan, Prasoonkumar Surti, Hugues Labbe, Atsuo Kuwahara, Sameer KP, Jonathan Kennedy, Murali Ramadoss, Michael Apodaca, Abhishek Venkatesh
-
Publication number: 20200402271
Abstract: A computer implemented method for determining a two dimensional DRR referred to as dynamic DRR based on a 4D-CT, the 4D-CT describing a sequence of three dimensional medical computer tomographic images of an anatomical body part of a patient, the images being referred to as sequence CTs, the 4D-CT representing the anatomical body part at different points in time, the anatomical body part comprising at least one primary anatomical element and secondary anatomical elements, the computer implemented method comprising the following steps: acquiring the 4D-CT; acquiring a planning CT, the planning CT being a three dimensional image used for planning of a treatment of the patient, the planning CT being acquired based on at least one of the sequence CTs or independently from the 4D-CT; acquiring a three dimensional image, referred to as undynamic CT, from the 4D-CT, the undynamic CT comprising at least one first image element representing the at least one primary anatomical element and second image elements represen
Type: Application
Filed: September 2, 2020
Publication date: December 24, 2020
Inventors: Kajetan BERLINGER, Birte DOMNIK, Elisa Garcia CORSICO, Pascal BERTRAM
-
Publication number: 20200402272
Abstract: The present invention relates to a method and system for automatically setting a scan range. The method comprises: receiving an RGB image and a depth image of an object positioned on a scan table, respectively, by an RGB image prediction model and a depth image prediction model; generating an RGB prediction result based on the RGB image and a depth prediction result based on the depth image with respect to predetermined key points of the object, respectively, by the RGB image prediction model and the depth image prediction model; selecting a prediction result for setting the scan range from the RGB prediction result and the depth prediction result; and automatically setting the scan range based on the selected prediction result.
Type: Application
Filed: June 2, 2020
Publication date: December 24, 2020
Inventors: Yanran XU, Fanbo MENG, Yu HUANG
-
Publication number: 20200402273
Abstract: A method for observing a sample, the sample lying in a sample plane defining radial positions, parameters of the sample being defined at each radial position, the method comprising: a) illuminating the sample using a light source, emitting an incident light wave that propagates toward the sample; b) acquiring, using an image sensor, an image of the sample, said image being formed in a detection plane, the sample being placed between the light source and the image sensor; c) processing the image acquired by the image sensor, so as to obtain an image of the sample, the image of the sample corresponding to a distribution of at least one parameter of the sample describing the sample in the sample plane; wherein the processing of the acquired image comprises implementing an iterative method, followed by applying a supervised machine learning algorithm, so as to obtain an initialization image intended to initialize the iterative method.
Type: Application
Filed: June 22, 2020
Publication date: December 24, 2020
Applicant: Commissariat a l'energie atomique et aux energies alternatives
Inventors: Lionel Herve, Cedric Allier
-
Publication number: 20200402274
Abstract: The invention discloses a limited-angle CT reconstruction method based on Anisotropic Total Variation. In this method, an image reconstruction model for low-dose, sparse-view-angle CT images combines a fast iterative reconstruction algorithm with an Anisotropic Total Variation method, effectively addressing problems of existing limited-angle CT reconstruction methods such as partial boundary ambiguity, slow convergence, and inaccurate solutions. In solving the model, a slope filter is introduced in the Filtered Back-Projection to precondition the iterative equation, the Alternating Projection Proximal method is used to solve the iterative equation, and the iteration is repeated until the termination condition is satisfied. Experimental comparison with existing reconstruction methods shows that the invention achieves a better reconstruction effect.
Type: Application
Filed: November 20, 2019
Publication date: December 24, 2020
Inventors: HUAFENG LIU, TING WANG
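As a point of reference, anisotropic-TV reconstruction models of this family typically minimize a data-fidelity term plus directionally weighted gradient penalties; a generic sketch follows, where A is the projection operator, b the measured projections, ∇_h and ∇_v finite differences, and w_h, w_v the anisotropy weights (the patent's exact weights and solver differ):

```latex
% Generic anisotropic-TV objective for limited-angle CT (sketch only):
\min_{x \ge 0} \; \tfrac{1}{2}\,\lVert A x - b \rVert_2^2
  + \lambda \sum_{i} \bigl( w_h \,\lvert (\nabla_h x)_i \rvert
                          + w_v \,\lvert (\nabla_v x)_i \rvert \bigr)
```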
-
Publication number: 20200402275
Abstract: A method for artifact correction in computed tomography, the method comprising: (1) acquiring a plurality of data sets associated with different X-ray energies (i.e., D1, D2, D3 . . . Dn); (2) generating a plurality of preliminary images from the different energy data sets acquired in Step (1) (i.e., I1, I2, I3 . . . In); (3) using a mathematical function to operate on the preliminary images generated in Step (2) to identify the sources of the image artifact (i.e., the artifact source image, or ASI, where ASI=f(I1, I2, I3 . . . In)); (4) forward projecting the ASI to produce ASD=fp(ASI); (5) selecting and combining the original data sets D1, D2, D3 . . . Dn in order to produce a new subset of the data associated with the artifact, whereby to produce the artifact reduced data, or ARD, where ARD=f(ASD, D1, D2, D3 . . . Dn); (6) generating a repaired data set (RpD) to keep low-energy data in artifact-free data and introduce high-energy data in regions impacted by the artifact, where RpD=f(ARD, D1, D2, D3 . . .
Type: Application
Filed: February 25, 2020
Publication date: December 24, 2020
Inventor: Matthew Len Keeler
-
Publication number: 20200402276
Abstract: An event process data integration and analysis apparatus of the present disclosure includes an integrated display output interface. The integrated display output interface generates, based on unit operation data, a unit operation band for each device among a plurality of devices, the unit operation band representing an operation intention of an operator and being arranged in a time series. The integrated display output interface generates, based on process data, a process trend chart for each device, the process trend chart representing a change over time in a process value. The integrated display output interface generates an integrated display that displays the unit operation band and the process trend chart associated by time.
Type: Application
Filed: June 17, 2020
Publication date: December 24, 2020
Applicant: YOKOGAWA ELECTRIC CORPORATION
Inventors: Ayako Akimoto, Yuichi Sakuraba
-
Publication number: 20200402277
Abstract: A time series data display device includes a display unit that outputs display data regarding the time series data. The display unit includes a general display generation unit that generates general display data for general display of a general tendency of the time series data and a detailed display generation unit that generates detailed display data for detailed display of detailed individual values of the time series data. The general display displays frequency or density of individual data configuring the time series data at each position of the time series data on a rendering plane, with a visual effect in accordance with the frequency or the density.
Type: Application
Filed: June 17, 2020
Publication date: December 24, 2020
Applicant: Fanuc Corporation
Inventor: Yasuhiro Shibasaki
-
Publication number: 20200402278
Abstract: The purpose of the present invention is to collect more accurate marketing information. A flow line display system of the present invention includes an image-capturing unit, an information operation device, and a display unit. The image-capturing unit captures an image. The information operation device detects an object from the image and identifies a flow line of the object, an orientation of the object, and a time related to the orientation. The display unit displays the orientation of the object and the time related to the orientation together with the flow line of the object.
Type: Application
Filed: July 1, 2020
Publication date: December 24, 2020
Applicant: Nec Corporation
Inventors: Shigetsu Saito, Jun Kobayashi
-
Publication number: 20200402279
Abstract: Embodiments may be used to evaluate completed inspection jobs using updated pipe segment data obtained by inspecting a rehabilitated pipe after completion of a project. One embodiment provides a method of generating an infrastructure project summary, including: collecting, using one or more sensors of an inspection robot, pipe segment data relating to the one or more pipe segments, the pipe segment data comprising one or more of laser condition assessment data and sonar condition assessment data; generating infrastructure summary data for at least a part of the network using the pipe segment data; comparing, using a processor, first and second infrastructure summary data; generating, using the processor, a parameter of the infrastructure project summary based on the comparing; and including the parameter of the infrastructure project summary in a project summary report. Other embodiments are disclosed and claimed.
Type: Application
Filed: August 21, 2020
Publication date: December 24, 2020
Applicant: RedZone Robotics, Inc.
Inventors: Subramanian Vallapuzha, Eric C. Close
-
Publication number: 20200402280
Abstract: Methods, systems, and media are provided for redacting images using augmented reality. An image, such as a photograph or video, may be captured by a camera. A redaction marker, such as a QR code, may be included in the field of view of the camera. Redaction instructions may be interpreted from the redaction marker. The redaction instructions indicate a portion of a real-world environment that is to be redacted, such as an area that is not allowed to be imaged. Based on the redaction instructions, image data corresponding to the image may be redacted by deleting or encrypting a portion of the data associated with the portion of the real-world environment to be redacted. An image may be rendered using the redacted image data. The redacted image may be displayed or stored.
Type: Application
Filed: June 24, 2019
Publication date: December 24, 2020
Inventor: Andrew Michael Lowery
-
Publication number: 20200402281
Abstract: An image processing apparatus includes: a first identifying unit configured to identify image-capturing conditions concerning a position and an orientation of an image-capturing apparatus which obtains a captured image of an image-capturing target region; a second identifying unit configured to identify viewpoint conditions concerning a position and an orientation of a virtual viewpoint for a virtual viewpoint image generated based on a plurality of images of the image-capturing target region obtained by a plurality of the image-capturing apparatuses at different positions; and a display control unit configured to allow a display apparatus to display information indicating a degree of match between the identified image-capturing conditions and the identified viewpoint conditions before an image presented to a viewer is switched between the captured image and the virtual viewpoint image.
Type: Application
Filed: June 15, 2020
Publication date: December 24, 2020
Inventor: Kazuna Maruyama
-
Publication number: 20200402282
Abstract: A technique for combining first and second images respectively depicting first and second subject matter to facilitate virtual presentation. The first image is processed to identify portions or regions of the first subject matter and determine an estimated depth location of each portion or region. A composite image is generated that depicts the second subject matter overlayed, inserted or otherwise combined with the first subject matter. One or more of the portions or regions of the first subject matter are added, removed, enhanced or modified in the composite image in order to generate a realistic appearance of the first subject matter combined with the second subject matter. The composite image is caused to be displayed as a virtual presentation.
Type: Application
Filed: June 29, 2020
Publication date: December 24, 2020
Inventors: Alon Kristal, Nir Appleboim, Yael Wiesel, Israel Harry Zimmerman
-
Publication number: 20200402283
Abstract: In some embodiments, a method receives a plurality of swatch configurations that each define combinations for lightness values, saturation values, and hue values and receives information associated with a characteristic of an image. A swatch configuration is selected based on the information where the swatch configuration defines a plurality of combinations for lightness values, saturation values, and hue values. The method generates a plurality of colors using the plurality of combinations for lightness values, saturation values, and hue values by varying at least one of the saturation value, the lightness value, and the hue value for the plurality of colors. The plurality of colors are applied to an interface that is displaying the image.
Type: Application
Filed: September 8, 2020
Publication date: December 24, 2020
Inventors: Zachary Cava, Hansen Smith
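A minimal sketch of swatch generation with the Python standard library, assuming a base hue extracted from the image and a configuration given as (lightness, saturation) pairs; note that colorsys uses HLS ordering (hue, lightness, saturation).

```python
import colorsys

def generate_swatch(base_hue, config):
    """config: iterable of (lightness, saturation) pairs, all values in [0, 1]."""
    colors = []
    for lightness, saturation in config:
        r, g, b = colorsys.hls_to_rgb(base_hue, lightness, saturation)
        colors.append('#%02x%02x%02x' % (round(r*255), round(g*255), round(b*255)))
    return colors

# e.g. generate_swatch(0.58, [(0.3, 0.9), (0.5, 0.7), (0.7, 0.5)])
# varies lightness and saturation around one hue, per a swatch configuration.
```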
-
Publication number: 20200402284
Abstract: In one embodiment, a computing system may access a plurality of first captured images that are captured in a first spectral domain, generate, using a first machine-learning model, a plurality of first domain-transferred images based on the first captured images, wherein the first domain-transferred images are in a second spectral domain, render, based on a first avatar, a plurality of first rendered images comprising views of the first avatar, and update the first machine-learning model based on comparisons between the first domain-transferred images and the first rendered images, wherein the first machine-learning model is configured to translate images in the first spectral domain to the second spectral domain. The system may also generate, using a second machine-learning model, the first avatar based on the first captured images. The first avatar may be rendered using a parametric face model based on a plurality of avatar parameters.
Type: Application
Filed: June 21, 2019
Publication date: December 24, 2020
Inventors: Jason Saragih, Shih-En Wei
-
Publication number: 20200402285
Abstract: Implementations are directed to providing an edit profile including one or more suggested edits to a digital video, actions including receiving metadata associated with the digital video, the metadata including data representative of one or more of movement and an environment associated with recording of the digital video, processing the metadata to provide a suggested edit profile including at least one set of effects, the at least one set of effects including one or more effects configured to be applied to at least a portion of the digital video, providing a respective graphical representation of individual effect of the one or more effects within an effect interface, and receiving, through the effect interface, a user selection of a set of effects of the suggested edit profile, and in response, storing, in computer-readable memory, an edit profile comprising the set of effects for application to the digital video.
Type: Application
Filed: September 1, 2020
Publication date: December 24, 2020
Inventors: Devin McKaskle, Stephen Trey Moore, Ross Chinni
-
Publication number: 20200402286
Abstract: An OSS animated display system for an interventional device (40) including an integration of one or more optical shape sensors and one or more interventional tools. The OSS animated display system employs a monitor (121) and a display controller (110) for controlling a real-time display on the monitor (121) of an animation of a spatial positional relationship between the OSS interventional device (40) and an object (50). The display controller (110) derives the animation of the spatial positional relationship between the OSS interventional device (40) and the object (50) from a shape of the optical shape sensor(s).
Type: Application
Filed: December 29, 2018
Publication date: December 24, 2020
Inventors: Paul THIENPHRAPA, Neriman Nicoletta KAHYA, Olivier Pierre NEMPONT, Pascal Yves François CATHIER, Molly Lara FLEXMAN, Torre Michelle BYDLON, Raoul FLORENT