Patent Applications Published on June 18, 2020
-
Publication number: 20200193556
Abstract: Graphics layer processing in a multiple operating systems framework is disclosed, including: presenting, at a display, a first composition including a sub-graphics layer object associated with a graphical interface corresponding to an application, wherein the application is executed in a guest subsystem of a system; receiving a content-related compositing request corresponding to a guest server graphics layer object in the guest subsystem; using the guest server graphics layer object to obtain a host server graphics layer object that corresponds to the guest server graphics layer object, wherein the host server graphics layer object is in a host subsystem of the system; obtaining a buffer corresponding to the guest server graphics layer object; and generating a second composition including the sub-graphics layer object, wherein the second composition is to be presented at the display.
Type: Application
Filed: December 13, 2019
Publication date: June 18, 2020
Inventors: Decai Jin, Zhuojun Jin
-
Publication number: 20200193557
Abstract: In an image provision apparatus (100), a decomposition unit (111) decomposes image data into pieces of unit image data (70). A storing unit (112) stores, in a memory unit (130), image management information (131) including each piece of unit image data (70) of the pieces of unit image data and position information. An acquisition unit (121) accepts a provision request (52). The provision request (52) includes range information representing a range of a partial image in the image. The acquisition unit (121) acquires a unit image data set (711) representing unit images each including at least part of the partial image from the image management information (131), based on the range information and the position information. A generation unit (122) generates the partial image (63) based on the unit image data set (711).
Type: Application
Filed: October 5, 2017
Publication date: June 18, 2020
Applicant: MITSUBISHI ELECTRIC CORPORATION
Inventors: Satoru TANAKA, Mitsunori KORI
-
Publication number: 20200193558
Abstract: An image processing system and an image processing method thereof are provided. Compare the preset writing rate for the first processor to write image data into the memory with a reading rate for the second processor to read the image data written by the first processor from the memory. The image data includes a plurality of image frames. Determine a position for the second processor to perform a next reading operation of the image data from the memory according to a comparison result between the preset writing rate and the reading rate to perform a reading of the image data and to generate the image data of a next image frame when the second processor completes an outputting operation of one image frame and begins to perform a reading of a next image frame.
Type: Application
Filed: December 11, 2019
Publication date: June 18, 2020
Applicant: Coretronic Corporation
Inventor: Pei-Ming Shan
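A minimal sketch of the kind of read-position decision the abstract describes, assuming a simple frame-indexed buffer and that the writer's preset rate and the reader's rate are known; the function name and the skip policy are illustrative, not taken from the application.

```python
def next_read_frame(last_read_frame: int, newest_written_frame: int,
                    write_rate_fps: float, read_rate_fps: float) -> int:
    """Pick the frame index for the reader's next read.

    Illustrative policy: if the writer outpaces the reader, jump ahead to the
    most recently completed frame so the output stays current; otherwise read
    the frame immediately following the one just output.
    """
    if write_rate_fps > read_rate_fps:
        return newest_written_frame          # skip stale frames
    return min(last_read_frame + 1, newest_written_frame)

# Example: reader just output frame 10, writer is already at frame 14 and is faster.
print(next_read_frame(10, 14, write_rate_fps=60.0, read_rate_fps=30.0))  # -> 14
```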
-
Publication number: 20200193559
Abstract: An electronic device according to the present invention includes at least one memory and at least one processor which function as: a reading unit configured to read a candidate image to be posted; and a display controlling unit configured to perform control to display a post creation screen including the candidate image such that in a case where the read candidate image is not an image for VR display but is an image for normal display, a specific display item for performing a hiding process of hiding a part of an image is not displayed on the post creation screen, and in a case where the candidate image is an image for VR display, the specific display item is displayed on the post creation screen.
Type: Application
Filed: December 10, 2019
Publication date: June 18, 2020
Inventors: Genjiro Sano, Shin Murakami
-
Publication number: 20200193560
Abstract: A system and methods for attaining optimal precision digital image stereoscopic direction and ranging through air and across a refractive boundary separating air from a liquid or plasma using stereo-cameras, and employing a minimum variance sub-pixel registration method for determining precise estimates of the parallax angle between left and right stereo images. The system and methods can also track measurement and estimation variances as they propagate through the system in order to provide a comprehensive precision analysis of all estimated quantities.
Type: Application
Filed: December 16, 2018
Publication date: June 18, 2020
Inventor: Sadiki Pili Fleming-Mwanyoha
-
Publication number: 20200193561
Abstract: An image processing system receives an image depicting a bundle of boards. The bundle of boards has a front face that is perpendicular to a long axis of boards and the image is captured at an angle relative to the long axis. The image processing system applies a homographic transformation to estimate a frontal view of the front face and identifies a plurality of divisions between rows in the estimate. For each adjacent pair of the plurality of divisions between rows, a plurality of vertical divisions is identified. The image processing system identifies a set of bounding boxes defined by pairs of adjacent divisions between rows and pairs of adjacent vertical divisions. The image processing system may filter and/or merge some bounding boxes to better match the bounding boxes to individual boards. Based on the bounding boxes, the image processing system determines the number of boards in the bundle.
Type: Application
Filed: February 25, 2020
Publication date: June 18, 2020
Inventors: Marius Leordeanu, Alina Elena Marcu, Iulia-Adriana Muntianu, Catalin Mutu
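As an illustration of the homographic "frontal view" step, here is a generic OpenCV sketch that warps four corner points of the bundle's front face to an axis-aligned rectangle; the corner coordinates, file names, and output size are placeholders, and the board-detection steps themselves are not shown.

```python
import cv2
import numpy as np

# Four image-space corners of the bundle's front face (placeholder values),
# ordered top-left, top-right, bottom-right, bottom-left.
src = np.float32([[412, 180], [1490, 235], [1470, 860], [395, 820]])

# Target frontal rectangle (width x height in pixels, chosen arbitrarily).
w, h = 1200, 700
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

image = cv2.imread("bundle.jpg")                  # angled photograph of the bundle
H = cv2.getPerspectiveTransform(src, dst)         # 3x3 homography from the point pairs
frontal = cv2.warpPerspective(image, H, (w, h))   # estimated frontal view of the face
cv2.imwrite("bundle_frontal.jpg", frontal)
```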
-
Publication number: 20200193562
Abstract: An optical apparatus captures images of a wide-angle scene with a single camera having a continuous panomorph zoom distortion profile. When combined with a processing unit, the hybrid zoom system creates an output image with constant resolution while allowing continuous adjustment in the magnification and field of view of the image without interpolation like a digital zoom system or without any moving parts like an optical zoom system.
Type: Application
Filed: February 20, 2020
Publication date: June 18, 2020
Inventors: Patrice ROULET, Jocelyn PARENT, Xavier DALLAIRE, Pierre KONEN, Pascale NINI
-
Publication number: 20200193563
Abstract: An image processing apparatus includes a first control unit, configured to determine first control information and a first interpolation coefficient for a to-be-generated target image to make a correspondence to a source image. The first control information represents data in the source image that are used to generate the target image. The image processing apparatus further includes a first pre-selection unit, configured to select a first input data corresponding to the first control information from the source image; a plurality of buffers, configured to cache the first input data; and a first filter, configured to perform interpolation calculation based on the first interpolation coefficient and the first input data stored in the plurality of buffers to generate the target image. The quantity of the plurality of buffers is greater than or equal to the quantity of taps of the first filter.
Type: Application
Filed: February 27, 2020
Publication date: June 18, 2020
Inventors: Yao ZHAO, Kang YANG, Lin CHEN
-
Publication number: 20200193564
Abstract: Dynamic image content is generated based on various combinations of image elements associated with an input image unit. In this regard, an input image unit is selected and input into a dynamic content generation engine. The input image unit includes a number of image elements. Different combinations of image elements in the input image are added and/or removed to generate candidate image units. Different colors may also be assigned to image elements based on a color palette. In this way, permutatively different candidate image units are automatically generated with different combinations of elements from the input image unit and possibly different colors. Generation of candidate image units can be based on the application of a combination formula onto the image elements associated with the input image unit. The candidate image units are then displayed for selection and further modification.
Type: Application
Filed: December 14, 2018
Publication date: June 18, 2020
Inventors: Fabin Rasheed, Sreedhar Rangathan
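A small sketch of the combinatorial part, assuming the input image unit is simply a list of named elements and a color palette is given; the subset enumeration and the round-robin color assignment stand in for whatever combination formula the application actually uses.

```python
from itertools import combinations

elements = ["circle", "wave", "leaf", "dot"]    # image elements of the input image unit (assumed)
palette = ["#1f77b4", "#ff7f0e", "#2ca02c"]     # example color palette (assumed)

candidates = []
for r in range(1, len(elements) + 1):
    for combo in combinations(elements, r):
        # Assign palette colors round-robin to the chosen elements.
        colored = {el: palette[i % len(palette)] for i, el in enumerate(combo)}
        candidates.append(colored)

print(len(candidates))     # 2**4 - 1 = 15 candidate image units
print(candidates[:3])      # first few candidates for display/selection
```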
-
Publication number: 20200193565
Abstract: A method of increasing temporal resolution: a) provides an original video having a given spatial resolution; b) compresses a first frame of said original video using any image compression method; and c) repeatedly compresses a next frame of said original video using the steps of: i. providing a current video comprising the already compressed video frames, said current video having an initial spatial resolution; ii. repeatedly reducing the spatial resolution of said current video and the spatial resolution of said next frame of the original video, to produce a lowest level spatial resolution current video and a lowest level spatial resolution next frame of the original video; and iii. compressing said lowest level spatial resolution next frame of the original video to produce a lowest level compressed next frame.
Type: Application
Filed: February 24, 2020
Publication date: June 18, 2020
Inventors: Ilan Bar-On, Oleg Kostenko
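The repeated spatial-resolution reduction in step c)ii. can be pictured as building an image pyramid; the sketch below, using OpenCV's pyrDown, simply halves a frame until a chosen lowest level is reached. It illustrates only the downscaling, not the compression scheme itself, and the file name and level count are placeholders.

```python
import cv2

def resolution_pyramid(frame, levels: int):
    """Return [full-res, half-res, quarter-res, ...] copies of a video frame."""
    pyramid = [frame]
    for _ in range(levels):
        frame = cv2.pyrDown(frame)   # blur + downsample by 2 in each dimension
        pyramid.append(frame)
    return pyramid

frame = cv2.imread("next_frame.png")   # placeholder path to a frame of the original video
for level, img in enumerate(resolution_pyramid(frame, levels=3)):
    print(level, img.shape)            # lowest level is the last entry
```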
-
Publication number: 20200193566
Abstract: A method of super-resolution image processing. The method includes inputting first image data representative of a first version of at least part of an image with a first resolution to a machine learning system. The first image data includes pixel intensity data representative of an intensity value of at least one color channel of a pixel of the first version of the at least part of the image, and feature data representative of a value of at least one non-intensity feature associated with the pixel. The first image data is processed using the machine learning system to generate second image data representative of a second version of the at least part of the image with a second resolution greater than the first resolution.
Type: Application
Filed: December 12, 2018
Publication date: June 18, 2020
Inventor: Daren CROXFORD
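To make the two-part input concrete, here is a toy PyTorch sketch in which a luminance channel and one extra non-intensity feature channel (for example a depth or edge map) are stacked and fed to a small sub-pixel upscaling network; the layer sizes, the x2 scale factor, and the random inputs are arbitrary choices, not taken from the application.

```python
import torch
import torch.nn as nn

scale = 2  # upscaling factor (assumed)

model = nn.Sequential(
    nn.Conv2d(2, 64, kernel_size=5, padding=2),    # 2 input channels: intensity + feature
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 32, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(32, scale * scale, kernel_size=3, padding=1),
    nn.PixelShuffle(scale),                         # rearrange channels into a 2x larger image
)

intensity = torch.rand(1, 1, 64, 64)    # low-resolution luminance (pixel intensity data)
feature = torch.rand(1, 1, 64, 64)      # per-pixel non-intensity feature map
first_image_data = torch.cat([intensity, feature], dim=1)

second_image_data = model(first_image_data)
print(second_image_data.shape)          # torch.Size([1, 1, 128, 128])
```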
-
Publication number: 20200193567
Abstract: A collation device is configured to include a processor, and a storage unit that stores a blurring amount which is set in advance for collation in association with a registered image, in which the processor blurs a face image obtained by imaging an authenticated person with an imaging unit with a blurring amount for collation in association with a registered image corresponding to the face image, and uses the image blurred with the blurring amount for collation to perform face authentication. The blurring amount for collation is set such that a real image blurred by the blurring amount can be authenticated but a photographic image blurred by the blurring amount cannot be authenticated, the real image being a face image obtained by imaging a real face of an authenticated person and the photographic image being a face image obtained by imaging a face photograph of the authenticated person.
Type: Application
Filed: May 30, 2018
Publication date: June 18, 2020
Applicant: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Megumi YAMAOKA, Takayuki MATSUKAWA
-
Publication number: 20200193568
Abstract: A multi-projection system for projecting an image from a plurality of projectors onto a three-dimensional object, includes a master projector and a slave projector. The slave projector estimates an amount of blur in an image projected from the projector itself on a plurality of planes of the three-dimensional object, and provides the estimated amount of blur in the image to the master projector. The master projector estimates an amount of blur in an image projected from the projector itself on a plurality of planes of the three-dimensional object, and determines a region of the image projected from the plurality of projectors for each of the plurality of planes based on the amount of blur in the image estimated by the master projector and the amount of blur in the image estimated by the slave projector.
Type: Application
Filed: September 19, 2017
Publication date: June 18, 2020
Inventor: Hisakazu AOYANAGI
-
Publication number: 20200193569
Abstract: Methods and apparatus for image processing are provided. The method comprises receiving input of a visible-ray image and a far-infrared-ray image obtained by photographing a same subject, estimating a blur estimation result in the visible-ray image, wherein estimating a blur estimation result comprises calculating a correlation between the visible-ray image and each of a plurality of filter-applied far-infrared ray images in which a different filter is applied to the far-infrared-ray image and selecting the filter for which the calculated correlation is highest, and performing a correction process on the visible-ray image based, at least in part, on the blur estimation result to generate a corrected visible-ray image from which the blur is reduced, wherein generating the corrected visible-ray image comprises applying, to the visible ray image, an inverse filter having an inverse characteristic to a characteristic of the selected filter.
Type: Application
Filed: August 30, 2018
Publication date: June 18, 2020
Applicant: Sony Corporation
Inventors: Suguru Aoki, Ryuta Satoh, Atsushi Ito, Hideki Oyaizu, Takeshi Uemori
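A rough sketch of the filter-selection idea, assuming the filter bank is a set of Gaussian blurs of varying width and using plain Pearson correlation between each blurred far-infrared image and the visible image; the actual filters and the inverse (deblurring) filter applied afterwards are not reproduced here, and the test images are stand-ins.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def select_blur_filter(visible: np.ndarray, far_ir: np.ndarray,
                       sigmas=(0.5, 1.0, 2.0, 4.0)):
    """Return the Gaussian sigma whose blurred far-IR image best matches the visible image."""
    vis = visible.astype(np.float64).ravel()
    best_sigma, best_corr = None, -np.inf
    for sigma in sigmas:
        blurred = gaussian_filter(far_ir.astype(np.float64), sigma).ravel()
        corr = np.corrcoef(vis, blurred)[0, 1]   # correlation for this candidate filter
        if corr > best_corr:
            best_sigma, best_corr = sigma, corr
    return best_sigma, best_corr

visible = np.random.rand(120, 160)                    # stand-in visible-ray image
far_ir = gaussian_filter(np.random.rand(120, 160), 1.5)  # stand-in far-infrared image
print(select_blur_filter(visible, far_ir))
```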
-
Publication number: 20200193570
Abstract: Image processing methods and apparatus are described. The image processing method comprises receiving input of a visible-ray image and an infrared-ray image obtained by photographing a same subject, estimating, based on the visible-ray image, the infrared-ray image and motion information, a blur estimate associated with the visible-ray image, and generating, based on the estimated blur estimate, a corrected visible-ray image.
Type: Application
Filed: August 30, 2018
Publication date: June 18, 2020
Applicant: Sony Corporation
Inventors: Suguru Aoki, Ryuta Satoh, Atsushi Ito, Hideki Oyaizu, Takeshi Uemori
-
Publication number: 20200193571
Abstract: An image processing apparatus included in a vehicle comprises: a division section that is configured to divide, into a plurality of areas, a captured image sequentially captured by an imaging device that captures images around the vehicle; an importance set section that is configured to set an importance level for each of the areas; and a compression section that is configured to compress the captured image for each of the areas.
Type: Application
Filed: February 25, 2020
Publication date: June 18, 2020
Inventor: Haruhiko SOGABE
-
Publication number: 20200193572
Abstract: There are many instances where a standard dynamic range ("SDR") overlay is displayed over high dynamic range ("HDR") content on HDR displays. Because the overlay is SDR, the maximum brightness of the overlay is much lower than the maximum brightness of the HDR content, which can lead to the SDR elements being obscured if those elements have at least some transparency. The present disclosure provides techniques including modifying the luminance of either or both of the HDR and SDR content when an SDR layer with some transparency is displayed over HDR content. A variety of techniques are provided. In one example, a fixed adjustment is applied to pixels of one or both of the SDR layer and the HDR layer. The fixed adjustment comprises decreasing the luminance of the HDR layer and/or increasing the luminance of the SDR layer. In another example, a variable adjustment is applied.
Type: Application
Filed: December 13, 2018
Publication date: June 18, 2020
Applicant: ATI Technologies ULC
Inventors: Jie Zhou, David I. J. Glen
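A minimal sketch of the "fixed adjustment" variant, assuming both layers are already in a common linear-light representation with a per-pixel alpha on the SDR overlay; the two gain factors and the way the dimming is restricted to overlay pixels are made-up illustration choices, not the patented method.

```python
import numpy as np

def blend_sdr_over_hdr(hdr: np.ndarray, sdr: np.ndarray, alpha: np.ndarray,
                       hdr_gain: float = 0.5, sdr_gain: float = 2.0) -> np.ndarray:
    """Alpha-blend an SDR overlay onto HDR content after a fixed luminance adjustment.

    hdr, sdr: float arrays (H, W, 3) in linear light; alpha: (H, W, 1) in [0, 1].
    Where the overlay is present, the HDR background is dimmed and the SDR layer
    boosted so the semi-transparent overlay is not washed out by bright HDR pixels.
    """
    adjusted_hdr = np.where(alpha > 0, hdr * hdr_gain, hdr)   # dim HDR only under the overlay
    return alpha * (sdr * sdr_gain) + (1.0 - alpha) * adjusted_hdr

hdr = np.random.rand(4, 4, 3) * 10.0     # bright HDR content (stand-in)
sdr = np.random.rand(4, 4, 3)            # SDR overlay (stand-in)
alpha = np.full((4, 4, 1), 0.4)          # 40% opaque overlay
print(blend_sdr_over_hdr(hdr, sdr, alpha).shape)
```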
-
Publication number: 20200193573
Abstract: The present principles relate to a method and device for gamut mapping from a first color gamut towards a second color gamut. The method comprises, in a plane of constant hue, obtaining a target lightness for a color on the boundary of the first gamut with maximum chroma, called first cusp color; and lightness mapping of the color from the first color gamut towards the second color gamut wherein the lightness mapped color is calculated from a parabolic function applied to the color, the parabolic function mapping the first cusp color to a color having the target lightness. According to a particular characteristic, a preserved chroma is also obtained; and in case the chroma of the color is lower than or equal to the preserved chroma, the lightness mapped color is the color, and in case the chroma of the color is higher than the preserved chroma, the lightness mapped color is calculated from the parabolic function applied to the color.
Type: Application
Filed: April 17, 2018
Publication date: June 18, 2020
Inventors: Cedric THEBAULT, Marie-Jean COLAITIS, Angelo MAZZANTE
-
Publication number: 20200193574
Abstract: There is provided a method, a device (104), and a system (100) for enhancing changes in an image (103a) of an image sequence (103) captured by a thermal camera (102). An image (103a) which is part of the image sequence (103) is received (S02) and pixels (408) in the image that have changed in relation to another image (103b) in the sequence are identified (S04). Based on the intensity values of the identified pixels, a function (212, 212a, 212b, 212c, 212d, 212e) which is used to redistribute intensity values of changed as well as non-changed pixels in the image is determined (S06). The function has a maximum (601) for a first intensity value (602) in a range (514) of the intensity values of the identified pixels, and decays with increasing distance from the first intensity value.
Type: Application
Filed: November 19, 2019
Publication date: June 18, 2020
Applicant: Axis AB
Inventor: Thomas WINZELL
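One way to picture such a redistribution function is as a weighted transfer curve: the sketch below builds a Gaussian-shaped weight that peaks at a chosen intensity inside the changed-pixel range and decays away from it, then turns the cumulative weight into a lookup table applied to every pixel. The Gaussian shape, the 8-bit range, and the parameter values are assumptions for illustration only.

```python
import numpy as np

def redistribution_lut(peak_intensity: float, sigma: float = 20.0, levels: int = 256) -> np.ndarray:
    """Build a monotone intensity mapping that spends most of the output range
    around peak_intensity (e.g. the dominant intensity of the changed pixels)."""
    v = np.arange(levels, dtype=np.float64)
    weight = np.exp(-0.5 * ((v - peak_intensity) / sigma) ** 2)   # maximum at the peak, decays with distance
    cdf = np.cumsum(weight)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])                     # normalize to [0, 1]
    return np.round(cdf * (levels - 1)).astype(np.uint8)

image = np.random.randint(0, 256, (240, 320), dtype=np.uint8)     # stand-in thermal frame
lut = redistribution_lut(peak_intensity=140.0)                    # 140 = assumed changed-pixel intensity
enhanced = lut[image]                                             # remap changed and non-changed pixels alike
print(enhanced.dtype, enhanced.shape)
```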
-
Publication number: 20200193575
Abstract: The information processing apparatus has an image correction unit. The image correction unit has a correction parameter determination unit configured to determine a correction parameter for correcting an image capturing characteristic of image data corresponding to a target viewpoint among a plurality of pieces of image data acquired by capturing an object from a plurality of viewpoints based on an image capturing characteristic of image data of another viewpoint different from the target viewpoint and a pixel value correction unit configured to correct a pixel value of the image data corresponding to the target viewpoint based on the correction parameter. Moreover, the information processing apparatus further has an image composition unit configured to generate composed image data based on the image data whose pixel value has been corrected.
Type: Application
Filed: December 4, 2019
Publication date: June 18, 2020
Inventor: Tatsuro Koizumi
-
Publication number: 20200193576
Abstract: A storage medium stores correction data for obtaining a correction amount for correcting image data, obtained from an image formed by a lens apparatus, with respect to a distribution of a light amount in the image, wherein the correction data includes a coefficient of an n-th order polynomial (where n is a non-negative integer) with respect to an image height h, the coefficient corresponding to a state of the lens apparatus. The coefficient satisfies a first inequality −0.15 ≤ dD′(h)/dDlens(h) ≤ 1.98, where dDlens(h) represents a change amount of the light amount at the image height h per an increase amount dh of the image height h, and dD′(h) represents a change amount of an inverse of a value of the n-th order polynomial at the image height h per the increase amount dh.
Type: Application
Filed: December 9, 2019
Publication date: June 18, 2020
Inventors: Tomoya Yamada, Kazufumi Goto
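As general background for this kind of correction data, the sketch below applies a peripheral-light-amount (vignetting) correction in which the relative light amount is modeled as an n-th order polynomial of image height h and each pixel is multiplied by the inverse of that value. The coefficients are invented and the application's specific conditional expression is not evaluated here.

```python
import numpy as np

# Example coefficients of D(h), the relative light amount as a polynomial in image
# height h (h = 0 at the image center, 1 at the far corner). Invented values.
coeffs = [-0.35, 0.05, -0.02, 1.0]   # numpy.polyval order: highest power first

def correct_light_falloff(image: np.ndarray) -> np.ndarray:
    rows, cols = image.shape[:2]
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    y, x = np.mgrid[0:rows, 0:cols]
    h = np.hypot((y - cy) / cy, (x - cx) / cx) / np.sqrt(2.0)   # normalized image height per pixel
    gain = 1.0 / np.polyval(coeffs, h)                          # correction amount = 1 / D(h)
    return np.clip(image * gain[..., None], 0, 255).astype(np.uint8)

image = np.full((480, 640, 3), 128, dtype=np.uint8)             # flat gray test image
print(correct_light_falloff(image).shape)
```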
-
Publication number: 20200193577
Abstract: A method for implementing image enhancement includes: performing filtering processing on a to-be-processed image to obtain an image subjected to the filtering processing; determining similarity degrees between pixel points in the to-be-processed image and a target region of a target object in the to-be-processed image; and fusing the similarity degrees, the to-be-processed image and the image subjected to the filtering processing, so that the higher a similarity degree between a pixel point and the target object in the to-be-processed image, the stronger a filtering effect of the pixel point, and the lower a similarity degree between the pixel point and the target object in the to-be-processed image, the weaker a filtering effect of the pixel point.
Type: Application
Filed: February 22, 2020
Publication date: June 18, 2020
Applicant: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
Inventors: Mingyang HUANG, Jianping SHI
-
Publication number: 20200193578
Abstract: A method for image processing, which comprises the following steps: Generating a first histogram from a first image; Calculating a first parameter profile from the first image indicative of the quality of the first image; Adjusting the first parameter profile to generate a second parameter profile; Using the second parameter profile to generate a statistical distribution via a statistical distribution generator, wherein the statistical distribution is characterized by at least three parameters; Using the statistical distribution to perform a histogram specification to the first histogram of the first image to generate a second histogram; Generating a second image based on the first image and the second histogram.
Type: Application
Filed: February 21, 2020
Publication date: June 18, 2020
Applicant: CHONGQING UNIVERSITY OF POSTS AND TELECOMMUNICATIONS
Inventors: Guoyin WANG, Tong ZHAO, Bin XIAO
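The histogram-specification step is a standard operation; the sketch below matches an 8-bit image's histogram to an arbitrary target distribution (here a three-parameter generalized Gaussian stands in for the application's statistical-distribution generator) via the usual inverse-CDF lookup. Parameter names and values are illustrative.

```python
import numpy as np

def specify_histogram(image: np.ndarray, target_pdf: np.ndarray) -> np.ndarray:
    """Remap an 8-bit grayscale image so its histogram follows target_pdf (length 256)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
    src_cdf = np.cumsum(hist) / hist.sum()
    tgt_cdf = np.cumsum(target_pdf) / target_pdf.sum()
    # For each source level, find the target level with the closest cumulative probability.
    mapping = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return mapping[image]

# Target distribution from three parameters (mean, scale, shape) - a generalized Gaussian.
mu, alpha, beta = 128.0, 40.0, 1.5
v = np.arange(256, dtype=np.float64)
target_pdf = np.exp(-(np.abs(v - mu) / alpha) ** beta)

image = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # stand-in first image
second_image = specify_histogram(image, target_pdf)
print(second_image.mean())
```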
-
Publication number: 20200193579
Abstract: An image processing device includes: a setting unit configured to set an area, as a first area, in which at least one delimiting line for delimiting a parking space is detected in a first image of plural images continuously captured while moving; and a prediction unit configured to predict, based on the first area, a second area in which the at least one delimiting line is to be detected in at least one second image of the plural images, the at least one second image being captured later in time than the first image.
Type: Application
Filed: September 18, 2019
Publication date: June 18, 2020
Applicant: DENSO TEN Limited
Inventors: Yasutaka OKADA, Hiroaki SANO, Tetsuo YAMAMOTO, Atsushi YOSHIHARA, Jun KANETAKE, Ryo YOSHIMURA, Tomoki SHIDORI
-
Publication number: 20200193580
Abstract: Generally described, one or more aspects of the present application correspond to systems and techniques for spectral imaging using a multi-aperture system with curved multi-bandpass filters positioned over each aperture. The present disclosure further relates to techniques for implementing spectral unmixing and image registration to generate a spectral datacube using image information received from such imaging systems. Aspects of the present disclosure relate to using such a datacube to analyze the imaged object, for example to analyze tissue in a clinical setting, perform biometric recognition, or perform materials analysis.
Type: Application
Filed: January 9, 2020
Publication date: June 18, 2020
Inventors: Brian McCall, Wensheng Fan, Jason Dwight, Zhicun Gao, Jeffrey E. Thatcher, John Michael DiMaio
-
Publication number: 20200193581
Abstract: The technology described in this document can be embodied in a method that includes receiving during a first time period, information from a first sensor representing a target illuminated by a first illumination source radiating in a first wavelength range, and information from a second sensor representing the target illuminated by a second illumination source radiating in a second wavelength range. The method also includes receiving during a second time period, information from the first sensor representing the target illuminated by the second illumination source radiating in the first wavelength range, and information from the second sensor representing reflected light received from the target illuminated by the first illumination source radiating in the second wavelength range. The method also includes generating a representation of the image in which effects due to the first and second illumination sources are enhanced over effects due to ambient light sources.
Type: Application
Filed: February 19, 2020
Publication date: June 18, 2020
Applicant: Alibaba Group Holding Limited
Inventor: Reza R. Derakhshani
-
Publication number: 20200193582
Abstract: An information processing apparatus for supporting a user's task for identifying a defect in an object based on a target image that is a photographed image of the object, includes: a selecting unit that selects, based on a user's input, one from one or more reference images referred to by the user for identifying a defect in the object; a display unit that comparably displays the target image and the selected reference image on a certain display device; a specifying unit that receives a user's operation for specifying a defect in the target image displayed by the display unit; and a generating unit that generates a new reference image based on a partial area, of the target image, including the specified defect. A new reference image generated by the generating unit is added to the one or more reference images selectable by the selecting unit.
Type: Application
Filed: February 24, 2020
Publication date: June 18, 2020
Inventor: Kohei Iwabuchi
-
Publication number: 20200193583
Abstract: A system and method of processing an image is provided in which an input image output by an imaging sensor is received. For each location of a plurality of locations of a reference point of a moving window in the input image, a first image quality metric is determined as a function of quality of first image content included in a region covered by the moving window, wherein the window is sized to include at least a significant portion of a target of interest. An enhancement process is applied to the input image and generates a resulting enhanced image that is spatially registered with the input image. For each location of the plurality of locations of the reference point of the moving window in the enhanced image, a second image quality metric is determined as a function of quality of second image content included in the region covered by the moving window.
Type: Application
Filed: December 12, 2018
Publication date: June 18, 2020
Applicant: Goodrich Corporation
Inventor: Haijun Hu
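To illustrate the per-window metric, the sketch below slides a fixed-size window over the same reference-point grid in both the input image and the spatially registered enhanced image and computes a simple stand-in quality measure (local intensity variance) for each window; the window size, stride, and the metric itself are placeholders for whatever metric the system actually uses.

```python
import numpy as np

def windowed_metric(image: np.ndarray, win: int = 64, stride: int = 32) -> np.ndarray:
    """Return a grid of per-window quality scores (here: local intensity variance)."""
    rows = range(0, image.shape[0] - win + 1, stride)
    cols = range(0, image.shape[1] - win + 1, stride)
    return np.array([[image[r:r + win, c:c + win].var() for c in cols] for r in rows])

input_image = np.random.rand(512, 512)             # image from the imaging sensor (stand-in)
enhanced_image = np.clip(input_image * 1.2, 0, 1)  # spatially registered enhanced image (stand-in)

first_metric = windowed_metric(input_image)        # metric at each window location, input image
second_metric = windowed_metric(enhanced_image)    # metric at the same locations, enhanced image
print(first_metric.shape, (second_metric - first_metric).mean())
```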
-
Publication number: 20200193584
Abstract: An apparatus and method for determining image sharpness is provided. According to one embodiment, an apparatus includes a weight device configured to determine a weight map of a reference image; an image sharpening device configured to sharpen the reference image using at least one sharpening method; an edge activity map device connected to the image sharpening device and configured to determine a first edge activity map ?(x, y) for each sharpened image of reference image by the at least one sharpening method; and an edge sharpness metric device connected to the weight device and the first edge activity map device and configured to determine an edge sharpness metric (ESM) for each sharpened image of the reference image by the at least one sharpening method based on the weight map and the edge activity map for each sharpened image of the reference image by the at least one sharpening method.
Type: Application
Filed: July 17, 2019
Publication date: June 18, 2020
Inventors: Seongjun PARK, Shuangquan WANG, Jungwon LEE
-
Publication number: 20200193585
Abstract: An information processing apparatus determines, for an image to be processed including a first region having a first image quality and a second region other than the first region having a second image quality lower than the first image quality, whether or not a difference in image quality between the first image quality and the second image quality is equal to or larger than a predetermined value, converts, in a case where the determination unit determines that the difference between the first image quality and the second image quality is equal to or larger than the predetermined value, the image of the second region into an image having a third image quality higher than the second image quality, and generates a combined image by using the post-conversion image having the third image quality and the image of the first region.
Type: Application
Filed: December 6, 2019
Publication date: June 18, 2020
Inventor: Hideyuki Ikegami
-
Publication number: 20200193586
Abstract: A method and system for propelling and controlling displacement of a microrobot in a space having a wall, includes the steps of: forming the microrobot with a body containing a magnetic field-of-force responsive material, wherein, in response to a magnetic field of force, a force is applied to the material in a direction of the magnetic field of force; positioning the microrobot in the space for displacement in that space; and generating the magnetic field of force with a predetermined gradient and applying the magnetic field of force to the microrobot propelling the microrobot through the space in a direction of a field of force. Then, a sequence of field generating steps are executed, wherein each step includes calculating the direction, amplitude and spatial variation of the net field of force to control displacement of the microrobot in the space and against the wall from one equilibrium point to another.
Type: Application
Filed: December 16, 2019
Publication date: June 18, 2020
Applicant: ETH ZÜRICH
Inventors: Christophe CHAUTEMS, Bradley James NELSON
-
Publication number: 20200193587
Abstract: An inline vision-based system used for the inspection and processing of food material and associated imaging methods are disclosed. The system includes a conveyor belt, a transparent plate, and an imaging system, wherein the imaging system includes a light source and at least one camera. The imaging system produces image data from multiple views of light passing through an object on the transparent plate and captured by the camera. The image data corresponds to one of transmittance, interactance, or reflectance image data and is transmitted to a processor. The processor processes the data using machine learning to generate a three dimensional model of the geometry of a portion of material internal to the object so as to determine boundaries of the portion relative to the surrounding material.
Type: Application
Filed: December 18, 2018
Publication date: June 18, 2020
Inventor: Stefan Mairhofer
-
Publication number: 20200193588
Abstract: One or more semiconductor wafers or portions thereof are scanned using a primary optical mode, to identify defects. A plurality of the identified defects, including defects of a first class and defects of a second class, are selected and reviewed using an electron microscope. Based on this review, respective defects of the plurality are classified as defects of either the first class or the second class. The plurality of the identified defects is imaged using a plurality of secondary optical modes. One or more of the secondary optical modes are selected for use in conjunction with the primary optical mode, based on results of the scanning using the primary optical mode and the imaging using the plurality of secondary optical modes. Production semiconductor wafers are scanned for defects using the primary optical mode and the one or more selected secondary optical modes.
Type: Application
Filed: May 8, 2019
Publication date: June 18, 2020
Inventors: Bjorn Brauer, Richard Wallingford, Kedar Grama, Hucheng Lee, Sangbong Park
-
Publication number: 20200193589
Abstract: A computer-implemented method for generating an improved map of field anomalies using digital images and machine learning models is disclosed.
Type: Application
Filed: December 9, 2019
Publication date: June 18, 2020
Inventors: BOYAN PESHLOV, WEILIN WANG
-
Publication number: 20200193590
Abstract: A framework for quantitative evaluation of time-varying data. In accordance with one aspect, the framework delineates a volume of interest in a four-dimensional (4D) Digital Subtraction Angiography (DSA) dataset (204). The framework then extracts a centerline of the volume of interest (206). In response to receiving one or more user-selected points along the centerline (208), the framework determines at least one blood dynamics measure associated with the one or more user-selected points (210), and generates a visualization based on the blood dynamics measure (212).
Type: Application
Filed: February 23, 2018
Publication date: June 18, 2020
Inventors: Sebastian Schafer, Markus Kowarschik, Sonja Gehrisch, Kevin Royalty, Christopher Rohkohl
-
Publication number: 20200193591
Abstract: Disclosed are systems and methods for generating data sets for training deep learning networks for key point annotations and measurements extraction from photos taken using a mobile device camera. The method includes the steps of receiving a 3D scan model of a 3D object or subject captured from a 3D scanner and a 2D photograph of the same 3D object or subject at a virtual workspace. The 3D scan model is rigged with one or more key points. A superimposed image of a pose-adjusted and aligned 3D scan model superimposed over the 2D photograph is captured by a virtual camera in the virtual workspace. Training data for a key point annotation DLN is generated by repeating the steps for a plurality of objects belonging to a plurality of object categories. The key point annotation DLN learns from the training data to produce key point annotations of objects from 2D photographs captured using any mobile device camera.
Type: Application
Filed: November 26, 2019
Publication date: June 18, 2020
Inventors: Kyohei Kamiyama, Chong Jin Koh
-
Publication number: 20200193592
Abstract: The present disclosure provides techniques and apparatus for capturing an image of a person's retina fundus, identifying the person, accessing various electronic records (including health records) or accounts or devices associated with the person, determining the person's predisposition to certain diseases, and/or diagnosing health issues of the person. Some embodiments provide imaging apparatus having one or more imaging devices for capturing one or more images of a person's eye(s). Imaging apparatus described herein may include electronics for analyzing and/or exchanging captured image and/or health data with other devices. In accordance with various embodiments, imaging apparatus described herein may be alternatively or additionally configured for biometric identification and/or health status determination techniques, as described herein.
Type: Application
Filed: December 12, 2019
Publication date: June 18, 2020
Applicant: Tesseract Health, Inc.
Inventors: Maurizio Arienzo, Owen Kaye-Kauderer, Tyler S. Ralston, Benjamin Rosenbluth, Jonathan M. Rothberg, Lawrence C. West, Jacobus Coumans, Christopher Thomas McNulty
-
Publication number: 20200193593
Abstract: A system is disclosed for remotely determining patient compliance with an orthodontic device. This system includes a handheld portable computing device having a camera, and the handheld portable computing device is configured for communication via the Internet. A patient compliance application is executed by the handheld portable computing device, and an image analysis module is associated with the patient compliance application. The image analysis module receives an image from the camera, and the image analysis module analyzes the image to determine a presence status of the removable orthodontic device. The patient compliance application is further configured to communicate the presence status to an orthodontic provider at a remote location relative to the user.
Type: Application
Filed: December 12, 2019
Publication date: June 18, 2020
Inventors: Stephen Powell, Joseph T. Acklin
-
Publication number: 20200193594
Abstract: Systems and methods for identifying and assessing lymph nodes are provided. Medical image data (e.g., one or more computed tomography images) of a patient is received and anatomical landmarks in the medical image data are detected. Anatomical objects are segmented from the medical image data based on the one or more detected anatomical landmarks. Lymph nodes are identified in the medical image data based on the one or more detected anatomical landmarks and the one or more segmented anatomical objects. The identified lymph nodes may be assessed by segmenting the identified lymph nodes from the medical image data and quantifying the segmented lymph nodes. The identified lymph nodes and/or the assessment of the identified lymph nodes are output.
Type: Application
Filed: December 13, 2019
Publication date: June 18, 2020
Inventors: Bogdan Georgescu, Elijah D. Bolluyt, Alexandra Comaniciu, Sasa Grbic
-
Publication number: 20200193595
Abstract: A medical information processing apparatus according to an embodiment includes: a memory storing therein a trained model provided with a function to specify, on the basis of input information including a medical image and medical examination information related to the medical image, at least one selected from between a relevant image relevant to the medical image and an image processing process performed on the basis of the medical image; and processing circuitry configured to give an evaluation to at least one selected from between the relevant image and the image processing process specified by the trained model.
Type: Application
Filed: December 17, 2019
Publication date: June 18, 2020
Applicant: CANON MEDICAL SYSTEMS CORPORATION
Inventors: Taisuke IWAMURA, Keita MITSUMORI
-
Publication number: 20200193596
Abstract: A system and method for determining types of objects within a bodily fluid sample includes a sample holder holding a bodily fluid sample, an image capture device generating a plurality of images of the sample, and a sample positioner positioning the sample holder. An image capture device generates a plurality of images of the sample at a plurality of positions. A trained image classifier classifies the plurality of images to identify a type of objects in the bodily fluid sample. An analyzer, in response to the classifying, displays an indicator on a display indicating that the type of objects is present within the bodily fluid sample.
Type: Application
Filed: December 18, 2019
Publication date: June 18, 2020
Applicant: Hemotech Cognition, LLC
Inventor: Theodore F. BAYER
-
Publication number: 20200193597
Abstract: Machine learning systems and methods are disclosed for prediction of wound healing, such as for diabetic foot ulcers or other wounds, and for assessment implementations such as segmentation of images into wound regions and non-wound regions. Systems for assessing or predicting wound healing can include a light detection element configured to collect light of at least a first wavelength reflected from a tissue region including a wound, and one or more processors configured to generate an image based on a signal from the light detection element having pixels depicting the tissue region, determine reflectance intensity values for at least a subset of the pixels, determine one or more quantitative features of the subset of the plurality of pixels based on the reflectance intensity values, and generate a predicted or assessed healing parameter associated with the wound over a predetermined time interval.
Type: Application
Filed: January 9, 2020
Publication date: June 18, 2020
Inventors: Wensheng Fan, John Michael DiMaio, Jeffrey E. Thatcher, Peiran Quan, Faliu Yi, Kevin Plant, Ronald Baxter, Brian McCall, Zhicun Gao, Jason Dwight
-
Publication number: 20200193598
Abstract: A dynamic analysis system includes a diagnostic console which calculates at least one index value representing variation in a target portion of a human body from at least one dynamic image acquired by performing radiographic imaging to a subject containing the target portion, and evaluates flexibility of the target portion based on the calculated index value.
Type: Application
Filed: February 20, 2020
Publication date: June 18, 2020
Inventors: Sho NOJI, Koichi FUJIWARA, Hitoshi FUTAMURA, Akinori TSUNOMORI
-
Publication number: 20200193599
Abstract: A system for computer-aided triage can include a router, a remote computing system, and a client application. A method for computer-aided triage can include determining a parameter associated with a data packet, determining a treatment option based on the parameter, and transmitting information to a device associated with a second point of care.
Type: Application
Filed: February 26, 2020
Publication date: June 18, 2020
Inventors: Christopher Mansi, David Golan
-
Publication number: 20200193600
Abstract: A set of pre-operative images may be captured of an anatomical structure using an endoscopic camera. Each captured image is associated with a position and orientation of the camera at the moment of capture using image guided surgery (IGS) techniques. This image data and position data may be used to create a navigation map of captured images. During a surgical procedure on the anatomical structure, a real-time endoscopic view may be captured and displayed to a surgeon. The IGS navigation system may determine the position and orientation of the real-time image; and select an appropriate pre-operative image from the navigation map to display to the surgeon in addition to the real-time image.
Type: Application
Filed: October 28, 2019
Publication date: June 18, 2020
Inventors: Ehsan Shameli, Jetmir Palushi, Fatemeh Akbarian, Yehuda Algawi, Assaf Govari, Babak Ebrahimi
-
Publication number: 20200193601
Abstract: An image capturing apparatus sets reference values for a plurality of evaluation indexes and captures images of affected regions for the evaluation indexes, based on a user's operation. An image processing apparatus analyzes the captured images and determines the affected region(s) for the evaluation index(es) exceeding the associated reference value(s) set by the user. The image capturing apparatus causes a display unit to highlight the affected region(s) for the evaluation index(es) exceeding the associated reference value(s) and superposes (displays) the affected region(s) on the image of an affected region.
Type: Application
Filed: December 10, 2019
Publication date: June 18, 2020
Inventor: Takashi Sugimoto
-
Publication number: 20200193602
Abstract: A diagnosis support system having a processor configured to: acquire medical images; detect a medicine and/or equipment used when the medical images are captured, from the medical images by image recognition; detect a region of interest from the medical images by image recognition; assign, to the medical image from which the medicine and/or equipment is detected, first detection information indicating the detected medicine and/or equipment; assign, to the medical image from which the region of interest is detected, second detection information indicating the detected region of interest; and display, on a display device, the medical images in a list in a display form according to the first detection information and the second detection information.
Type: Application
Filed: February 19, 2020
Publication date: June 18, 2020
Applicant: FUJIFILM Corporation
Inventor: Shumpei KAMON
-
Publication number: 20200193603
Abstract: Systems and methods for automated segmentation of anatomical structures (e.g., heart). Convolutional neural networks (CNNs) may be employed to autonomously segment parts of an anatomical structure represented by image data, such as 3D MRI data. The CNN utilizes two paths, a contracting path and an expanding path. In at least some implementations, the expanding path includes fewer convolution operations than the contracting path. Systems and methods also autonomously calculate an image intensity threshold that differentiates blood from papillary and trabeculae muscles in the interior of an endocardium contour, and autonomously apply the image intensity threshold to define a contour or mask that describes the boundary of the papillary and trabeculae muscles. Systems and methods also calculate contours or masks delineating the endocardium and epicardium using the trained CNN model, and anatomically localize pathologies or functional characteristics of the myocardial muscle using the calculated contours or masks.
Type: Application
Filed: February 25, 2020
Publication date: June 18, 2020
Inventors: Daniel Irving Golden, Matthieu Le, Jesse Lieman-Sifry, Hok Kan Lau
-
Publication number: 20200193604
Abstract: A segmentation model is trained with an image reconstruction model that shares an encoding. During application of the segmentation model, the segmentation model may use the encoding and network layers trained for the segmentation without the image reconstruction model. The image reconstruction model may include a probabilistic representation of the image that represents the image based on a probability distribution. When training the model, the encoding layers of the model use a loss function including an error term from the segmentation model and from the autoencoder model. The image reconstruction model thus regularizes the encoding layers and improves modeling results and prevents overfitting, particularly for small training sizes.
Type: Application
Filed: December 17, 2018
Publication date: June 18, 2020
Inventor: Andriy Myronenko
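A compact PyTorch sketch of the training idea: one encoder feeds both a segmentation head and a reconstruction head, and the encoder is updated with the sum of the segmentation error and a weighted reconstruction error. The layer sizes, the plain (non-probabilistic) reconstruction head, and the weighting are simplifications of what the abstract describes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegWithReconstruction(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                       # shared encoding layers
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv2d(32, 1, 1)     # segmentation logits (kept at inference time)
        self.recon_head = nn.Conv2d(32, 1, 1)   # image reconstruction (training-time regularizer)

    def forward(self, x):
        z = self.encoder(x)
        return self.seg_head(z), self.recon_head(z)

model = SegWithReconstruction()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

image = torch.rand(2, 1, 64, 64)                 # toy training batch
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()  # toy segmentation labels
recon_weight = 0.1                               # assumed weight of the reconstruction term

seg_logits, recon = model(image)
loss = F.binary_cross_entropy_with_logits(seg_logits, mask) \
       + recon_weight * F.mse_loss(recon, image)
loss.backward()
optimizer.step()
print(float(loss))
```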
-
Publication number: 20200193605
Abstract: A method for curvilinear object segmentation includes receiving at least one input image comprising curvilinear features. The at least one input image is mapped to segmentation maps of the curvilinear features using a deep network having a representation module and a task module. The mapping includes transforming the input image in the representation module using learnable filters configured to balance recognition of curvilinear geometry with reduction of training error. The segmentation maps are produced using the transformed input image in the task module.
Type: Application
Filed: December 18, 2018
Publication date: June 18, 2020
Inventors: Raja Bala, Venkateswararao Cherukuri, Vijay Kumar B G