Patent Applications Published on August 17, 2017
-
Publication number: 20170236245
Abstract: A system, method, and computer program product are provided for remote rendering of computer graphics. The system includes a graphics application program resident at a remote server. The graphics application is invoked by a user or process located at a client. The invoked graphics application proceeds to issue graphics instructions. The graphics instructions are received by a remote rendering control system. Given that the client and server differ with respect to graphics context and image processing capability, the remote rendering control system modifies the graphics instructions in order to accommodate these differences. The modified graphics instructions are sent to graphics rendering resources, which produce one or more rendered images. Data representing the rendered images is written to one or more frame buffers. The remote rendering control system then reads this image data from the frame buffers. The image data is transmitted to the client for display or processing.
Type: Application
Filed: February 24, 2017
Publication date: August 17, 2017
Inventor: Phillip C. Keslin
-
Publication number: 20170236246
Abstract: A mechanism is described for facilitating parallel scheduling of multiple commands on computing devices. A method of embodiments, as described herein, includes detecting a command of a plurality of commands to be processed at a graphics processing unit (GPU), and acquiring one or more resources of a plurality of resources to process the command. The plurality of resources may include other resources being used to process other commands of the plurality of commands. The method may further include facilitating processing of the command using the one or more resources, wherein the command is processed in parallel with processing of the other commands using the other resources.
Type: Application
Filed: September 12, 2014
Publication date: August 17, 2017
Applicant: INTEL CORPORATION
Inventor: MICHAL ANDRZEJ MROZEK
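A minimal host-side sketch of the idea, not Intel's driver implementation: each command acquires only the resources it needs, so commands touching disjoint resources run in parallel while conflicting commands serialize on shared locks. The resource names and the fixed lock-ordering strategy are illustrative assumptions.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Hypothetical named resources a GPU command might need.
resource_locks = {name: threading.Lock() for name in ("copy_engine", "compute_queue", "surface_A")}

def run_command(name, resources, work):
    # Acquire locks in a fixed (sorted) order to avoid deadlocks between commands.
    for r in sorted(resources):
        resource_locks[r].acquire()
    try:
        work()
        print(f"{name} done using {resources}")
    finally:
        for r in sorted(resources, reverse=True):
            resource_locks[r].release()

with ThreadPoolExecutor(max_workers=4) as pool:
    # These two commands use disjoint resources, so they can proceed in parallel.
    pool.submit(run_command, "blit", ["copy_engine"], lambda: None)
    pool.submit(run_command, "kernel", ["compute_queue", "surface_A"], lambda: None)
```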
-
Publication number: 20170236247
Abstract: A mechanism is described for facilitating ray compression for efficient graphics data processing at computing devices. A method of embodiments, as described herein, includes forwarding a set of rays to a ray compression unit hosted by a graphics processor at a computing device, and facilitating the ray compression unit to compress the set of rays, wherein the set of rays are compressed into a compressed representation.
Type: Application
Filed: February 17, 2016
Publication date: August 17, 2017
Applicant: INTEL CORPORATION
Inventor: TOMAS G. AKENINE-MOLLER
-
Publication number: 20170236248
Abstract: A processor identifies a curved structure in three-dimensional medical image data. The processor selects a plane in the three-dimensional medical image data based at least in part on the identified curved structure. The processor defines a curved image slice in the selected plane based at least in part on the identified curved structure. The curved image slice may be defined by drawing a pair of curved lines on opposite sides of the identified curved structure in the selected plane. The distance between the pair of curved lines may define a thickness of the curved image slice. The processor generates a rendered image of the defined curved image slice. The rendered image may be generally perpendicular to the selected plane. The rendered image and/or the selected plane having the pair of curved lines superimposed on opposite sides of the identified curved structure may be presented at a display system.
Type: Application
Filed: February 16, 2016
Publication date: August 17, 2017
Inventors: Klaus Pintoffl, Christian Fritz Perrey, Jos Stas
-
Publication number: 20170236249
Abstract: An image distortion transformation method for transforming an original image from an imager having an original distortion profile to a transformed image optimized for a distortion processing unit includes inputting the original image from the imager into the transformation unit, inputting an original image distortion profile into the original distortion profile memory of the transformation unit, and inputting a target distortion profile into the target distortion profile memory of the transformation unit. The target distortion profile is different from the original distortion profile. The method further includes transforming the original image into a transformed image by transforming the distortion profile of the original image from the original image distortion profile to the target image distortion profile, and outputting the transformed image from the transformation unit.
Type: Application
Filed: February 16, 2017
Publication date: August 17, 2017
Inventors: Patrice ROULET, Xiaojun DU, Jocelyn PARENT, Pierre KONEN, Simon THIBAULT, Pascale NINI, Valentin BATAILLE, Jhinseok LEE, Hu ZHANG
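A minimal sketch, not the patented method, of remapping an image from one radial distortion profile to another. It assumes both profiles are supplied as monotonic lookup tables mapping field angle (radians) to image radius (pixels), that the optical center is the image center, and that the input is a single-channel image.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def transform_distortion(image, angles, r_original, r_target):
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    dx, dy = xx - cx, yy - cy
    r_t = np.hypot(dx, dy)                      # radius of each pixel in the output image
    az = np.arctan2(dy, dx)                     # azimuth is preserved by a radial remap
    theta = np.interp(r_t, r_target, angles)    # invert the target profile: radius -> angle
    r_o = np.interp(theta, angles, r_original)  # original profile: angle -> source radius
    src_x = cx + r_o * np.cos(az)
    src_y = cy + r_o * np.sin(az)
    # Bilinearly sample the original image at the computed source coordinates.
    return map_coordinates(image, [src_y, src_x], order=1, mode='nearest')
```

For example, passing an equidistant target profile (radius proportional to field angle) would "defish" a panomorph or fisheye image under these assumptions.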
-
Publication number: 20170236250
Abstract: In embodiments of facial feature liquifying using face mesh, an image processing application is implemented to modify facial features of a face in an image from a combination of deformation fields. The image processing application can generate a face mesh that includes landmark points, and then construct the deformation fields on the face mesh, where the deformation fields are defined by warpable elements formed from the landmark points. The image processing application can also combine the deformation fields. The image processing application can also receive an input to initiate modifying one or more of the facial features of the face in the image using the combined deformation fields.
Type: Application
Filed: May 1, 2017
Publication date: August 17, 2017
Applicant: Adobe Systems Incorporated
Inventors: Byungmoon Kim, Daichi Ito, Gahye Park
-
Publication number: 20170236251
Abstract: A computer system for continuously panning oblique images is disclosed. More particularly, the computer system uses a methodology whereby separate oblique images are presented in a manner that allows a user to maintain an understanding of the relationship of specific features between different oblique images when panning.
Type: Application
Filed: December 22, 2016
Publication date: August 17, 2017
Inventors: Stephen L. Schultz, Chris Schnaufer, Frank Giuffrida
-
Publication number: 20170236252
Abstract: Techniques are described for generating and rendering video content based on area of interest (also referred to as foveated rendering) to allow 360 video or virtual reality to be rendered with relatively high pixel resolution even on hardware not specifically designed to render at such high pixel resolution. Processing circuitry may be configured to keep the pixel resolution within a first portion of an image of one view at the relatively high pixel resolution, but reduce the pixel resolution through the remaining portions of the image of the view based on an eccentricity map and/or user eye placement. A device may receive the images of these views and process the images to generate viewable content (e.g., perform stereoscopic rendering or interpolation between views). Processing circuitry may also make use of future frames within a video stream and base predictions on those future frames.
Type: Application
Filed: September 20, 2016
Publication date: August 17, 2017
Inventors: Phi Hung Le Nguyen, Ning Bi
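A minimal post-processing sketch of the foveation idea, not the described rendering pipeline: full detail is kept near a gaze point and detail is progressively reduced with eccentricity by blending the frame with a blurred copy. The gaze point, fovea radius, and linear falloff stand in for the eccentricity map and are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(frame, gaze_xy, fovea_radius=120.0, blur_sigma=6.0):
    # frame is assumed to be an H x W x 3 array.
    h, w = frame.shape[:2]
    low = gaussian_filter(frame.astype(np.float64), sigma=(blur_sigma, blur_sigma, 0))
    yy, xx = np.mgrid[0:h, 0:w]
    ecc = np.hypot(xx - gaze_xy[0], yy - gaze_xy[1])          # eccentricity from the gaze point
    # Weight 1 inside the fovea, falling linearly to 0 outside it (a crude eccentricity map).
    w_full = np.clip(1.0 - (ecc - fovea_radius) / fovea_radius, 0.0, 1.0)[..., None]
    return (w_full * frame + (1.0 - w_full) * low).astype(frame.dtype)
```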
-
Publication number: 20170236253
Abstract: Systems, methods, and computer-readable media acquire an image captured with a mobile device. Motion sensor data of the mobile device at or near a time when the image was captured is acquired. An angle of rotation is computed based on the motion sensor data, and the image is transformed based on the angle of rotation. In another aspect, a user interface enables user control over image transformation. The user interface enables user control over rotating an image on a display at two or more granularities. A point of rotation may be user-defined. Rotated images may be scaled to fit within a viewing frame for displaying the transformed image.
Type: Application
Filed: May 4, 2017
Publication date: August 17, 2017
Inventors: Alex Restrepo, Kevin Systrom
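A minimal sketch of the sensor-driven step: the tilt angle is estimated from the accelerometer's gravity vector at capture time and the image is rotated to compensate. The accelerometer axis convention (x to the right, y up in the image plane) and the sign of the correction are assumptions that depend on the device.

```python
import math
from scipy.ndimage import rotate

def straighten(image, accel_x, accel_y):
    # Roll of the device relative to gravity; sign convention depends on the sensor axes.
    roll_deg = math.degrees(math.atan2(accel_x, accel_y))
    # Rotate the image back by the measured tilt; reshape=True keeps the whole frame,
    # which could then be scaled to fit the viewing frame as the abstract describes.
    return rotate(image, angle=roll_deg, reshape=True, order=1)
```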
-
Publication number: 20170236254
Abstract: An image processing apparatus including an image transmission map estimator, a transmission map optimizer, and an image rebuilder is provided. The image transmission map estimator receives an input image and estimates a transmission rate of the input image to generate an estimated transmission map. The transmission map optimizer receives the estimated transmission map and performs smoothing operations of different strengths on the estimated transmission map to respectively generate a plurality of smoothed transmission maps. The transmission map optimizer generates an optimized transmission map according to the estimated and smoothed transmission maps. The image rebuilder receives the optimized transmission map and generates an output image by rebuilding the input image according to the optimized transmission map.
Type: Application
Filed: April 25, 2016
Publication date: August 17, 2017
Applicant: Novatek Microelectronics Corp.
Inventor: Hsiu-Wei Ho
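A minimal dehazing-style sketch of the flow: estimate a transmission map, smooth it at several strengths, fuse the smoothed maps, and rebuild the image from the fused map. The dark-channel estimate and the edge-weighted fusion are illustrative choices, not the patented optimizer.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, minimum_filter

def dehaze(img, atmosphere=0.95, omega=0.95, t_min=0.1, sigmas=(2, 8, 32)):
    img = img.astype(np.float64) / 255.0
    dark = minimum_filter(img.min(axis=2), size=15)          # rough dark channel
    t_est = 1.0 - omega * dark / atmosphere                  # estimated transmission map
    smoothed = [gaussian_filter(t_est, s) for s in sigmas]   # smoothing at different strengths
    # Fusion heuristic: keep weak smoothing near edges, strong smoothing in flat areas.
    grad = np.hypot(*np.gradient(t_est))
    w_edge = np.clip(grad / (grad.max() + 1e-8), 0.0, 1.0)
    t_opt = w_edge * smoothed[0] + (1.0 - w_edge) * smoothed[-1]
    t_opt = np.clip(t_opt, t_min, 1.0)[..., None]
    # Rebuild the input image according to the optimized transmission map.
    return np.clip((img - atmosphere) / t_opt + atmosphere, 0.0, 1.0)
```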
-
Publication number: 20170236255
Abstract: Near-eye display systems in accordance with embodiments of the invention enable accommodation-invariant display control. One embodiment includes a near-eye display; a processor; a memory containing a target image and an accommodation-invariant display application; where the processor is configured by the accommodation-invariant display application to calculate an impulse response of the near-eye display; calculate a compensation image by generating a deconvolved color channel of the target image using a ratio of the target image and the impulse response, where the compensation image is a representation of the target image that remains in focus at a plurality of distances from the near-eye display; and display the compensation image on the near-eye display.
Type: Application
Filed: December 16, 2016
Publication date: August 17, 2017
Applicant: The Board of Trustees of the Leland Stanford Junior University
Inventors: Gordon Wetzstein, Robert Konrad
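A minimal frequency-domain sketch of the precompensation step: each color channel of the target image is deconvolved by the display's impulse response (PSF) so that, after the display's optical blur, the perceived image approximates the target. The regularized Wiener-style ratio and the PSF input are assumptions; the publication's exact ratio formulation may differ.

```python
import numpy as np

def precompensate(target, psf, eps=1e-2):
    # target: H x W x C image in [0, 1]; psf: 2-D impulse response of the display.
    h, w, _ = target.shape
    otf = np.fft.fft2(psf, s=(h, w))                 # transfer function of the display blur
    out = np.empty_like(target, dtype=np.float64)
    for c in range(target.shape[2]):
        spec = np.fft.fft2(target[..., c])
        # Regularized ratio to avoid dividing by near-zero frequencies.
        comp = spec * np.conj(otf) / (np.abs(otf) ** 2 + eps)
        out[..., c] = np.real(np.fft.ifft2(comp))
    return np.clip(out, 0.0, 1.0)                    # compensation image to be displayed
```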
-
Publication number: 20170236256
Abstract: A computer-implemented method and system of image processing for correcting an image may include determining an initial score of the image in response to receiving an image captured by an electronic device. Input parameters for performing deconvolution of the image may be initiated. The image may be deblurred utilizing deconvolution as a function of the input parameters. A score of the image may be determined. A determination as to whether the score of the image meets at least one criterion indicative of whether an optimal image has been determined may be made. If an optimal image has been determined, the image and image parameters utilized to determine the optimal image may be output. Otherwise, the input parameters may be adjusted. The deblurring, score determination, criterion check, and parameter adjustment may be repeated until the optimal image has been determined.
Type: Application
Filed: February 10, 2017
Publication date: August 17, 2017
Inventors: Stephane Bucaille, Mark Davis
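A minimal sketch of the score-driven loop: deblur with a candidate parameter, score the result, and keep adjusting the parameter until the score stops improving. Wiener deconvolution, the regularization parameter grid, and the variance-of-Laplacian sharpness score are illustrative assumptions, not the publication's criteria.

```python
import numpy as np
from scipy.ndimage import laplace

def wiener_deblur(img, psf, k):
    otf = np.fft.fft2(psf, s=img.shape)
    spec = np.fft.fft2(img)
    return np.real(np.fft.ifft2(spec * np.conj(otf) / (np.abs(otf) ** 2 + k)))

def deblur_until_optimal(img, psf, k_values=(1e-1, 3e-2, 1e-2, 3e-3, 1e-3)):
    best, best_score, best_k = img, laplace(img).var(), None   # initial score of the image
    for k in k_values:                                          # "adjust the input parameters"
        candidate = wiener_deblur(img, psf, k)
        score = laplace(candidate).var()                        # higher = sharper (assumed criterion)
        if score > best_score:
            best, best_score, best_k = candidate, score, k
    return best, {"k": best_k, "score": best_score}             # image plus parameters used
```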
-
Publication number: 20170236257
Abstract: In one embodiment, a video processing system 300 may filter a video data set to correct skew and wobble using a central processing unit 220 and a graphical processing unit 230. The video processing system 300 may apply a rolling shutter effect correction filter to an initial version of a video data set. The video processing system 300 may simultaneously apply a video stabilization filter to the initial version to produce a final version video data set.
Type: Application
Filed: October 3, 2016
Publication date: August 17, 2017
Applicant: Microsoft Technology Licensing, LLC
Inventors: Yongjun Wu, Matthew Wozniak, Simon Baker, Catalin Alexandru Negrila, Venkata S. K. Kamal Lanka, Kevin Chin, Brian Kohlwey
-
Publication number: 20170236258
Abstract: This invention provides a system and method for finding multiple line features in an image. Two related steps are used to identify line features. First, the process computes x and y-components of the gradient field at each image location, projects the gradient field over a plurality of subregions, and detects a plurality of gradient extrema, yielding a plurality of edge points with position and gradient. Next, the process iteratively chooses two edge points, fits a model line to them, and if edge point gradients are consistent with the model, computes the full set of inlier points whose position and gradient are consistent with that model. The candidate line with greatest inlier count is retained and the set of remaining outlier points is derived. The process then repeatedly applies the line fitting operation on this and subsequent outlier sets to find a plurality of line results. The process can be exhaustive RANSAC-based.
Type: Application
Filed: October 31, 2016
Publication date: August 17, 2017
Inventors: Yu Feng Hsu, Lowell D. Jacobson, David Y. Li
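A minimal RANSAC-style sketch of the second stage: repeatedly pick two edge points, fit a line, count inliers close to the model, keep the best line, remove its inliers, and repeat on the outliers. The gradient-consistency check is omitted for brevity, and the thresholds and iteration counts are illustrative assumptions.

```python
import numpy as np

def fit_lines(points, n_lines=3, n_iter=500, dist_tol=2.0, min_inliers=20, seed=None):
    rng = np.random.default_rng(seed)
    remaining = np.asarray(points, dtype=float)       # shape (N, 2): x, y edge-point positions
    lines = []
    for _ in range(n_lines):
        best_inliers, best_line = None, None
        for _ in range(n_iter):
            i, j = rng.choice(len(remaining), size=2, replace=False)
            p, q = remaining[i], remaining[j]
            d = q - p
            norm = np.hypot(*d)
            if norm < 1e-9:
                continue
            n_vec = np.array([-d[1], d[0]]) / norm     # unit normal of the candidate line
            dist = np.abs((remaining - p) @ n_vec)     # point-to-line distances
            inliers = dist < dist_tol
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers, best_line = inliers, (p, n_vec)
        if best_inliers is None or best_inliers.sum() < min_inliers:
            break
        lines.append(best_line)                        # retain the candidate with most inliers
        remaining = remaining[~best_inliers]           # the outlier set feeds the next round
        if len(remaining) < 2:
            break
    return lines
```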
-
Publication number: 20170236259
Abstract: A color fringe is corrected by detecting a transition region that includes pixels adjacent in a linear direction. A color difference distribution in the transition region is modeled by a logistic function. Pixel color values in the transition region are corrected using the logistic function to maximize a correlation between a correction color and a reference color with respect to the transition region. Color distortion such as color fringes is corrected without corrupting the original colors of the image by modeling the color difference by the logistic function while maximizing the correlation using information of the undistorted region. A calculation cost is reduced by reducing the number of the parameters required to optimize the logistic function.
Type: Application
Filed: October 7, 2016
Publication date: August 17, 2017
Applicant: SOGANG UNIVERSITY RESEARCH FOUNDATION
Inventors: RAE-HONG PARK, DONG-WON JANG
-
Publication number: 20170236260
Abstract: A user equipment (UE) includes a modem that receives a compressed bitstream and metadata. The UE also includes a decoder that decodes the compressed bitstream to generate an HDR image, an inertial measurement unit that determines viewpoint information based on an orientation of the UE, and a graphics processing unit (GPU). The GPU maps the HDR image onto a surface and renders a portion of the HDR image based on the metadata and the viewpoint information. A display displays the portion of the HDR image.
Type: Application
Filed: September 1, 2016
Publication date: August 17, 2017
Inventors: Madhukar Budagavi, Hossein Najaf-Zadeh, Esmaeil Faramarzi, Ankur Saxena
-
Publication number: 20170236261
Abstract: A wear measurement system for a component is disclosed. The wear measurement system may have an imaging device configured to obtain a plurality of two-dimensional images of the component. The wear measurement system may also have a controller. The controller may be configured to generate a three-dimensional point cloud representing the component based on the two-dimensional images. The controller may also be configured to select at least two reference points appearing in each of a subset of images selected from the two-dimensional images. Further, the controller may be configured to determine locations of the two reference points in the three-dimensional point cloud. The controller may also be configured to determine an image distance between the locations. In addition, the controller may be configured to determine an amount of wear based on the image distance.
Type: Application
Filed: February 11, 2016
Publication date: August 17, 2017
Applicant: Caterpillar Inc.
Inventors: Nolan S. FINCH, Yue WANG, Timothy J. BENNETT, Daniel A. SMYTH, Kook In HAN
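A minimal sketch of the final measurement step only: given two reference points located in the reconstructed point cloud, the wear estimate comes from the distance between them compared against the same distance on an unworn part. The scale factor and baseline value are placeholders for illustration.

```python
import numpy as np

def wear_from_points(p1, p2, unworn_distance, units_per_cloud_unit=1.0):
    # Distance between the two reference points in the 3D point cloud, scaled to real units.
    measured = np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float)) * units_per_cloud_unit
    return unworn_distance - measured   # positive result = material lost to wear

# Example with placeholder coordinates and a hypothetical 10.0 mm unworn baseline.
print(wear_from_points([0.0, 0.0, 0.0], [0.0, 0.0, 9.4], unworn_distance=10.0))
```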
-
Publication number: 20170236262
Abstract: The system behavior is evaluated by checking the position and the orientation of a target processed by a processing device in accordance with a control instruction. A simulator estimates a behavior of a system including a processing device that processes a target. The simulator includes a measurement unit that performs image measurement of an input image including at least a part of a target as a subject of the image, an execution unit that executes a control operation for generating a control instruction directed to the processing device based on a measurement result obtained by the measurement unit, and a reproduction unit that reproduces, in the system, a behavior of a target detected in the input image together with information about a type and an orientation of the target based on time-series data for the control instruction output from the execution unit and the measurement result from the measurement unit.
Type: Application
Filed: December 1, 2016
Publication date: August 17, 2017
Applicant: OMRON Corporation
Inventors: Katsushige OHNUKI, Yosunori SAKAGUCHI, Haruna SHIMAKAWA
-
Publication number: 20170236263
Abstract: A system and method for scoring trained probes for use in analyzing one or more candidate poses of a runtime image is provided. A set of probes with location and gradient direction based on a trained model are applied to one or more candidate poses based upon a runtime image. The applied probes each respectively include a discrete set of position offsets with respect to the gradient direction thereof. A match score is computed for each of the probes, which includes estimating a best match position for each of the probes respectively relative to one of the offsets thereof, and generating a set of individual probe scores for each of the probes, respectively at the estimated best match position.
Type: Application
Filed: February 10, 2017
Publication date: August 17, 2017
Inventor: Nathaniel R. Bogan
-
Publication number: 20170236264
Abstract: A luminance-chrominance calibration production line system includes: a rail; multiple stations disposed along the rail and including multiple first darkroom stations; multiple image acquisition apparatuses respectively disposed in the first darkroom stations and for capturing different color images sequentially displayed by a to-be-calibrated LED display module loaded on the rail to acquire color image data; and a rail computer system for controlling a transport movement on the rail and controlling the to-be-calibrated LED display module to display the different color images, and being signally connected to the image acquisition apparatuses to obtain the color image data.
Type: Application
Filed: May 3, 2017
Publication date: August 17, 2017
Inventor: XINGMEI ZHAO
-
Publication number: 20170236265
Abstract: Disclosed herein are methods and systems for highlighting box surfaces and edges in mobile box dimensioning. An embodiment takes the form of a method that includes obtaining a three-dimensional (3D) point cloud from a depth sensor when the depth sensor is positioned such that an aiming indicator appears on a first surface of an object; processing the 3D point cloud to identify an extent of the first surface; further processing the 3D point cloud to identify a second surface that is adjacent and normal to the first surface, and to identify an extent of the second surface; and displaying at least part of the 3D point cloud via a user interface, including displaying the identified first surface in a first color and the identified second surface in a second color different from the first color.
Type: Application
Filed: February 11, 2016
Publication date: August 17, 2017
Inventors: HAO ZHENG, ZHIHENG JIA, DAVID S. KOCH
-
Publication number: 20170236266
Abstract: This invention provides a system and method for detecting and imaging specular surface defects on a specular surface that employs a knife-edge technique in which the camera aperture or an external device is set to form a physical knife-edge structure within the optical path that effectively blocks reflected rays from an illuminated specular surface of a predetermined degree of slope values and allows rays deflected at differing slopes to reach the vision system camera sensor. The light reflected from the flat part of the surface is mostly blocked by the knife-edge. Light reflecting from the sloped parts of the defects is mostly reflected into the entrance aperture. The illumination beam is angled with respect to the optical axis of the camera to provide the appropriate degree of incident angle with respect to the surface under inspection. The surface can be stationary or in relative motion with respect to the camera.
Type: Application
Filed: November 11, 2016
Publication date: August 17, 2017
Inventors: Fariborz Rostami, John F. Filhaber, Feng Qian
-
Publication number: 20170236267
Abstract: Disclosed herein is an apparatus including an imaging unit configured to image a region in which a holding unit is moved by operation of a moving unit, a basic image storage unit configured to store a basic image corresponding to proper operation of the holding unit or an action unit, and a controller configured to compare an image imaged by the imaging unit with the basic image stored by the basic image storage unit, and control the moving unit or the action unit such that the two images coincide with each other.
Type: Application
Filed: February 3, 2017
Publication date: August 17, 2017
Inventors: Katsuharu Negishi, Kazunari Tanaka
-
Publication number: 20170236268
Abstract: An apparatus comprises a unit configured to obtain an image of an assembled object that is constituted by first and second objects that have been assembled; a unit configured to obtain a three-dimensional shape model of the assembled object that has at least one area to which an attribute that corresponds to the first object or the second object is added; a unit configured to obtain a position and orientation of the assembled object based on the image; a unit configured to obtain, from the three-dimensional shape model of the position and orientation, first and second evaluation values that are for evaluating a state of assembly in areas that correspond to the first and second objects; and a unit configured to determine whether or not the assembly was successful based on the first and second evaluation values.
Type: Application
Filed: February 10, 2017
Publication date: August 17, 2017
Inventor: Daisuke Watanabe
-
Publication number: 20170236269
Abstract: To facilitate setting of a parameter at the time of generating an inspection image from an image acquired by using a photometric stereo principle, a photometric processing part generates an inspection image based on a plurality of luminance images acquired by a camera. A display control part and a display part switch and display the luminance image and the inspection image, or simultaneously display these images. An inspection tool setting part adjusts a control parameter of the camera and a control parameter of an illumination apparatus. Further, when the control parameter is adjusted, the display control part updates the image being displayed on the display part to an image where the control parameter after the change has been reflected.
Type: Application
Filed: May 4, 2017
Publication date: August 17, 2017
Applicant: Keyence Corporation
Inventor: Daisuke Ando
-
Publication number: 20170236270
Abstract: A parts inspection system for automated video inspection for quality control processes. The inspection system is particularly adapted for rotationally symmetrical workpieces including small arms ammunition cartridges. The system provides a first array of light sources oriented radially around the workpiece path presenting zones of illumination on the workpiece at discrete radial positions. A second illuminator is in the form of linear arrays of light emitting elements oriented along linear arrays. A camera oriented to observe images of light provided by the first and second arrays records video images of the workpieces for use in resolving criteria of acceptable and unacceptable parts. An escapement mechanism moves acceptable and rejected parts into different parts streams.
Type: Application
Filed: August 12, 2015
Publication date: August 17, 2017
Inventors: Mark Hanna, Thomas Sande, Joe Deacons, Peter Gerow
-
Publication number: 20170236271
Abstract: The present invention relates to a classification apparatus for pathologic diagnosis of a medical image and a pathologic diagnosis system using the same. According to the present invention, there is provided a classification apparatus for pathologic diagnosis of a medical image, including: a feature extraction unit configured to extract feature data for an input image using a feature extraction variable; a feature vector transformation unit configured to transform the extracted feature data into a feature vector using a vector transform variable; and a vector classification unit configured to classify the feature vector using a classification variable, and to output the results of the classification of pathologic diagnosis for the input image; wherein the feature extraction unit, the feature vector transformation unit and the vector classification unit are trained based on a first tagged image, a second tagged image, and an image having no tag information.
Type: Application
Filed: September 8, 2015
Publication date: August 17, 2017
Applicant: LUNIT INC.
Inventors: Hyo-eun KIM, Sang-heum HWANG, Seung-wook PAEK, Jung-in LEE, Min-hong JANG, Dong-geun Yoo, Kyung-hyun PAENG, Sung-gyun PARK
-
Publication number: 20170236272
Abstract: A surgical instrument navigation system is provided that visually simulates a virtual volumetric scene of a body cavity of a patient from a point of view of a surgical instrument residing in the cavity of the patient, wherein the surgical instrument, as provided, may be a steerable surgical catheter with a biopsy device and/or a surgical catheter with a side-exiting medical instrument, among others. Additionally, systems, methods and devices are provided for forming a respiratory-gated point cloud of a patient's respiratory system and for placing a localization element in an organ of a patient.
Type: Application
Filed: October 6, 2016
Publication date: August 17, 2017
Inventors: Troy L. HOLSING, Mark HUNTER
-
Publication number: 20170236273
Abstract: A remote image transmission system, a display apparatus, and a guide displaying method of a display apparatus are provided. The remote image transmission system includes: an image capturing apparatus including a plurality of image pickup devices that are spaced from each other, the image capturing apparatus being configured to transmit, to a display apparatus, a plurality of images which are captured by the plurality of image pickup devices; and a display apparatus configured to generate a guide object indicating physical information of a captured object by using the plurality of received images, and to display the generated guide object and at least one of the plurality of images.
Type: Application
Filed: January 11, 2017
Publication date: August 17, 2017
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Min-soeng KIM, Joo-yoo KIM, Yong KIM, Soo-wan KIM, Eung-sun KIM, Jae-geol CHO, Tae-hwa HONG
-
Publication number: 20170236274
Abstract: According to the embodiments, a medical image diagnosis apparatus which is accessible to an external medical scanner includes processing circuitry. The processing circuitry acquires, from the medical scanner, medical image data obtained by imaging a subject based on a protocol, which is a predetermined imaging procedure. The processing circuitry acquires, from a sensor, biometric information of the subject at least while the subject is being imaged. The processing circuitry extracts characteristic information of the subject by processing the biometric information acquired up until a previous study. The processing circuitry assists a design of a protocol for a current study based on the characteristic information of the subject.
Type: Application
Filed: February 7, 2017
Publication date: August 17, 2017
Applicant: Toshiba Medical Systems Corporation
Inventors: Hideaki ISHII, Manabu Hiraoka
-
Publication number: 20170236275
Abstract: Disclosed are an image processing apparatus, an image processing method and a recording medium thereof, the image processing apparatus including: a storage configured to store standard information about at least one anatomical entity; and at least one processor configured to detect regions corresponding to a plurality of anatomical entities based on a medical image obtained by scanning an object including the plurality of anatomical entities, to estimate a volume of a first anatomical entity at a predetermined point in time based on object information measured from the detected regions of the anatomical entity and the standard information stored in the storage, and to provide information about condition of the first anatomical entity based on the estimated volume. Thus, it is possible to make a diagnosis more simply and accurately by determining condition information of an anatomical entity at a point in time for the diagnosis based on a randomly taken medical image.
Type: Application
Filed: February 13, 2017
Publication date: August 17, 2017
Inventors: Yun-sub JUNG, Gye-hyun KIM, Jae-sung LEE, Yong-sup PARK, Ji-hun OH
-
Publication number: 20170236276
Abstract: First and second image obtaining units respectively obtain a plurality of first projection images and a plurality of second projection images by tomosynthesis imaging operations according to first and second imaging conditions. A reconstructing unit reconstructs the plurality of first and second projection images, employing the processes of a reconstruction process that includes a filtering process, other than the filtering process itself, to generate a plurality of first tomographic images and a plurality of second tomographic images for each of a plurality of cross sectional planes within a subject. A subtraction processing unit generates tomographic subtraction images from the first and second tomographic images. A filtering processing unit administers filtering processes on the tomographic subtraction images, to generate processed tomographic subtraction images.
Type: Application
Filed: February 13, 2017
Publication date: August 17, 2017
Applicant: FUJIFILM Corporation
Inventors: Wataru FUKUDA, Junya MORITA
-
Publication number: 20170236277
Abstract: According to some aspects, an information processing apparatus is provided. The information processing apparatus includes at least one processor configured to receive an image of a plurality of images. The information processing apparatus further includes at least one storage medium configured to store processor-executable instructions that, when executed by the at least one processor, perform a method. The method includes setting at least one axial direction in the image, wherein the image includes an analysis target. The method further includes determining motion information for the analysis target by analyzing motion of the analysis target to identify, with respect to the at least one axial direction, at least one of a motion amount of the analysis target and a motion direction of the analysis target.
Type: Application
Filed: August 4, 2015
Publication date: August 17, 2017
Applicant: Sony Corporation
Inventors: Shiori OSHIMA, Kazuhiro NAKAGAWA, Eriko MATSUI
-
Publication number: 20170236278
Abstract: A system and method for applying an ensemble of segmentations to a tissue sample at a blob level and at an image level to determine if the tissue sample is representative of cancerous tissue. The ensemble of segmentations at the image level is used to accept or reject images based upon the segmentation quality of the images, and both the blob level segmentation and the image level segmentation are used to calculate a mean nuclear volume to discriminate between cancer and normal classes of tissue samples.
Type: Application
Filed: August 24, 2015
Publication date: August 17, 2017
Inventors: Peter Randolph Mouton, Dmitry Goldgof, Lawrence O. Hall, Baishali Chaudhury
-
Publication number: 20170236279
Abstract: According to an embodiment, an image analyzing device includes a first acquirer, a constructor, a first calculator, a second calculator, and a third calculator. The first acquirer is configured to acquire image information on a joint of a subject and bones connected to the joint. The constructor is configured to construct a three-dimensional shape of the bones and the joint, and relation characteristics between a load and deformation in the bones and the joint from the image information. The first calculator is configured to calculate a positional relation between the bones connected to the joint. The second calculator is configured to calculate acting force of a muscle acting on the bones connected to the joint based on the positional relation. The third calculator is configured to calculate first stress acting on the joint based on the three-dimensional shape, the relation characteristics, and the acting force.
Type: Application
Filed: May 2, 2017
Publication date: August 17, 2017
Applicant: Kabushiki Kaisha Toshiba
Inventors: Junichiro OOGA, Kenji HIROHATA, Shinya HIGASHI
-
Publication number: 20170236280
Abstract: An endoscope apparatus includes: a detector configured to detect a movement of an organ that moves a subject and output a detection signal; an image pickup portion configured to pick up an image of the subject and output a plurality of picked-up images; and a freeze processing portion configured to calculate a movement evaluation value evaluated according to the detection signal of the detector for at least some of the plurality of picked-up images, and determine a still image from the plurality of picked-up images based on the evaluated movement evaluation value.
Type: Application
Filed: May 4, 2017
Publication date: August 17, 2017
Applicant: OLYMPUS CORPORATION
Inventors: Tomoki IWASAKI, Ryuichi YAMAZAKI
-
Publication number: 20170236281
Abstract: Systems and methods for determining bacterial load in targets and tracking changes in bacterial load of targets over time are disclosed. An autofluorescence detection and collection device includes a light source configured to directly illuminate at least a portion of a target and an area around the target with excitation light causing at least one biomarker in the illuminated target to fluoresce. Bacterial autofluorescence data regarding the illuminated portion of the target and the area around the target is collected and analyzed to determine bacterial load of the illuminated portion of the target and area around the target. The autofluorescence data may be analyzed using pixel intensity. Changes in bacterial load of the target over time may be tracked. The target may be a wound in tissue.
Type: Application
Filed: July 24, 2015
Publication date: August 17, 2017
Applicant: UNIVERSITY HEALTH NETWORK
Inventor: Ralph DACOSTA
-
Publication number: 20170236282
Abstract: Systems and methods for assessing glaucoma loss using optical coherence tomography. One method according to an aspect comprises receiving optical coherence image data and assessing functional glaucoma damage from retinal optical coherence image data. In an aspect, the systems and methods can map regions and layers of the eye to determine structural characteristics to compare to functional characteristics.
Type: Application
Filed: December 7, 2016
Publication date: August 17, 2017
Applicant: UNIVERSITY OF IOWA RESEARCH FOUNDATION
Inventors: Michael Abramoff, Milan Sonka
-
Publication number: 20170236283
Abstract: The present invention relates to an image analysis method for providing information for supporting illness development prediction regarding a neoplasm in a human or animal body. The method includes receiving, for the neoplasm, first and second image data at a first and a second moment in time, and deriving, for a plurality of image features, a first and a second image feature parameter value from the first and second image data, each feature parameter value being a quantitative representation of a respective image feature. The method further includes calculating an image feature difference value by calculating a difference between the first and second image feature parameter values and, based on a prediction model, deriving a predictive value associated with the neoplasm for supporting treatment thereof. The prediction model includes a plurality of multiplier values associated with the image features.
Type: Application
Filed: October 17, 2014
Publication date: August 17, 2017
Inventors: Philippe Lambin, Sara Joao Botelho de Carvalho, Ralph Theodoor Hubertina Leijenaar
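A minimal sketch of the prediction step: image-feature values from two time points are differenced and combined with per-feature multipliers (the prediction model) into a single predictive value. The feature names, weights, and logistic mapping below are placeholders, not values from the publication.

```python
import math

def predict(features_t1, features_t2, multipliers, intercept=0.0):
    score = intercept
    for name, weight in multipliers.items():
        delta = features_t2[name] - features_t1[name]   # image feature difference value
        score += weight * delta                          # multiplier value per feature
    return 1.0 / (1.0 + math.exp(-score))                # map to a probability-like value

# Placeholder model and feature values for two imaging time points.
model = {"volume": 0.8, "entropy": 1.3, "compactness": -0.5}
print(predict({"volume": 10.2, "entropy": 4.1, "compactness": 0.6},
              {"volume": 12.9, "entropy": 4.6, "compactness": 0.5}, model))
```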
-
Publication number: 20170236284
Abstract: Methods for aligning images captured by aerial imaging platforms with a road network described by geo-referenced data, including the steps of: (a) identifying locations of moving vehicles in at least one image; (b) estimating a coordinate transformation that aligns the identified locations with the road network described by the geo-referenced data; and (c) outputting the estimated coordinate transformation or applying the estimated coordinate transformation to at least one image to align the image(s) with the road network described by the geo-referenced data. The methods may classify post-transformation detection as on-road detections or non-on-road detections to improve accuracy and synergistically use transformations and proximity to the road network to improve vehicle detection. The methods may identify vehicle trajectories to further improve accuracy and synergistically use transformations and proximity to the road network to improve estimates of vehicle trajectories.
Type: Application
Filed: March 21, 2016
Publication date: August 17, 2017
Applicant: University of Rochester
Inventors: Ahmed S. Elliethy, Gaurav Sharma
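A minimal sketch of the transform-estimation step only: given image locations of detected moving vehicles and corresponding geo-referenced road points (the correspondence step is assumed already done here), a 2-D affine transform is estimated by least squares and applied to align the image with the road network.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    # src_pts: (N, 2) vehicle detections in image coordinates.
    # dst_pts: (N, 2) matched road-network points in geo-referenced coordinates.
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])     # [x, y, 1] per detection
    # Solve A @ M ~= dst for the 3x2 affine matrix M in the least-squares sense.
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine(M, pts):
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M
```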
-
Publication number: 20170236285
Abstract: A method for determining the position of a mobile device having an imaging device includes obtaining an image of a mark on a known-position object from the imaging device, the mark having an encoded position, decoding the mark to derive data about the position of the mark using a database of marks and their positions, and analyzing appearance of the mark in the image in combination with the derived data about the position of the mark to derive the position of the mobile device. Mark appearance analysis may involve analyzing an angle between an imaging direction of the imaging device and a surface of the mark.
Type: Application
Filed: February 12, 2016
Publication date: August 17, 2017
Inventor: CYRIL HOURI
-
Publication number: 20170236286
Abstract: Techniques for determining depth for a visual content item using machine-learning classifiers include obtaining a visual content item of a reference light pattern projected onto an object, and determining shifts in locations of pixels relative to other pixels representing the reference light pattern. Disparity, and thus depth, for pixels may be determined by executing one or more classifiers trained to identify disparity for pixels based on the shifts in locations of the pixels relative to other pixels of a visual content item depicting the reference light pattern. Disparity for pixels may be determined using a visual content item of a reference light pattern projected onto an object without having to match pixels between two visual content items, such as a reference light pattern and a captured visual content item.
Type: Application
Filed: March 15, 2016
Publication date: August 17, 2017
Inventors: Sean Ryan Francesco Fanello, Christoph Rhemann, Adarsh Prakash Murthy Kowdle, Vladimir Tankovich, David KIM, Shahram Izadi
-
Publication number: 20170236287
Abstract: A digital medium environment includes an image processing application that performs object segmentation on an input image. An improved object segmentation method implemented by the image processing application comprises receiving an input image that includes an object region to be segmented by a segmentation process, processing the input image to provide a first segmentation that defines the object region, and processing the first segmentation to provide a second segmentation that provides pixel-wise label assignments for the object region. In some implementations, the image processing application performs improved sky segmentation on an input image containing a depiction of a sky.
Type: Application
Filed: February 11, 2016
Publication date: August 17, 2017
Inventors: Xiaohui Shen, Zhe Lin, Yi-Hsuan Tsai, Kalyan K. Sunkavalli
-
Publication number: 20170236288
Abstract: A method for determining a region of an image is described. The method includes presenting an image of a scene including one or more objects. The method also includes receiving an input selecting a single point on the image corresponding to a target object. The method further includes obtaining a motion mask based on the image. The motion mask indicates a local motion section and a global motion section of the image. The method further includes determining a region in the image based on the selected point and the motion mask.
Type: Application
Filed: February 12, 2016
Publication date: August 17, 2017
Inventors: Sairam Sundaresan, Dashan Gao, Xin Zhong, Lei Zhang, Yingyong Qi
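A minimal sketch of the last step: given a binary motion mask (True = local motion) and the user's single selected point, the region is taken as the connected component of the local-motion section containing that point; if the point falls in the global-motion section, a fixed window around it is used instead. The fallback behavior is an assumption, not part of the abstract.

```python
import numpy as np
from scipy.ndimage import label

def region_from_point(motion_mask, point_xy, fallback=25):
    x, y = point_xy
    if motion_mask[y, x]:
        labels, _ = label(motion_mask)               # connected components of local motion
        return labels == labels[y, x]                # component containing the selected point
    region = np.zeros_like(motion_mask, dtype=bool)  # fallback: fixed window at the point
    region[max(0, y - fallback):y + fallback, max(0, x - fallback):x + fallback] = True
    return region
```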
-
Publication number: 20170236289
Abstract: The present invention concerns a method of segmenting a media item comprising media elements.
Type: Application
Filed: February 16, 2016
Publication date: August 17, 2017
Applicant: ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE (EPFL)
Inventor: Radhakrishna Shri Venkata Achanta
-
Publication number: 20170236290
Abstract: Techniques and systems are described for performing video segmentation using fully connected object proposals. For example, a number of object proposals for a video sequence are generated. A pruning step can be performed to retain high quality proposals that have sufficient discriminative power. A classifier can be used to provide a rough classification and subsampling of the data to reduce the size of the proposal space, while preserving a large pool of candidate proposals. A final labeling of the candidate proposals can then be determined, such as a foreground or background designation for each object proposal, by solving for a posteriori probability of a fully connected conditional random field, over which an energy function can be defined and minimized.
Type: Application
Filed: February 16, 2016
Publication date: August 17, 2017
Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Alexander Sorkine Hornung, Federico Perazzi, Oliver Wang
-
Publication number: 20170236291
Abstract: The drone comprises a camera (14), an inertial unit (46) measuring the drone angles, and an extractor module (52) delivering image data of a mobile capture area of reduced size dynamically displaced in a direction opposite to that of the angle variations measured by the inertial unit. Compensator means (52) receive as an input the current drone attitude data and act dynamically on the current value (54) of an imaging parameter such as auto-exposure, white balance or autofocus, calculated as a function of the image data contained in the capture area.
Type: Application
Filed: September 2, 2016
Publication date: August 17, 2017
Inventors: Axel Balley, Benoit Pochon
-
Publication number: 20170236292
Abstract: A method for segmenting an image is provided. The method includes performing image classification on the image according to a position of a subject in the image; selecting, from a plurality of subject position templates, a subject position template for the image according to a result of the image classification, wherein each of the plurality of subject position templates is associated with a pre-defined position parameter, and each of the plurality of subject position templates is configured with a weight distribution field according to the pre-defined position parameter, the weight distribution field representing a probability that each pixel in the image belongs to a foreground or a background; and performing image segmentation according to the weight distribution field in the selected subject position template to segment the subject from the image.
Type: Application
Filed: April 28, 2017
Publication date: August 17, 2017
Inventor: Hailue LIN
-
Publication number: 20170236293
Abstract: Enhanced contrast between an object of interest and background surfaces visible in an image is provided using controlled lighting directed at the object. Exploiting the falloff of light intensity with distance, a light source (or multiple light sources), such as an infrared light source, can be positioned near one or more cameras to shine light onto the object while the camera(s) capture images. The captured images can be analyzed to distinguish object pixels from background pixels.
Type: Application
Filed: May 3, 2017
Publication date: August 17, 2017
Applicant: Leap Motion, Inc.
Inventors: David S. Holz, Hua Yang
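A minimal sketch exploiting the intensity falloff: with a light source next to the camera, a nearby object returns much more light than the distant background, so a brightness threshold separates object pixels from background pixels. The percentile threshold is an illustrative assumption, not the publication's analysis.

```python
import numpy as np

def object_mask(ir_frame, percentile=90):
    # Pixels brighter than the chosen percentile are treated as the nearby object,
    # since reflected intensity falls off steeply with distance from the source.
    ir = ir_frame.astype(np.float64)
    return ir > np.percentile(ir, percentile)
```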
-
Publication number: 20170236294
Abstract: Z-maps combined with a standardized stimulus in the form of targeted arterial partial pressures of carbon dioxide provide surprisingly enhanced images for the assessment of pathological CVR. For example, the z-map assessment of patients with known steno-occlusive diseases of the cervico-cerebral vasculature showed an enhanced resolution of the presence, localization, and severity of the pathological CVR. Z-maps have been found to be useful to reduce the confounding effects of test-to-test, subject-to-subject, and platform-to-platform variability for comparison of CVR images, showing the importance of combining this analysis with the standardized stimulus.
Type: Application
Filed: October 24, 2016
Publication date: August 17, 2017
Inventors: Joseph Fisher, Olivia Sobczyk, Adrian P. Crawley, Julian Poublanc, Kevin Sam, Daniel M. Mandell, David Mikulis, James Duffin
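A minimal sketch of building a z-map: a subject's voxel-wise CVR values (e.g., BOLD signal change per mmHg of the standardized CO2 stimulus) are converted to z-scores against the voxel-wise mean and standard deviation of a healthy-control reference atlas. The atlas arrays are assumed inputs; registration and preprocessing are omitted.

```python
import numpy as np

def cvr_zmap(subject_cvr, atlas_mean, atlas_std, eps=1e-6):
    # Voxel-wise z-score of the subject's CVR relative to the reference population.
    return (subject_cvr - atlas_mean) / (atlas_std + eps)

# Voxels with strongly negative z-scores flag regions whose CVR falls well below the
# healthy-control distribution for the same standardized stimulus.
```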