Patent Applications Published on March 7, 2019
-
Publication number: 20190073767
Abstract: A facial skin mask may be generated based on isolating a head part in a captured image, removing a first pixel that is indicative of non-skin from the head part in the captured image, and removing a second pixel that is indicative of having a high velocity from the head part in the captured image. Heart rate may be detected based on the change of color of the pixels of the generated facial skin mask.
Type: Application
Filed: July 16, 2018
Publication date: March 7, 2019
Inventors: Beibei Cheng, Benjamin William Walker, Jonathan Ross Hoof, Daniel Kennett, Anis Ahmad
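The abstract above only states that heart rate is detected from color change over the skin mask; one common way such remote photoplethysmography is done (an assumption, not the patented method) is to average the green channel over the mask per frame and take the dominant frequency in the heart-rate band:

```python
import numpy as np

def estimate_heart_rate(green_means, fps, lo=0.7, hi=3.0):
    """Estimate heart rate (bpm) from per-frame mean green values of a skin mask.

    Illustrative sketch: FFT-peak picking in the 0.7-3.0 Hz band (42-180 bpm)
    is an assumed approach, not taken from the publication.
    """
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)  # restrict to plausible heart rates
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                    # Hz -> beats per minute
```

For example, 10 seconds of a 1.2 Hz pulsation sampled at 30 fps yields roughly 72 bpm.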
-
Publication number: 20190073768
Abstract: A medical image processing apparatus includes: a medical image acquisition unit that acquires a medical image; a medical image analysis result acquisition unit that acquires an analysis result of the medical image; a display unit that displays the medical image and the analysis result; a correspondence relationship setting unit that sets a correspondence relationship between a first analysis result of a first medical image and a second analysis result of a second medical image having different imaging conditions from the first medical image; and a display control unit that sets a display form in case of displaying the second analysis result on the first medical image using the set correspondence relationship or sets a display form in case of displaying the first analysis result on the second medical image using the set correspondence relationship.
Type: Application
Filed: August 10, 2018
Publication date: March 7, 2019
Applicant: FUJIFILM Corporation
Inventor: Norimasa SHIGETA
-
Publication number: 20190073769
Abstract: There are provided a medical image processing apparatus, an endoscope apparatus, a diagnostic support apparatus, and a medical service support apparatus capable of detecting red blood cells using an endoscope image. A medical image processing apparatus includes: a medical image acquisition unit that acquires short wavelength medical images, which are medical images including a subject image and which are obtained by imaging a subject with light in a shorter wavelength band than a green wavelength band; and a red blood cell detection unit that detects red blood cells using the short wavelength medical images. The light in the short wavelength band is, for example, light in a blue band or a violet band of a visible range. The red blood cell detection unit detects, for example, a high-frequency, granular, and high-density region as red blood cells.
Type: Application
Filed: August 22, 2018
Publication date: March 7, 2019
Applicant: FUJIFILM Corporation
Inventor: Hiroki WATANABE
-
Publication number: 20190073770
Abstract: Disease detection from medical images is provided. In various embodiments, a medical image of a patient is read. The medical image is provided to a trained anatomy segmentation network. A feature map is received from the trained anatomy segmentation network. The feature map indicates the location of at least one feature within the medical image. The feature map is provided to a trained classification network. The trained classification network was pre-trained on a plurality of feature map outputs of the segmentation network. A disease detection is received from the trained classification network. The disease detection indicates the presence or absence of a predetermined disease.
Type: Application
Filed: September 6, 2017
Publication date: March 7, 2019
Inventors: Mehdi Moradi, Chun Lok Wong
-
Publication number: 20190073771
Abstract: A method for visible cephalometric measurement is provided. The method comprises: acquiring a data item to be measured according to a preset analysis method, and acquiring preset reference information; determining a measurement reference point according to the acquired preset reference information; and generating a measurement result based on the measurement reference point and the data item to be measured, and displaying it. A computer processing device and a visible cephalometric system are also provided. The computer may automatically generate measurement results according to the user's selection and perform cephalometric measurement more easily, accurately, and efficiently.
Type: Application
Filed: August 21, 2018
Publication date: March 7, 2019
Inventors: Minfeng Chen, Jing Lei
-
Publication number: 20190073772
Abstract: A diagnosis method performed by a computer includes: executing a process that includes specifying a first case image group which includes one or more case images which have a same abnormality as a first abnormality detected from an image of a subject among plural case images about each of plural patients, each of the plural case images indicating an image in which a progression stage of a disease is different; executing a first selection process that includes calculating a first similarity about a site where the first abnormality appears between each of the one or more case images included in the first case image group and the image of the subject, and selecting a second case image group from the first case image group in accordance with the first similarity with respect to each of the one or more case images included in the first case image group.
Type: Application
Filed: August 29, 2018
Publication date: March 7, 2019
Applicant: FUJITSU LIMITED
Inventors: Masaki ISHIHARA, Ryoichi Funabashi, Motoo MASUI, Atsuko Tada, RYUTA TANAKA
-
Publication number: 20190073773
Abstract: An improved method for examining an article by using a vision system is presented. Also presented is a vision system for use within such a method.
Type: Application
Filed: October 31, 2018
Publication date: March 7, 2019
Inventors: Andrew Meyer, Nick Tebeau, James Reed, Andy Reed, Ryan Fitz-Gerald
-
Publication number: 20190073774
Abstract: An approach is provided for constructing polygons for object detection. The approach involves processing, by a computer vision system, an image to generate a cell-based parametric representation of object edges. The representation, for instance, segments the image into cells with each cell including a predicted line segment representing a portion of the object edges, and a predicted centroid of the object. The approach also involves grouping the cells into cell groups based on the predicted line segment for each cell. The approach further involves generating a line to represent each cell group based on the predicted line segment for each cell of each cell group. The approach further involves constructing the polygon to represent the corresponding object based on half planes coincident with the predicted centroid for at least one cell. Each half plane is created by bisecting a plane with the line generated for each cell group.
Type: Application
Filed: September 7, 2017
Publication date: March 7, 2019
Inventors: Richard KWANT, Anish MITTAL, David LAWLOR
-
Publication number: 20190073775
Abstract: A method of object detection includes obtaining a set of images depicting overlapping regions of an area containing a plurality of objects. Each image includes input object indicators defined by input bounding boxes, input confidence level values, and object identifiers. The method includes identifying candidate subsets of input object indicators in adjacent images. Each candidate subset has input overlapping bounding boxes in a common frame of reference, and a common object identifier. The method includes adjusting the input confidence levels for each input object indicator in the candidate subsets; selecting clusters of the input object indicators satisfying a minimum input confidence threshold, having a common object identifier, and having a degree of overlap satisfying a predefined threshold; and detecting an object by generating a single output object indicator for each cluster, the output object indicator having an output bounding box, an output confidence level value, and the common object identifier.
Type: Application
Filed: September 7, 2017
Publication date: March 7, 2019
Inventors: Joseph Lam, Xinyi Gong
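The cluster-and-fuse step described in this abstract resembles greedy non-maximum-suppression-style fusion. The sketch below is a minimal illustration under assumed inputs (a list of dicts with `box`, `conf`, and `id` keys), not the publication's actual procedure:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fuse_detections(dets, min_conf=0.5, iou_thresh=0.3):
    """Greedily cluster same-identifier boxes whose overlap passes iou_thresh
    and emit one output indicator per cluster (box and confidence taken from
    the most confident member). Hypothetical parameter values."""
    dets = sorted((d for d in dets if d["conf"] >= min_conf),
                  key=lambda d: -d["conf"])
    out, used = [], [False] * len(dets)
    for i, d in enumerate(dets):
        if used[i]:
            continue
        used[i] = True
        for j in range(i + 1, len(dets)):
            if (not used[j] and dets[j]["id"] == d["id"]
                    and iou(d["box"], dets[j]["box"]) >= iou_thresh):
                used[j] = True
        out.append({"id": d["id"], "box": d["box"], "conf": d["conf"]})
    return out
```

Two heavily overlapping boxes with the same identifier collapse to one output indicator, while a box below the confidence threshold is dropped entirely.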
-
Publication number: 20190073776
Abstract: An image processing apparatus includes an acquisition unit that acquires a plurality of pieces of tomographic data indicating tomographic information on substantially the same part of a subject to be inspected, a threshold calculation unit that calculates a threshold from tomographic data associated with a target pixel for which motion contrast data is to be calculated of the plurality of pieces of tomographic data, and a pixel value calculation unit that calculates the pixel value of the target pixel of a motion contrast image based on the threshold and the motion contrast data calculated from the tomographic data associated with the target pixel.
Type: Application
Filed: August 31, 2018
Publication date: March 7, 2019
Inventors: Tomasz Dziubak, Yasuhisa Inao, Marek Rozanski, Tomasz Bajraszewski
-
Publication number: 20190073777
Abstract: An optical tracking system comprises a marker part, an image forming part, and a processing part. The marker part includes a pattern having particular information and a first lens which is spaced apart from the pattern and has a first focal length. The image forming part includes a second lens having a second focal length and an image forming unit which is spaced apart from the second lens and forms an image of the pattern by the first lens and the second lens. The processing part determines the posture of the marker part from a coordinate conversion formula between a coordinate on the pattern surface of the pattern and a pixel coordinate on the image of the pattern, and tracks the marker part by using the determined posture of the marker part. Therefore, the present invention can accurately track a marker part by a simpler and easier method.
Type: Application
Filed: November 2, 2018
Publication date: March 7, 2019
Applicants: KOH YOUNG TECHNOLOGY INC., KYUNGPOOK NATIONAL UNIVERSITY INDUSTRY-ACADEMIC COOPERATION FOUNDATION
Inventors: Hyun Ki LEE, You Seong CHAE, Min Young KIM
-
Publication number: 20190073778
Abstract: A method for providing a position-corrected image to an HMD and a method for displaying the position-corrected image on the HMD, and an HMD that displays a position-corrected image using the same are provided. A method for providing a position-corrected image to a head-mounted display (HMD) according to an exemplary embodiment of the present invention includes: extracting an object distance to a target object from image information; acquiring rotation information according to head motion of a user of the HMD; calculating a position correction value of an image by using the object distance and the rotation information; and converting the image according to the position correction value and providing the position-corrected image to the HMD.
Type: Application
Filed: November 2, 2017
Publication date: March 7, 2019
Inventor: Sang Ho LEE
-
Publication number: 20190073779
Abstract: An image processing apparatus includes first alignment means configured to perform an alignment in a horizontal direction on a plurality of two-dimensional tomographic images based on measurement light controlled to scan an identical position of an eye according to a first method, and second alignment means configured to perform an alignment in a depth direction on the plurality of two-dimensional tomographic images according to a second method that is different from the first method.
Type: Application
Filed: September 5, 2018
Publication date: March 7, 2019
Inventors: Yoshihiko Iwase, Osamu Sagano, Makoto Sato, Hiroki Uchida
-
Publication number: 20190073780
Abstract: An image processing apparatus includes an obtaining unit configured to obtain a first two-dimensional tomographic image and a second two-dimensional tomographic image, the first two-dimensional tomographic image and the second two-dimensional tomographic image being obtained based on measurement light controlled to scan an identical position of an eye, a selection unit configured to select a positional deviation amount between a layer boundary of the first two-dimensional tomographic image and a layer boundary of the second two-dimensional tomographic image in partial regions of a plurality of regions dividing the first two-dimensional tomographic image in a horizontal direction, and an alignment means configured to perform an alignment on the first two-dimensional tomographic image and the second two-dimensional tomographic image based on a positional deviation amount selected by the selection unit.
Type: Application
Filed: September 5, 2018
Publication date: March 7, 2019
Inventors: Yoshihiko Iwase, Osamu Sagano, Makoto Sato, Hiroki Uchida
-
Publication number: 20190073781
Abstract: A three-dimensional distance measurement apparatus includes: a plurality of light sources 11 that irradiate light onto the subject; a light emission control unit 12 that controls light emission from the plurality of light sources; a light-receiving unit 13 that detects reflection light from the subject; a distance-calculating unit 14 that calculates a distance to the subject on the basis of a transmission time of reflection light; and an image processing unit 15 that creates a distance image of the subject on the basis of calculated distance data. The plurality of irradiation areas 3 onto which light from the light sources is irradiated are arranged to partially overlap only with the neighboring ones. The light emission control unit 12 individually turns the light sources 11 on or off or individually adjusts the emitted light amounts.
Type: Application
Filed: May 21, 2018
Publication date: March 7, 2019
Inventors: Katsuhiko IZUMI, Naoya MATSUURA, Toshimasa KAMISADA
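The distance calculation from the transmission time of the reflected light is the standard time-of-flight relation: light travels to the subject and back, so the one-way distance is half the round-trip time multiplied by the speed of light. A minimal sketch of that relation (not the apparatus's actual implementation):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_seconds):
    """Distance to the subject from the round-trip time of the light pulse.

    The pulse covers the distance twice (out and back), hence the factor 1/2.
    """
    return C * round_trip_seconds / 2.0
```

A round-trip time of about 66.7 nanoseconds corresponds to a subject roughly 10 meters away.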
-
Publication number: 20190073782
Abstract: An image processing apparatus includes an image acquisition part that acquires a plurality of different measured images, a modeling part that identifies, for each pixel, a modeled parameter approximating an approximation function of a data sequence where pixel values of pixels corresponding to the respective measured images are placed in an order of capturing, a reconstructed image generation part that generates reconstructed images which are images corresponding to the respective measured images and reconstructed with an approximation value of each pixel identified based on the modeled parameter of each pixel, and an image changing part that changes the pixel values of the measured images based on statistics of the pixel values of the measured images and those of the corresponding reconstructed images.
Type: Application
Filed: August 29, 2018
Publication date: March 7, 2019
Applicant: MITUTOYO CORPORATION
Inventor: Shinpei MATSUURA
-
Publication number: 20190073783
Abstract: A method for monitoring headway to an object performable in a computerized system including a camera mounted in a moving vehicle. The camera acquires in real time multiple image frames including respectively multiple images of the object within a field of view of the camera. An edge is detected in the images of the object. A smoothed measurement is performed of a dimension of the edge. Range to the object is calculated in real time, based on the smoothed measurement.
Type: Application
Filed: November 7, 2018
Publication date: March 7, 2019
Inventors: Gideon P. Stein, Andras D. Ferencz, Ofer Avni
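Range from a smoothed image-plane dimension typically relies on the pinhole projection model: an object of real width W at distance Z appears with width w = f·W/Z pixels, so Z = f·W/w. The sketch below pairs that relation with an exponential moving average as one plausible smoothing; both the smoothing choice and the parameter values are assumptions, not taken from the publication:

```python
def smooth_widths(widths, alpha=0.3):
    """Exponential moving average of per-frame edge widths (pixels).

    alpha is a hypothetical smoothing factor; higher values track faster.
    """
    s = widths[0]
    out = []
    for w in widths:
        s = alpha * w + (1 - alpha) * s
        out.append(s)
    return out

def range_from_width(width_px, focal_px, real_width_m):
    """Pinhole model: apparent width w = f * W / Z, so Z = f * W / w."""
    return focal_px * real_width_m / width_px
```

For instance, a 1.8 m wide vehicle imaged 100 pixels wide by a camera with a 1000-pixel focal length would be about 18 m ahead.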
-
Publication number: 20190073784
Abstract: An image processing apparatus includes an image acquisition part that acquires a plurality of measured images by capturing an object to be measured; a modeling part that identifies a modeled parameter based on the measured images; an intermediate image generation part that generates an intermediate image for generating a geometry image indicating a geometry of the object to be measured based on the modeled parameter; a noise threshold image generation part that generates a noise threshold image by identifying a noise threshold value of each pixel in the intermediate image using statistics indicating an error between the pixel values of pixels included in the data sequence and approximation values of pixels identified based on the modeled parameter for each pixel; and a noise removing part that performs thresholding on the intermediate image using the noise threshold image.
Type: Application
Filed: August 29, 2018
Publication date: March 7, 2019
Applicant: MITUTOYO CORPORATION
Inventor: Shinpei MATSUURA
-
Publication number: 20190073785
Abstract: A camera apparatus is provided for detecting a stream of objects moving relative to the camera apparatus, having a plurality of individual cameras that each have an image sensor for recording frames, wherein the frames overlap one another in part, having an evaluation unit for compiling frames, and having a geometry detection sensor for detecting geometrical data of the objects. The evaluation unit is here configured to generate an object image assembled from frames of an individual object of the stream of objects, with the selection of the participating frames and/or the assembly taking place on the basis of the geometrical data.
Type: Application
Filed: July 24, 2018
Publication date: March 7, 2019
Inventors: Carl HAFNER, Stephan WALTER
-
Publication number: 20190073786
Abstract: A frame rendering method used in a head-mounted device includes the steps outlined below. Input frames are received. First and second orientation information corresponding to a first and a second input frames are retrieved from a motion sensor. Predicted orientation information of a predicted orientation corresponding to a target time spot is generated according to the first and the second orientation information. Orientation calibration is performed on the first and the second input frames according to the first and the second orientation information to respectively generate a first and a second calibrated frames corresponding to the predicted orientation. One of a plurality of extrapolated frames corresponding to the target time spot is generated according to the first calibrated frame and the second calibrated frame.
Type: Application
Filed: August 30, 2018
Publication date: March 7, 2019
Inventors: Yu-You WEN, Chun-Hao HUANG
-
Publication number: 20190073787
Abstract: Described are methods and systems for combining sparse two-dimensional (2D) and dense three-dimensional (3D) tracking of objects. A 3D sensor coupled to a computing device captures 3D scans of a physical object, including related pose information, and one or more color images corresponding to each 3D scan. For each 3D scan: the computing device establishes initial sparse 2D correspondences between a current loose frame and one or more of: a last tracked loose frame or a current keyframe. The computing device determines an approximate pose based upon the initial sparse 2D correspondences. The computing device establishes initial dense 3D correspondences between the current loose frame and an anchor frame, and combines the initial sparse 2D correspondences and the initial dense 3D correspondences to generate an estimated pose of the object in the scene.
Type: Application
Filed: September 6, 2018
Publication date: March 7, 2019
Inventors: Ken Lee, Huy Bui, Xin Hou, Craig Cambias
-
Publication number: 20190073788
Abstract: A method includes detecting a first object entering a first video frame of a plurality of video frames of a view of a geolocation and determining, from the plurality of video frames, that the first object has stopped in an area of the geolocation for at least a threshold amount of time. The method also includes detecting the first object leaving a second video frame of the plurality of video frames, and identifying, by a computer processing device, the area of the geolocation as a region of interest based on the detecting the first object leaving.
Type: Application
Filed: November 6, 2018
Publication date: March 7, 2019
Inventor: Renat Idrisov
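The enter-stop-leave logic above can be sketched as a small state machine over tracking events. The event schema (tuples of time, object, event kind, and area) is hypothetical, chosen only to illustrate the dwell-time test:

```python
def regions_of_interest(events, min_dwell):
    """Mark an area as a region of interest when an object that stopped there
    later leaves after dwelling at least min_dwell time units.

    events: iterable of (t, obj, kind, area) with kind in {"stop", "leave"};
    this schema is an assumption for illustration, not the patented format.
    """
    stopped = {}   # obj -> (time it stopped, area it stopped in)
    rois = set()
    for t, obj, kind, area in events:
        if kind == "stop":
            stopped[obj] = (t, area)
        elif kind == "leave" and obj in stopped:
            t_stop, a = stopped.pop(obj)
            if t - t_stop >= min_dwell:
                rois.add(a)
    return rois
```

An object that lingers ten time units before leaving flags its area, while one that leaves after two does not.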
-
Publication number: 20190073789
Abstract: Method for collaborative observation between a local targeting device and a distant targeting device located at different geographical positions and able to acquire images, respectively referred to as local images and distant images, the method comprising, when it is implemented by the local targeting device, an execution of a procedure for determining a position of an observed object, referred to as the local targeted object, comprising an application (403) of a method for matching points of interest representing a distant image obtained (400, 401) from the distant targeting device with points of interest (402) determined on a local image by the local targeting device.
Type: Application
Filed: September 27, 2016
Publication date: March 7, 2019
Applicant: SAFRAN ELECTRONICS & DEFENSE
Inventors: Jacques YELLOZ, Maxime THIEBAUT, Guillaume MAGNIEZ, Marc BOUSQUET, Christophe GUETTIER
-
Automated System and Method for Determining Positional Order Through Photometric and Geospatial Data
Publication number: 20190073790
Abstract: A system and method for determining positional order of vehicles across a threshold plane within a dynamic environment is provided. The system can include moving vehicles (e.g., boats) each having a GPS receiver. A reference object (e.g., an anchored boat) can have an image capturing device and a primary GPS receiver, and can be subject to movement induced by the dynamic environment. A fixed object having a known position relative to the reference object (e.g., a government buoy), together with the reference object, defines a threshold plane, which is subject to movement based on movement of the reference object. Photometric data gathered by the image capturing device and geospatial data gathered from the GPS receivers, the primary GPS receiver, and the fixed object is analyzed by a processor to determine a positional order at which each vehicle crossed the movable threshold plane.
Type: Application
Filed: November 1, 2018
Publication date: March 7, 2019
Inventors: Zachary Eric Leuschner, Joshua Nathaniel Edmison, Harrison Brownley, John-Francis Mergen
-
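The positional-order determination in publication 20190073790 reduces, in the simplest 2D case, to detecting when each vehicle's track changes side of the line between the reference object and the fixed object, then sorting by crossing time. The sketch below uses a static line for brevity (the patented plane moves with the reference object) and an assumed track format:

```python
def side(p, a, b):
    """Sign of the 2D cross product: which side of line a->b point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def finishing_order(tracks, line_a, line_b):
    """tracks: {vehicle: [(t, (x, y)), ...]} in time order (assumed format).

    Returns vehicles sorted by the first timestamp at which their position
    changes sign relative to the threshold line from line_a to line_b.
    """
    crossings = {}
    for vehicle, points in tracks.items():
        prev = None
        for t, p in points:
            s = side(p, line_a, line_b)
            if prev is not None and prev * s < 0:   # sign flip = crossing
                crossings[vehicle] = t
                break
            prev = s
    return sorted(crossings, key=crossings.get)
```

Fusing the GPS fixes with photometric evidence from the reference object's camera, as the abstract describes, would refine these crossing times further.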
Publication number: 20190073791
Abstract: The present invention provides an image display system, a terminal, a method, and a program that can quickly and accurately display an image corresponding to a particular place. An image display system according to one example embodiment of the present invention includes: an information acquisition unit that acquires information including a position and an orientation of a mobile terminal; and an image acquisition unit that, based on the position and the orientation of the mobile terminal and a position and an orientation associated with an image stored in a storage device in the past, acquires the image.
Type: Application
Filed: February 23, 2017
Publication date: March 7, 2019
Applicant: NEC CORPORATION
Inventor: Shizuo SAKAMOTO
-
Publication number: 20190073792
Abstract: A system and method for determining a camera pose. The method comprises receiving a first image and a second image, the first and second images being associated with a camera pose and a height map for pixels in each corresponding image, and determining a mapping between the first image and the second image using the corresponding height maps, the camera pose and a mapping of the second image to an orthographic view. The method further comprises determining alignment data between the first image transformed using the determined mapping and the second image and determining a refined camera pose based on the determined alignment data and alignment data associated with at least one other camera pose.
Type: Application
Filed: August 29, 2018
Publication date: March 7, 2019
Inventors: Peter Alleine Fletcher, David Peter Morgan-Mar, Matthew Raphael Arnison, Timothy Stephen Mason
-
Publication number: 20190073793
Abstract: An electronic apparatus is disclosed. The electronic apparatus includes an inputter configured to receive a binocular image which is a captured image of both eyes of a user and a stereo image which is an image of a direction corresponding to a gaze of the user captured at locations spaced apart from each other, and a processor configured to detect a watch point of the user in the stereo image by using the binocular image, obtain a disparity map in the input stereo image, and compensate the detected watch point using the obtained disparity map.
Type: Application
Filed: September 5, 2018
Publication date: March 7, 2019
Inventors: Kang-won JEON, Sung-Jea KO, Tae-young NA, Mun-Cheon KANG, Sung-Ho CHAE
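Disparity maps like the one this abstract relies on connect to scene depth through the standard rectified-stereo relation Z = f·B/d (focal length times baseline over disparity). A one-line sketch of that background relation, which is general stereo geometry rather than the publication's specific compensation step:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified stereo: depth Z = f * B / d.

    focal_px is the focal length in pixels, baseline_m the camera separation
    in meters, disparity_px the pixel shift of a point between the two views.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

With a 1000-pixel focal length and a 10 cm baseline, a 50-pixel disparity places a point 2 m from the cameras.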
-
Publication number: 20190073794
Abstract: A technique is provided to enable reduction in cost relating to installation of orientation targets in aerial photogrammetry. A survey data processing device includes a positioning data receiving unit, a relative orientation unit, an absolute orientation unit, and an adjustment calculation executing unit. The positioning data receiving unit receives positioning data obtained by tracking and positioning a reflective prism of an aerial vehicle by a total station. The aerial vehicle also has a camera. The relative orientation unit calculates relative exterior orientation parameters of the camera by relative orientation using photographed images taken by the camera. The absolute orientation unit provides a true scale to the relative exterior orientation parameters by absolute orientation using the positioning data and the relative exterior orientation parameters.
Type: Application
Filed: August 21, 2018
Publication date: March 7, 2019
Applicant: TOPCON CORPORATION
Inventors: Takeshi SASAKI, Nobuyuki FUKAYA, Nobuyuki NISHITA
-
Publication number: 20190073795
Abstract: Provided is a calibration device for an optical device including a two-dimensional image conversion element having a plurality of pixels and including an optical system that forms an image-forming relationship between the image conversion element and a three-dimensional world coordinate space, the calibration device including: a computer, wherein the computer is configured to: obtain calibration data representing the correspondence between two-dimensional pixel coordinates of the image conversion element and three-dimensional world coordinates of the world coordinate space; and fit a camera model representing the direction of a principal ray in the world coordinate space, corresponding to the pixel coordinates, as a function of the pixel coordinates, to the calibration data obtained, thereby calculating parameters of the camera model.
Type: Application
Filed: November 7, 2018
Publication date: March 7, 2019
Applicant: OLYMPUS CORPORATION
Inventor: Toshiaki MATSUZAWA
-
Publication number: 20190073796
Abstract: In one embodiment, a method includes generating a geometrical arrangement of a surrounding area, the geometrical arrangement describing a location of a first set of visual features in the space of the surrounding area; determining parameters of a camera, the parameters of the camera indicating one or more of an approximate location, orientation, or optical properties of the camera; applying determined parameters of the camera to the geometrical arrangement of the surrounding area to display the first set of visual features on a feature image; superimposing a second set of visual features extracted from an image recorded with the camera on the feature image; determining a measure of concordance between the locations of the first and second sets of visual features in the feature image; and if the measure of concordance has passed a limit value, using determined parameters of the camera as actual parameters of the camera.
Type: Application
Filed: November 26, 2018
Publication date: March 7, 2019
Inventors: Jan Herling, Wolfgang Broll
-
Publication number: 20190073797
Abstract: Method and apparatus for full color data processing for 3D objects are provided. The method includes: performing a layering process on a target object to determine slice-layer data of each layer, wherein the slice-layer data includes layer-color data and layer-structure data, the layer-color data represents color information of the target object, and the layer-structure data represents a printing location of the target object; and analyzing the layer-color data and the layer-structure data when the layer-color data is consistent with background color data of the target object and analyzing the layer-color data when the layer-color data is inconsistent with the background color data of the target object, thereby determining a layer color of the target object and determining printing information of the target object.
Type: Application
Filed: November 1, 2018
Publication date: March 7, 2019
Inventors: Wei CHEN, Xiaokun CHEN, Dongqing XIANG
-
Publication number: 20190073798
Abstract: Disclosed herein are methods and systems for real-time holographic augmented reality image processing. The processing includes the steps of receiving, at a cluster of servers and from an image capturing component, real-time image data; extracting one or more objects or a scene from the real-time image data based on results from real-time adaptive learning and one or more object/scene extraction parameters; extracting one or more human objects from the real-time image data based on results from real-time adaptive human learning and one or more human extraction parameters; receiving augmented reality (AR) input data; and creating holographic AR image data by projecting, for each image, the extracted object or scene, the extracted human object, and the AR input data using a multi-layered mechanism based on projection parameters. The real-time adaptive learning comprises object learning, object recognition, object segmentation, scene learning, scene recognition, scene segmentation, or a combination thereof.
Type: Application
Filed: October 6, 2018
Publication date: March 7, 2019
Inventor: Eliza Yingzi Du
-
Publication number: 20190073799
Abstract: Disclosed herein is a method for lossless compression and regeneration of digital design data in a manner maintaining the native formats outputted by modeling software, used with prime focus on reduction in file size, portability, interchangeability of file storage format and providing database management functions while being implemented as a plug-and-play add-on utility to existing modeling software. Feature-based extraction of design attributes serves as a core of this inventive method and software utility based thereon.
Type: Application
Filed: March 23, 2016
Publication date: March 7, 2019
Inventor: Amar Phatak
-
Publication number: 20190073800
Abstract: A system for simplifying the operation of a household appliance includes a status transmitter for transmitting status data, which describes at least one aspect of a status of the household appliance, and a status receiver for receiving the status data from the status transmitter. A processing unit selects or creates visualizable data on the basis of the received status data, and a display device, which is configured to be placed in front of an eye of a user of the system, then displays the visualizable data. A method for simplifying the operation of a household appliance is also provided.
Type: Application
Filed: October 28, 2016
Publication date: March 7, 2019
Inventor: PETER LOCHNY
-
Publication number: 20190073801
Abstract: One or more embodiments of the disclosure include a customized image character system that generates and provides customized image characters across computing devices. In particular, in one or more embodiments, the customized image character system provides a color modifier control as part of a messaging application for drafting digital messages with standardized image characters. The customized image character system can detect user selection of a standardized image character (e.g., an emoji image) and a new color (via the color modifier control) and dynamically generate a customized image character (e.g., a customized emoji image). The customized image character system can also send a digital message to a second client device such that the second client device displays the digital message with the customized image character.
Type: Application
Filed: September 7, 2017
Publication date: March 7, 2019
Inventor: Dmitri Stukalov
-
Publication number: 20190073802
Abstract: A system and computer-implemented method for improving the quality of images, such as images used in medical diagnosis. In a first technique, sinogram data, intermediate volumes, and transmission CT-based attenuation correction files are created. Scatter correction is performed, and a scatter correction map is created. Misregistration offsets are measured, and a correction registration mask is created using volume reprojection. The two masks are combined and applied to create the enhanced image data. In a second technique, a difference may be determined for each parent and child pixel pair, and an average difference may be determined for all pairs. For each pair which is under the average difference, the child may be eliminated and replaced with a new child. This process may be iteratively repeated, with the image data being incrementally enhanced with each iteration. For both techniques, the enhanced image data may be communicated to an interpretive application for display.
Type: Application
Filed: September 5, 2017
Publication date: March 7, 2019
Applicant: Cardiovascular Imaging Technologies, L.L.C.
Inventors: James Arthur Case, Paul O'Connell Case, Timothy Murray Bateman, Paul Andrew Helmuth
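The second technique in the abstract — compare each parent/child pixel pair against the average pair difference and replace below-average children — can be sketched as a single iteration. The abstract does not say how the "new child" is produced, so the replacement rule here is deliberately left to the caller; the function name and data layout are hypothetical:

```python
def enhance_iteration(pairs, new_child):
    """One iteration of the pair-based enhancement: `pairs` is a list of
    (parent, child) intensity pairs. Children whose parent/child difference
    is below the average difference over all pairs are replaced by
    new_child(parent); the replacement rule is supplied by the caller
    because the abstract leaves it unspecified."""
    diffs = [abs(p - c) for p, c in pairs]
    avg = sum(diffs) / len(diffs)
    return [(p, new_child(p) if abs(p - c) < avg else c) for p, c in pairs]
```

Iterating this function on its own output would mirror the incremental enhancement loop the abstract describes.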
-
Publication number: 20190073803
Abstract: A method and system for processing medical image data are disclosed. In an embodiment, the method includes providing measurement data of a body region of a patient that is to be reproduced; reconstructing, using a reconstruction algorithm, a first image volume representing the body region from the measurement data; identifying a subregion representing a partial volume of the first image volume, confining the subregion within the first image volume, and assigning a specific tissue structure, differing from a remaining volume of the first image volume, to the subregion; determining, based upon the specific tissue structure of the subregion, at least one reconstruction parameter varying in comparison to the first image volume; reconstructing, from the measurement data linked with the subregion and based upon the at least one reconstruction parameter, a second image volume; and contouring, within at least the second image volume, structure boundaries between different anatomical structures of the patient.
Type: Application
Filed: August 30, 2018
Publication date: March 7, 2019
Applicant: Siemens Healthcare GmbH
Inventor: Andre RITTER
-
Publication number: 20190073804
Abstract: A method is provided for recognizing artifacts in computed tomography image data. In an embodiment, the method includes acquisition of projection measurement data from a region under examination of a subject to be examined; reconstruction of image data on the basis of the projection measurement data; checking for the presence of an artifact in the image data using a trained recognition unit; recognition of an artifact type of an artifact that is present using a trained recognition unit; and output of the recognized artifact type.
Type: Application
Filed: August 29, 2018
Publication date: March 7, 2019
Applicant: Siemens Healthcare GmbH
Inventor: Thomas ALLMENDINGER
-
Publication number: 20190073805
Abstract: An apparatus includes: an acquisition unit configured to acquire pieces of three-dimensional data of a subject eye obtained at different times, the three-dimensional data including pieces of two-dimensional data obtained at different positions; a first planar alignment unit configured to perform first planar alignment including alignment between the pieces of three-dimensional data in a plane orthogonal to a depth direction of the subject eye; a first depth alignment unit configured to perform first depth alignment including alignment between pieces of two-dimensional data in at least one piece of three-dimensional data among the pieces of three-dimensional data and further including alignment between the pieces of three-dimensional data in the depth direction; and a generation unit configured to generate interpolation data of at least one piece of three-dimensional data among the pieces of three-dimensional data by using a result of the first planar alignment and a result of the first depth alignment.
Type: Application
Filed: September 4, 2018
Publication date: March 7, 2019
Inventors: Yoshihiko Iwase, Hiroki Uchida
-
Publication number: 20190073806
Abstract: A method and system for image reconstruction are provided. Multiple coil images may be obtained. A first reconstructed image may be generated from the multiple coil images using a first reconstruction algorithm. A second reconstructed image may be generated from the multiple coil images using a second reconstruction algorithm. Correction information about the first reconstructed image may be generated based on the first reconstructed image and the second reconstructed image. A third reconstructed image may be generated based on the first reconstructed image and the correction information about the first reconstructed image.
Type: Application
Filed: October 29, 2018
Publication date: March 7, 2019
Applicant: UIH AMERICA, INC.
Inventors: Renjie HE, Yu DING, Qi LIU
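The two-reconstruction correction flow in this abstract can be sketched at a high level. The abstract does not say how the correction information is computed, so the sketch below assumes a weighted voxel-wise difference between the two reconstructions; the function names, the flat-list image representation, and the weighting are all illustrative choices, not the patented method:

```python
def corrected_reconstruction(coil_images, recon_a, recon_b, weight=0.5):
    """Sketch of the flow: reconstruct twice with different algorithms,
    derive correction info from the pair, and apply it to the first image.
    The correction rule (weighted difference) is an assumed placeholder."""
    first = recon_a(coil_images)                            # first algorithm
    second = recon_b(coil_images)                           # second algorithm
    correction = [weight * (s - f) for f, s in zip(first, second)]
    third = [f + c for f, c in zip(first, correction)]      # corrected image
    return third
```

With toy "algorithms" such as a per-voxel mean and a per-voxel maximum over the coil images, the third image lands between the two reconstructions.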
-
Publication number: 20190073807
Abstract: A computer-implemented method for generating geocoded user information is disclosed. The method comprises searching user data across multiple different data corpuses for entries having location-related information and determining locations for the location-related information. The method further comprises generating a map showing a current location of a mobile device along with representations of the entries having location-related information, at the determined locations, for entries from the multiple different data corpuses.
Type: Application
Filed: August 20, 2018
Publication date: March 7, 2019
Inventors: Adam Bliss, David P. Conway
-
Publication number: 20190073808
Abstract: A terminal apparatus includes a display device and circuitry. The circuitry receives a predetermined instruction. In response to receiving the predetermined instruction, the circuitry changes attribute information of a stroke image associated with the predetermined instruction to specific attribute information, the specific attribute information identifying information in an area defined by the stroke image as information to be extracted. The circuitry controls the display device to display the stroke image associated with the predetermined instruction as a stroke image having the specific attribute information.
Type: Application
Filed: August 17, 2018
Publication date: March 7, 2019
Inventor: Masaaki KAGAWA
-
Publication number: 20190073809
Abstract: A program and computer apparatus to execute a method including: placing a rectangular parallelepiped object having a given attribute and a display mode according to the given attribute in a virtual space; identifying a first display mode of a face of a placed object which is not in contact with a different placed object according to an attribute of the placed object; identifying, with respect to at least one of a plurality of placed objects which are adjacent to each other and have different attributes, a second display mode of a face thereof which is not in contact with the different placed object according to an attribute of the placed object and an attribute of an adjacent placed object; and drawing, for displaying a placed object on a display screen, according to any one of the first display mode identified and the second display mode identified.
Type: Application
Filed: November 6, 2018
Publication date: March 7, 2019
Applicant: SQUARE ENIX CO., LTD.
Inventor: Hideyuki TAKAHASHI
-
Publication number: 20190073810
Abstract: The purpose of the present invention is to collect more accurate marketing information. A flow line display system of the present invention includes an image-capturing unit, an information operation device, and a display unit. The image-capturing unit captures an image. The information operation device detects an object from the image and identifies a flow line of the object, an orientation of the object, and a time related to the orientation. The display unit displays the orientation of the object and the time related to the orientation together with the flow line of the object.
Type: Application
Filed: March 23, 2017
Publication date: March 7, 2019
Applicant: NEC CORPORATION
Inventors: Shigetsu SAITO, Jun KOBAYASHI
-
Publication number: 20190073811
Abstract: Systems and techniques are disclosed for automatically creating a group shot image by intelligently selecting a best frame of a video clip to use as a base frame and then intelligently merging features of other frames into the base frame. In an embodiment, this involves determining emotional alignment scores and eye scores for the individual frames of the video clip. The emotional alignment scores for the frames are determined by assessing the faces in each of the frames with respect to an emotional characteristic (e.g., happy, sad, neutral, etc.). The eye scores for the frames are determined based on assessing the states of the eyes (e.g., fully open, partially open, closed, etc.) of the faces in individual frames. Comprehensive scores for the individual frames are determined based on the emotional alignment scores and the eye scores, and the frame having the best comprehensive score is selected as the base frame.
Type: Application
Filed: September 5, 2017
Publication date: March 7, 2019
Applicant: ADOBE SYSTEMS INCORPORATED
Inventors: Abhishek SHAH, Andaleeb FATIMA
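The base-frame selection step described in this abstract — combine an emotional alignment score and an eye score into a comprehensive score, then pick the best frame — can be sketched as follows. The abstract does not specify how the two scores are combined, so the linear weighting, the function names, and the pluggable scoring callables are hypothetical:

```python
def select_base_frame(frames, emotion_score, eye_score, w_emotion=0.5):
    """Return the frame with the best comprehensive score.
    emotion_score(frame) and eye_score(frame) are caller-supplied scoring
    functions; the comprehensive score is an assumed weighted sum."""
    def comprehensive(frame):
        return w_emotion * emotion_score(frame) + (1 - w_emotion) * eye_score(frame)
    return max(frames, key=comprehensive)
```

A frame whose faces match the target emotion and have fully open eyes would score highest and become the base frame into which features from other frames are merged.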
-
Publication number: 20190073812
Abstract: Systems and methods for low power virtual reality (VR) presence monitoring and notification via a VR headset worn by a user entail a number of aspects. In an embodiment, a person is detected entering a physical location occupied by the user of the VR headset during a VR session. This detection may occur via one or more sensors on the VR headset. In response to detecting that a person has entered the location, a representation of the person is generated and displayed to the user via the VR headset as part of the VR session. In this way, the headset user may be made aware of people in their physical environment without leaving the VR session.
Type: Application
Filed: September 7, 2017
Publication date: March 7, 2019
Applicant: Motorola Mobility LLC
Inventors: Scott DeBates, Douglas Lautner
-
Publication number: 20190073813
Abstract: An image processing apparatus comprises an image obtaining unit that obtains a captured image, an information obtaining unit that obtains analysis data recorded in correspondence with the captured image and including flag information indicating whether an object present in the captured image is a masking target, a detecting unit that detects objects from the captured image, and a mask processing unit that generates an image in which an object, among the objects detected from the captured image, which is indicated as the masking target by the flag information, is masked.
Type: Application
Filed: August 30, 2018
Publication date: March 7, 2019
Inventor: Kan Ito
-
Publication number: 20190073814
Abstract: A method for distributing information includes producing a symbol to be overlaid on at least one primary image presented on a first display screen, the symbol encoding a specified digital value in a set of color elements having different, respective colors. A message is received from a client device containing an indication of the specified digital value decoded by the client device upon capturing and analyzing a secondary image of the first display screen. In response to the message, an item of information relating to the primary image is transmitted to the client device, for presentation on a second display screen associated with the client device.
Type: Application
Filed: October 28, 2018
Publication date: March 7, 2019
Inventors: Alex Alon, Irina Alon, Eran Katz
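The abstract above encodes a digital value in a set of color elements but does not specify the coding scheme. One plausible scheme treats each color element as a digit in a base equal to the palette size; the palette, element count, and function names below are illustrative assumptions, not the patented encoding:

```python
# Hypothetical 4-color palette: each element carries one base-4 digit.
PALETTE = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0)]

def encode_symbol(value, n_elements):
    """Encode `value` as n_elements color elements, most significant first."""
    base = len(PALETTE)
    colors = []
    for _ in range(n_elements):
        colors.append(PALETTE[value % base])
        value //= base
    return colors[::-1]

def decode_symbol(colors):
    """Recover the digital value from a captured sequence of color elements."""
    base = len(PALETTE)
    value = 0
    for c in colors:
        value = value * base + PALETTE.index(c)
    return value
```

Four base-4 elements can carry values 0–255, i.e. one byte per symbol; a client device that captures the overlaid symbol would run the decode step and report the value back to the server.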
-
Publication number: 20190073815
Abstract: The system provides movement guidance to an actor using a motion capture movement reference system. The motion capture movement reference system includes a light strip having an elongated substrate with lights positioned in series along a length of the elongated substrate and a computing device configured to program the lights with an illumination protocol. Operationally, a user inputs into the computing device one or more variables to establish a number of lights to simultaneously activate and/or a rate of activating and deactivating the lights along the length of the elongated substrate. The light strip is programmed based upon the one or more variables. When the lights are activated and deactivated along the length of the elongated substrate, an actor chases the lights.Type: Application
Filed: September 5, 2017
Publication date: March 7, 2019
Inventors: Jason E. Greenberg, Kristina Rae Adelmeyer, Jeff J. Swenty
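The illumination protocol in this abstract is parameterized by a group size (lights lit at once) and an activation rate. A minimal sketch of such a chase pattern — which lights are lit at a given time as a block advances along the strip — might look like this; the wrap-around behavior and function name are assumptions, not taken from the patent:

```python
def lit_indices(t, n_lights, group_size, rate_hz):
    """Indices of strip lights lit at time t seconds: a block of
    `group_size` adjacent lights whose head advances `rate_hz`
    positions per second and wraps at the end of the strip."""
    head = int(t * rate_hz) % n_lights
    return [(head - k) % n_lights for k in range(group_size)]
```

Sampling this function on a timer and driving the corresponding LEDs would produce the moving block of light that the actor chases.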
-
Publication number: 20190073816
Abstract: A method for managing a multi-user animation platform is disclosed. A three-dimensional space within a computer memory is modeled. An avatar of a client is located within the three-dimensional space, the avatar being graphically represented by a three-dimensional figure within the three-dimensional space. The avatar is responsive to client input commands, and the three-dimensional figure includes a graphical representation of client activity. The client input commands are monitored to determine client activity. The graphical representation of client activity is then altered according to an inactivity scheme when client input commands are not detected. Following a predetermined period of client inactivity, the inactivity scheme varies non-repetitively with time.
Type: Application
Filed: October 15, 2018
Publication date: March 7, 2019
Inventor: Brian Mark SHUSTER