Patent Applications Published on May 12, 2016
-
Publication number: 20160133010
Abstract: The invention provides a method of processing an image in a diagnostic apparatus 100 for diagnosing a disease using a captured image of an affected area, comprising: a separating step of separating the captured image into a brightness component and a color information component; and an extracting step of extracting a region to be diagnosed based on the brightness component or the color information component of the captured image to highlight the likeness of the region.
Type: Application
Filed: September 21, 2015
Publication date: May 12, 2016
Applicant: CASIO COMPUTER CO., LTD.
Inventors: Akira HAMADA, Mitsuyasu NAKAJIMA, Masaru TANAKA, Toshitsugu SATO
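As an illustration of the separating and extracting steps (a sketch of the general idea, not the patented method; the color space, threshold, and file names are assumptions):

```python
import cv2
import numpy as np

# Load a dermoscopy-style image (path is a placeholder).
bgr = cv2.imread("affected_area.png")

# Separate the captured image into a brightness component (L)
# and color information components (a, b) using CIELAB.
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
L, a, b = cv2.split(lab)

# Extract a candidate region from the brightness component with
# Otsu thresholding (darker-than-background structures).
_, region = cv2.threshold(L, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Highlight the likeness of the region by overlaying the mask in red.
highlight = bgr.copy()
highlight[region > 0] = (0.5 * highlight[region > 0]
                         + 0.5 * np.array([0, 0, 255])).astype(np.uint8)
cv2.imwrite("highlighted.png", highlight)
```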
-
Publication number: 20160133011
Abstract: The invention provides an image processing method in a diagnostic apparatus for diagnosing a disease using a captured image of an affected area, comprising the steps of: (i) separating the stored captured image into a brightness component and a color information component (Step S131); (ii) separating the brightness component into a base component and a detail component (Step S132); (iii) performing a highlighting process on the base component and/or the detail component (Steps S133-S140); and (iv) restoring a brightness component from a highlighted base component and the detail component, and/or from the base component and a highlighted detail component, and using the restored brightness component and the color information component to generate a highlighted image (Step S141).
Type: Application
Filed: September 21, 2015
Publication date: May 12, 2016
Applicant: CASIO COMPUTER CO., LTD.
Inventor: Mitsuyasu NAKAJIMA
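An illustrative sketch of steps (i)-(iv), using CIELAB for the brightness/color split and a bilateral filter for the base/detail split (common choices assumed here, not taken from the patent):

```python
import cv2
import numpy as np

bgr = cv2.imread("affected_area.png")

# Step (i): separate into brightness (L) and color information (a, b).
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
L, a, b = cv2.split(lab)

# Step (ii): split the brightness component into a base component
# (edge-preserving smoothing) and a detail component (residual).
base = cv2.bilateralFilter(L, d=9, sigmaColor=75, sigmaSpace=75)
detail = L.astype(np.float32) - base.astype(np.float32)

# Step (iii): highlight the detail component (gain is an assumption).
detail_boosted = detail * 2.0

# Step (iv): restore the brightness component and regenerate the image.
L_restored = np.clip(base.astype(np.float32) + detail_boosted, 0, 255).astype(np.uint8)
highlighted = cv2.cvtColor(cv2.merge([L_restored, a, b]), cv2.COLOR_LAB2BGR)
cv2.imwrite("highlighted.png", highlighted)
```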
-
Publication number: 20160133012
Abstract: An information processing apparatus for medical information includes a determination unit that determines whether specific information is to be associated with a medical image obtained by imaging of an object based on information concerning the imaging of the object, a period obtaining unit that obtains information indicating a period that has elapsed between a reference time concerning the specific information and a time of imaging of the medical image, and a processing unit that associates with the medical image the information indicating the period as the specific information in a case where the determination unit determines that the specific information is to be associated with the medical image.
Type: Application
Filed: November 3, 2015
Publication date: May 12, 2016
Inventor: Nobu Miyazawa
-
Publication number: 20160133013
Abstract: A method of determining a retinal pigment epithelium identifies a plurality of regions using captured image data of the eye and fits a curve to at least some of the regions. A curve score associated with the fitted curve is determined using at least a distance between the fitted curve and at least some of the regions, in which the contribution of the regions to the curve score is biased (asymmetrically) towards regions below the fitted curve. These steps are repeated, whereupon one of the fitted curves is selected, using the corresponding associated curve score, for classifying some of the regions as forming at least a part of a retinal pigment epithelium.
Type: Application
Filed: November 4, 2015
Publication date: May 12, 2016
Inventors: ANDREW DOCHERTY, RUIMIN PAN
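The repeated fit-and-score loop can be pictured with a quadratic curve over region centroids and an asymmetric residual weight; the curve model, weights, and sampling scheme are assumptions for illustration, not the patented method:

```python
import numpy as np

def asymmetric_score(xs, ys, coeffs, below_weight=3.0):
    """Score a fitted curve: residuals of regions lying below the curve
    contribute more, biasing selection toward the lower (RPE-like) layer."""
    residuals = ys - np.polyval(coeffs, xs)   # >0: region below curve (rows grow downward)
    weights = np.where(residuals > 0, below_weight, 1.0)
    return np.sum(weights * np.abs(residuals))

def select_curve(region_centroids, n_trials=200, sample_size=5, seed=0):
    """Repeatedly fit quadratics to random subsets of candidate regions
    and keep the fit with the best (lowest) asymmetric score."""
    rng = np.random.default_rng(seed)
    xs, ys = region_centroids[:, 0], region_centroids[:, 1]
    best = None
    for _ in range(n_trials):
        idx = rng.choice(len(xs), size=sample_size, replace=False)
        coeffs = np.polyfit(xs[idx], ys[idx], deg=2)
        score = asymmetric_score(xs, ys, coeffs)
        if best is None or score < best[0]:
            best = (score, coeffs)
    return best[1]
```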
-
Publication number: 20160133014
Abstract: An area of interest of a patient's organ may be identified based on the presence of a possible lesion during an endoscopic procedure. The location of the area of interest may then be tracked relative to the camera view being displayed to the endoscopist in real-time or near real-time during the endoscopic procedure. If the area of interest is visually marked on the display, the visual marking is moved with the area of interest as it moves within the camera view. If the area of interest moves outside the camera view, a directional indicator may be displayed to indicate the location of the area of interest relative to the camera view to assist the endoscopist in relocating the area of interest.
Type: Application
Filed: December 28, 2015
Publication date: May 12, 2016
Inventors: Alan Harris Staples, II, Karen Kaye Ramsey, Bryan Michael Hunt
-
Publication number: 20160133015
Abstract: Embodiments include a system for determining cardiovascular information for a patient. The system may include at least one computer system configured to receive patient-specific data regarding a geometry of the patient's heart, and create a three-dimensional model representing at least a portion of the patient's heart based on the patient-specific data. The at least one computer system may be further configured to create a physics-based model relating to a blood flow characteristic of the patient's heart and determine a fractional flow reserve within the patient's heart based on the three-dimensional model and the physics-based model.
Type: Application
Filed: December 31, 2015
Publication date: May 12, 2016
Applicant: HeartFlow, Inc.
Inventor: Charles A. Taylor
-
Publication number: 20160133016
Abstract: A method for a rapid automated presentation of at least two radiological data sets of a patient, comprising: (a) automatically registering the data sets in 3D space; and (b) concurrently presenting substantially matching anatomical regions in each data set.
Type: Application
Filed: January 15, 2016
Publication date: May 12, 2016
Inventors: Michael Slutsky, Shmuel Akerman, Reuven Shreiber
-
Publication number: 20160133017
Abstract: A method and associated systems for real-time subject-driven functional connectivity analysis. One or more processors receive an fMRI time series of sequentially recorded, masked, parcellated images that each represent the state of a subject's brain at the image's recording time as voxels partitioned into a constant set of three-dimensional regions of interest. The processors derive an average intensity of each region's voxels in each image and organize these intensity values into a set of time courses, where each time course contains a chronologically ordered list of average intensity values of one region. The processors then identify time-based correlations between average intensities of each pair of regions and represent these correlations in a graphical format. As each subsequent fMRI image of the same subject's brain arrives, the processors repeat this process to update the time courses, correlations, and graphical representation in real time or near-real time.
Type: Application
Filed: January 5, 2016
Publication date: May 12, 2016
Inventor: Jingyun Chen
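A compact sketch of the time-course bookkeeping and correlation step, assuming a labeled parcellation volume (`region_labels`) and a `time_courses` array with one row per region (names are illustrative, not from the patent):

```python
import numpy as np

def update_time_courses(time_courses, new_image, region_labels, n_regions):
    """Append the mean voxel intensity of each region of interest in the
    newly arrived fMRI volume to that region's time course."""
    means = [new_image[region_labels == r].mean() for r in range(n_regions)]
    return np.hstack([time_courses, np.array(means)[:, None]])

def connectivity_matrix(time_courses):
    """Pairwise Pearson correlation between region time courses
    (rows are regions, columns are time points)."""
    return np.corrcoef(time_courses)

# Start with an empty (n_regions x 0) array; as each parcellated volume
# arrives, refresh the time courses and the correlation matrix,
# approximating the real-time loop described in the abstract.
```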
-
Publication number: 20160133018
Abstract: A system can include a model to represent a volumetric deformation of a brain corresponding to brain tissue that has been displaced by at least one of disease, surgery or anatomical changes. A fusion engine can perform a coarse and/or fine fusion to align a first image of the brain with respect to a second image of the brain after a region of the brain has been displaced and to employ the deformation model to adjust one or more points on a displacement vector extending through a displaced region of the brain to compensate for spatial deformations that occur between the first and second image of the brain.
Type: Application
Filed: January 14, 2016
Publication date: May 12, 2016
Inventor: Andre G. Machado
-
Publication number: 20160133019
Abstract: A method for aerial image capturing by means of an unmanned, controllable aircraft comprising a camera, more particularly a drone, during a flight manoeuvre of said aircraft, comprising continually determining a camera position and the alignment of an optical camera axis and acquiring a series of aerial images. For each aerial image of the series, capturing of the respective aerial image is triggered by flying through a respective image trigger region with the aircraft, wherein the location of each image trigger region is determined at least by a trigger position assigned to that region, and the capture is triggered subject to the alignment of the camera axis, when flying through the region, fulfilling a defined maximum angle deviation relative to a predetermined spatial alignment.
Type: Application
Filed: April 3, 2014
Publication date: May 12, 2016
Inventors: Rüdiger J. WAGNER, Michael NADERHIRN
-
Publication number: 20160133020
Abstract: The present invention provides acceleration and enhancement methods for ultrasound scatterer structure visualization. The method includes: obtaining an ultrasonic image; calculating all values of the ultrasonic signal points in each m-th window centered at the n-th signal point to obtain a plurality of original statistical values a_{n×m}; obtaining a plurality of m-th statistical values by averaging the original statistical values in the same window; calculating a plurality of m-th weighting values based on the statistical values using different weighting formulas; multiplying each weighting value by the original statistical values corresponding to the various window sizes and summing up to obtain an ultrasound scatterer structure value of the n-th ultrasonic signal point; and generating an ultrasound scatterer structure image based on a matrix of the ultrasound scatterer values. When further combined with an interpolation method, the present invention can reduce the computation time while retaining 80% accuracy.
Type: Application
Filed: May 5, 2015
Publication date: May 12, 2016
Inventors: Po-Hsiang Tsui, Ming-Chih Ho, Chiung-Nein Chen, Argon Chen, Jia-Jiun Chen, Yu-Hsin Wang, Kuo-Chen Huang
-
Publication number: 20160133021
Abstract: An imaging position determination device includes an image reception unit that acquires an image and a position of a person within a monitoring area, an eye state detection unit that detects an open and closed state of eyes of a person from the image acquired by the image reception unit, an eye state map creation unit that creates an eye state map which shows an eye state of the person in the monitoring area based on the open and closed state of eyes of the person that is acquired by the eye state detection unit, and an adjustment amount estimation unit that determines an imaging position of the person in the monitoring area based on the eye state map that is created by the eye state map creation unit.
Type: Application
Filed: June 17, 2014
Publication date: May 12, 2016
Applicant: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Youichi GOUDA, Hiroaki YOSHIO
-
Publication number: 20160133022
Abstract: A method for tracking an object by an electronic device is described. The method includes detecting an object position in an initial frame to produce a detected object position. The method also includes measuring one or more landmark positions based on the detected object position or a predicted object position. The method further includes predicting the object position in a subsequent frame based on the one or more landmark positions. The method additionally includes determining whether object tracking is lost. The method also includes avoiding performing object detection for the subsequent frame in a case that object tracking is maintained.
Type: Application
Filed: November 12, 2014
Publication date: May 12, 2016
Inventors: Michel Adib Sarkis, Yingyong Qi, Magdi Abuelgasim Mohamed
-
Publication number: 20160133023
Abstract: A method for image processing is provided. The method includes acquiring at least one object in a recorded image, determining an orientation of the at least one acquired object, and classifying at least one acquired object, the orientation of which was determined, by comparison with a reference. The orientation is determined by calculating at least one moment of inertia of the acquired object.
Type: Application
Filed: November 10, 2015
Publication date: May 12, 2016
Inventor: Herbert Kaestle
-
Publication number: 20160133024
Abstract: A method and system provide light to project to an operation space so that a received image from the operation space will include, if an object is in the operation space, a bright region due to the reflection of light by the object, and identify a gesture according to the variation of a barycenter position, an average brightness, or an area of the bright region in successive images, for generating a corresponding command. Only simple operations and calculations are required to detect the motion of an object moving along the X, Y, or Z axis of an image, for identifying a gesture represented by the motion of the object.
Type: Application
Filed: January 15, 2016
Publication date: May 12, 2016
Inventors: Yu-Hao HUANG, En-Feng HSU
-
Publication number: 20160133025
Abstract: A method and an apparatus for detecting an interest degree of a crowd in a target position are disclosed. The interest degree detection method includes projecting a depth image obtained by photographing onto a height-top-view, the depth image including the crowd and the target position; dividing the height-top-view into cells; determining density of the crowd in each cell; determining a moving speed and a moving direction of the crowd in each cell; determining orientation of the crowd in each cell; and determining, based on the density, the moving speed, the moving direction and the orientation of the crowd, the interest degree of the crowd in each cell in the target position. According to this method, the interest degree of the crowd in the target position can be detected accurately, even at a crowded place where it is difficult to detect and track a single person.
Type: Application
Filed: November 9, 2015
Publication date: May 12, 2016
Applicant: Ricoh Company, Ltd.
Inventors: Xin WANG, Shengyin FAN, Gang GIAO, Qian WANG
-
Publication number: 20160133026
Abstract: A non-parametric method of, and system for, dimensioning an object of arbitrary shape, captures a three-dimensional (3D) point cloud of data points over a field of view containing the object and a base surface on which the object is positioned, detects a base plane indicative of the base surface from the point cloud, extracts the data points of the object from the point cloud, processes the extracted data points of the object to obtain a convex hull, and fits a bounding box of minimum volume to enclose the convex hull. The bounding box has a pair of mutually orthogonal planar faces, and the fitting is performed by orienting one of the faces to be generally perpendicular to the base plane, and by simultaneously orienting the other of the faces to be generally parallel to the base plane.
Type: Application
Filed: November 6, 2014
Publication date: May 12, 2016
Inventors: ANKUR R. PATEL, KEVIN J. O'CONNELL, CUNEYT M. TASKIRAN, JAY J. WILLIAMS
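One way to picture the pipeline (base plane, convex hull, box fitting) is sketched below with numpy/scipy; the base plane is assumed to be z = 0 and the minimum-area rectangle search is the classic hull-edge sweep, which may differ from the patented fitting:

```python
import numpy as np
from scipy.spatial import ConvexHull

def fit_bounding_box(points, floor_z=0.0):
    """Fit a minimum-area box whose bottom face is parallel to the base
    plane (assumed here to be z = floor_z) around an object point cloud.
    Returns (length, width, height)."""
    obj = points[points[:, 2] > floor_z + 0.01]   # drop base-surface points
    height = obj[:, 2].max() - floor_z            # box height above the base

    # 2D convex hull of the footprint, then a sweep over hull edge
    # directions for the minimum-area enclosing rectangle.
    hull = obj[ConvexHull(obj[:, :2]).vertices, :2]
    best = None
    for i in range(len(hull)):
        edge = hull[(i + 1) % len(hull)] - hull[i]
        angle = np.arctan2(edge[1], edge[0])
        rot = np.array([[np.cos(-angle), -np.sin(-angle)],
                        [np.sin(-angle),  np.cos(-angle)]])
        r = hull @ rot.T
        dims = r.max(axis=0) - r.min(axis=0)
        area = dims[0] * dims[1]
        if best is None or area < best[0]:
            best = (area, dims[0], dims[1])
    _, length, width = best
    return length, width, height
```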
-
Publication number: 20160133027
Abstract: A method and an apparatus for separating a foreground image are disclosed. The method includes obtaining an input image, and color information and depth information of the input image; roughly dividing, based on the depth information of the input image, the input image to obtain an initial three-color image; reducing or expanding, based on the color information of the input image, an unknown region in the initial three-color image to obtain an optimized three-color image; and separating the foreground image from the optimized three-color image. According to the method, the initial three-color image can be optimized based on the color information of the input image, so that a more accurate three-color image can be obtained; thus the foreground image can be accurately separated from the three-color image.
Type: Application
Filed: November 2, 2015
Publication date: May 12, 2016
Applicant: Ricoh Company, Ltd.
Inventors: Ying ZHAO, Gang WANG, Liyan LIU
-
Publication number: 20160133028
Abstract: According to an aspect of an exemplary embodiment, an apparatus for avoiding region of interest (ROI) re-detection includes a detector configured to detect an ROI from an input medical image; a re-detection determiner configured to determine whether the detected ROI corresponds to a previously-detected ROI using pre-stored user determination information; and an ROI processor configured to perform a process for the detected ROI based on the determination.
Type: Application
Filed: November 6, 2015
Publication date: May 12, 2016
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: HYOUNG MIN PARK
-
Publication number: 20160133029
Abstract: A palette compressed representation may be stored in the index bits, when that is possible. The savings are considerable in some embodiments. In uncompressed mode, the data uses 2304 (2048+256) bits, and in compressed mode, the data uses 1280 bits. However, with this technique, the data only uses the index bits (e.g., 256 bits), a 5:1 compression improvement over the already compressed representation and a 9:1 compression ratio with respect to the uncompressed representation.
Type: Application
Filed: November 10, 2014
Publication date: May 12, 2016
Inventor: Tomas G. Akenine-Moller
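The quoted ratios follow directly from the bit counts in the abstract; a quick check of the arithmetic:

```python
uncompressed_bits = 2048 + 256   # 2304 bits
compressed_bits = 1280           # already-compressed representation
index_only_bits = 256            # palette indices reused as the whole payload

print(compressed_bits / index_only_bits)    # 5.0 -> 5:1 vs. compressed mode
print(uncompressed_bits / index_only_bits)  # 9.0 -> 9:1 vs. uncompressed mode
```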
-
Publication number: 20160133030
Abstract: Techniques are disclosed for color selection in a desktop publishing application. A color selection technique includes receiving a selection of an image, automatically sampling a color from a pixel of the selected image, and adding the sampled color to a color swatch in a graphical user interface. The sampled color may be the predominant color in the image (e.g., the color that appears in the greatest number of pixels), or the sampled color may be the darkest or lightest color in the image. In another embodiment, several colors (e.g., two, three, four, five, six, seven, eight, nine or ten) are sampled from different pixels of the selected image, and some or all of the sampled colors are added to the color swatch. A designer can then select the sampled color(s) from the color swatch and apply the selected color(s) to one or more elements of a layout.
Type: Application
Filed: November 10, 2014
Publication date: May 12, 2016
Applicant: ADOBE SYSTEMS INCORPORATED
Inventors: Sameer Manuja, Ashish Duggal
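A rough sketch of the color-sampling step, picking the predominant color (the exact value occurring in the most pixels) plus the darkest and lightest colors by luminance; this illustrates the idea only and is not Adobe's implementation:

```python
import numpy as np
from PIL import Image

def sample_colors(path):
    """Return the predominant, darkest, and lightest colors of an image
    (a production sampler would likely quantize colors first)."""
    pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)

    # Predominant color: the exact RGB value occurring in the most pixels.
    values, counts = np.unique(pixels, axis=0, return_counts=True)
    predominant = tuple(values[counts.argmax()])

    # Darkest / lightest by relative luminance.
    luminance = pixels @ np.array([0.2126, 0.7152, 0.0722])
    darkest = tuple(pixels[luminance.argmin()])
    lightest = tuple(pixels[luminance.argmax()])
    return predominant, darkest, lightest
```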
-
Publication number: 20160133031
Abstract: A radiation CT apparatus is provided that can reliably gain a clear tomogram, free of rotation axis runout, through a single CT scan and a simple operation. When the projection data collected with radiation through a CT scan is first reconstructed through an arithmetic operation by a reconstruction arithmetic operation unit 13, temporary coordinates that have been set in advance as the coordinates of the projected rotation axis are used to construct a tomogram along a predetermined sliced surface. This tomogram is displayed on a screen for changing the rotation axis coordinates, which include the temporary coordinates, and the coordinates of the projected rotation axis can be shifted by any amount in any direction through an operation on the screen, so that the reconstruction arithmetic operation is carried out again in the reconstruction arithmetic operation unit 13.
Type: Application
Filed: November 7, 2014
Publication date: May 12, 2016
Applicant: SHIMADZU CORPORATION
Inventor: Yasuyuki Keyaki
-
Publication number: 20160133032
Abstract: A device and a method for image reconstruction at different X-ray energies that make it possible to achieve image reconstruction with higher accuracy. A device for image reconstruction at different X-ray energies includes: an X-ray source 1 that irradiates a specimen to be imaged 2 with X-rays; an energy-dispersive detector 4 that detects a characteristic X-ray emitted from the specimen to be imaged 2; a signal processor that quantifies the peak of the characteristic X-ray detected by the detector 4; and an image reconstruction device that reconstructs an image on the basis of a signal from the signal processor.
Type: Application
Filed: May 29, 2014
Publication date: May 12, 2016
Applicant: TOKYO METROPOLITAN INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Akira MONKAWA, Shoichi NAKANISHI, Shinya ABE, Mikiya KONDO, Koh HARADA
-
Publication number: 20160133033
Abstract: The present invention relates to a method of reconstruction of an object from projections, more particularly to a quantitative reconstruction of an object from projection views of the object. For example, quantitative reconstruction of an image of a human breast from projection views generated by digital breast tomosynthesis (DBT), computed tomography (CT), or standard mammography, and use of the reconstruction to identify densest regions.
Type: Application
Filed: June 6, 2014
Publication date: May 12, 2016
Inventors: Ralph Highnam, John Michael Brady, Nico Karssemeijer, Martin Yaffe
-
Publication number: 20160133034
Abstract: Disclosed are a method of modeling a haptic signal from a haptic object, a display apparatus, and a driving method thereof, which realize a tactile sense having a shape and texture of a haptic object. The method includes obtaining measurement data corresponding to a shape of a texture object while moving a sensor unit with respect to a haptic object including the texture object, obtaining force measurement data corresponding to a level of pressure applied to the haptic object, calculating shape modeling data and impulse modeling data corresponding to the texture object, based on the measurement data, calculating friction force modeling data corresponding to the texture object, based on the force measurement data, generating setting information of a haptic signal corresponding to the haptic object, based on the shape modeling data, the impulse modeling data, and the friction force modeling data, and storing the setting information of the haptic signal.
Type: Application
Filed: November 10, 2015
Publication date: May 12, 2016
Inventors: JiEun SON, Yongkyun CHOI, SeungHwan YOON
-
Publication number: 20160133035
Abstract: A visualization system for a tracer may include a processing pipeline that may generate tracing data, preprocess the data, and visualize the data. The preprocessing step may include a mechanism to process user-defined expressions or other executable code. The executable code may perform various functions including mathematical, statistical, aggregation with other data, and others. The preprocessor may perform malware analysis, test the functionality, then implement the executable code. A user may be presented with an editor or other text based user interface component to enter and edit the executable code. The executable code may be saved and later recalled as a selectable transformation for use with other data streams.
Type: Application
Filed: January 14, 2016
Publication date: May 12, 2016
Inventors: Russell S. Krajec, Alexander G. Gounares
-
Publication number: 20160133036
Abstract: The present disclosure relates to methods for displaying facility information. One such method includes causing a client terminal to render on-screen floorplan data at a desired position and resolution, wherein the floorplan data is defined by a plurality of scalable resolution independent vector images, each resolution independent vector image representing a physical space in a facility. A set of rules is executed thereby to apply determined visual characteristics to one or more of the vector images, wherein each of the one or more vector images is associated with a data point in a building management system, and wherein for a given vector image the set of rules defines a relationship between observed data point values and visual characteristics to be displayed.
Type: Application
Filed: November 11, 2015
Publication date: May 12, 2016
Inventors: Henry Chen, Weilin Zhang, Peter Lau
-
Publication number: 20160133037
Abstract: A method and apparatus for unsupervised cross-modal medical image synthesis is disclosed, which synthesizes a target modality medical image based on a source modality medical image without the need for paired source and target modality training data. A source modality medical image is received. Multiple candidate target modality intensity values are generated for each of a plurality of voxels of a target modality medical image based on corresponding voxels in the source modality medical image. A synthesized target modality medical image is generated by selecting, jointly for all of the plurality of voxels in the target modality medical image, intensity values from the multiple candidate target modality intensity values generated for each of the plurality of voxels. The synthesized target modality medical image can be refined using coupled sparse representation.
Type: Application
Filed: September 30, 2015
Publication date: May 12, 2016
Inventors: Raviteja Vemulapalli, Hien Nguyen, Shaohua Kevin Zhou
-
Publication number: 20160133038
Abstract: A display device includes: an input unit which has image data inputted from an image supply device; a detection unit which detects a position of an indicator and generates indicator information including information about the detected position; a setting unit which sets a mode for processing of the indicator information to a first mode or a second mode; a drawing unit which draws a second image generated on the basis of the indicator information and superimposes the second image on a first image generated on the basis of the inputted image data; a selection unit which outputs the indicator information to the drawing unit if the first mode is set and which outputs the indicator information to the image supply device if the second mode is set; and a drawing control unit which erases the second image if a switch from the first mode to the second mode is carried out.
Type: Application
Filed: October 12, 2015
Publication date: May 12, 2016
Inventors: Takashi Natori, Kyosuke Itahana
-
Publication number: 20160133039
Abstract: Spatially variable data associated with a geographical region, such as a map or image from multiple samples acquired by one or more airborne vehicles taken across sub-regions of the geographical region, may be aggregated and displayed. High-resolution image data of a geographical region acquired by one or more airborne vehicles may be obtained. The image data may comprise images corresponding to sub-regions of the geographical region. The images may be acquired at an image resolution corresponding to a first spatial frequency. Individual images may be analyzed to determine statistical information corresponding to the sub-regions of the geographical region. The statistical information corresponding to the sub-regions of the geographical region may be provided, for presentation to a user, by resampling the statistical information based on a second spatial frequency. The second spatial frequency may be equal to or less than the first spatial frequency.
Type: Application
Filed: November 12, 2015
Publication date: May 12, 2016
Inventors: Michael Ritter, Michael Milton
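The resampling step can be pictured as block-aggregating per-sub-region statistics onto a coarser grid, i.e. a lower spatial frequency. A toy sketch, with the function name, grid size, and choice of averaging as the aggregation all being assumptions:

```python
import numpy as np

def resample_statistics(stats, factor):
    """Aggregate a grid of per-sub-region statistics (one value per image)
    down to a coarser grid by block averaging. `factor` must divide both
    grid dimensions in this simplified sketch."""
    h, w = stats.shape
    blocks = stats.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# e.g. a 64x64 grid of per-tile statistics presented at 1/4 the spatial frequency:
tile_stats = np.random.rand(64, 64)
coarse = resample_statistics(tile_stats, 4)   # 16 x 16 grid
```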
-
Publication number: 20160133040
Abstract: A method is disclosed for reducing distortions introduced by deformation of a surface with an existing parameterization. In an exemplary embodiment, the method comprises receiving a rest pose mesh comprising a plurality of faces, a rigidity map corresponding to the rest pose mesh, and a deformed pose mesh; using the rigidity map to generate a simulation grid on the rest pose mesh, the simulation grid comprising a plurality of cells; defining a set of constraints on the simulation grid, the constraints being derived at least in part from the rigidity map; running a simulation using the simulation grid and the set of constraints to obtain a warped grid; and texture mapping the deformed pose mesh based on data from the warped grid.
Type: Application
Filed: November 10, 2014
Publication date: May 12, 2016
Applicant: DISNEY ENTERPRISES, INC.
Inventors: KENNETH JOHN MITCHELL, CHARALAMPOS KONIARIS, DARREN COSKER
-
Publication number: 20160133041
Abstract: An apparatus and method of processing three-dimensional (3D) images on a multi-layer display may generate virtual depth information based on original depth information, and display 3D images having various depth values using the generated virtual depth information. Also, the apparatus and method may appropriately provide color information to each of a plurality of display layers, thereby preventing an original image from being damaged.
Type: Application
Filed: October 21, 2014
Publication date: May 12, 2016
Inventors: Young Ran Han, Young Shin Kwak, Du Sik Park, Young Ju Jeong, Darryl Singh, Gareth Paul Bell
-
Publication number: 20160133042
Abstract: An image processing apparatus generates intermediate volume data from a plurality of volume data segments obtained as time passes so as to implement high-speed volume data. A medical imaging apparatus that includes the image processing apparatus, an ultrasonic imaging apparatus, an image processing method, and a medical image generation method are disclosed. The image processing apparatus includes a displacement vector generator configured to detect corresponding voxels between reference volume data and target volume data that has been acquired at intervals of a predetermined time period, and to generate a displacement vector between the corresponding voxels; and an intermediate volume data generator configured to generate at least one piece of intermediate volume data between the reference volume data and the target volume data by using the generated displacement vector.
Type: Application
Filed: November 6, 2015
Publication date: May 12, 2016
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Yun-Tae KIM
-
Publication number: 20160133043
Abstract: An image processing apparatus and method. The image processing apparatus includes a data obtaining unit for obtaining volume data that contains a target image; a depth-data obtaining unit for obtaining depth data that indicates a depth to the surface of the target image from an image plane; an image processing unit for processing the volume data into processed volume data based on the depth data, and obtaining a rendered image based on the processed volume data; and a display unit for displaying the rendered image.
Type: Application
Filed: January 19, 2016
Publication date: May 12, 2016
Inventors: Sung-yun KIM, Jun-kyo LEE
-
Publication number: 20160133044
Abstract: In one embodiment, panoramic images, image bubbles, or any two-dimensional views of three-dimensional subject matter are enhanced with one or more alternate viewpoints. A controller receives data indicative of a point on the two-dimensional perspective and accesses a three-dimensional location based on the point. The controller selects an image bubble based on the three-dimensional location. The three-dimensional location may be determined according to a depth map corresponding to the point. A portion of the image bubble is extracted and incorporated into the two-dimensional perspective. The resulting image may be a seamless enhanced resolution image or include a picture-in-picture enhanced resolution window including subject matter surrounding the selected point.
Type: Application
Filed: January 18, 2016
Publication date: May 12, 2016
Inventor: James D. Lynch
-
Publication number: 20160133045
Abstract: In accordance with some embodiments, a zero coverage test may determine whether a primitive such as a triangle lies on lanes between rows or columns or lines of samples. If so, the primitive can be culled in a zero coverage culling test.
Type: Application
Filed: November 6, 2014
Publication date: May 12, 2016
Inventors: Tomas G. Akenine-Moller, Jon N. Hasselgren, Carl J. Munkberg
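The zero coverage idea can be illustrated with a conservative bounding-box check: if no sample row or column passes through the triangle's extent, the triangle cannot cover any sample and may be culled. This is a simplified sketch (samples assumed at integer coordinates), not the patent's exact test:

```python
import math

def zero_coverage(v0, v1, v2):
    """Cull a triangle whose bounding box falls entirely between two
    adjacent sample rows or columns, so it cannot cover any sample."""
    xs = (v0[0], v1[0], v2[0])
    ys = (v0[1], v1[1], v2[1])
    # No integer column inside [min_x, max_x] -> triangle lies in a vertical lane.
    no_column = math.floor(max(xs)) < math.ceil(min(xs))
    # No integer row inside [min_y, max_y] -> triangle lies in a horizontal lane.
    no_row = math.floor(max(ys)) < math.ceil(min(ys))
    return no_column or no_row

# A sliver between sample columns x=2 and x=3 is culled:
print(zero_coverage((2.1, 0.0), (2.4, 5.0), (2.8, 9.5)))  # True
```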
-
Publication number: 20160133046
Abstract: An image processing apparatus includes an acquisition unit configured to acquire a bitmap image in which each of contained pixels has an alpha value indicating opacity of this pixel, and a rendering unit configured to render the bitmap image. The rendering unit is configured to, when rendering the bitmap image, refrain from performing alpha blending on a pixel contained in the bitmap image that has a specific alpha value and perform the alpha blending on a pixel contained in the bitmap image that has a different alpha value from the specific alpha value.
Type: Application
Filed: November 10, 2015
Publication date: May 12, 2016
Inventor: Hirokazu Tokumoto
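A small sketch of the rendering rule the abstract describes, assuming the "specific alpha value" is full opacity (255), so those pixels skip the blend and are copied directly; numpy arrays stand in for the bitmap:

```python
import numpy as np

def composite(dst, src, alpha):
    """Composite `src` over `dst`. Pixels whose alpha equals the specific
    value 255 (fully opaque) are copied without alpha blending; all other
    alpha values go through ordinary alpha blending."""
    out = dst.astype(np.float32).copy()
    a = alpha.astype(np.float32)[..., None] / 255.0

    opaque = alpha == 255
    blended = ~opaque
    out[opaque] = src[opaque]                              # no alpha blending
    out[blended] = (a[blended] * src[blended]
                    + (1.0 - a[blended]) * out[blended])   # standard blend
    return out.astype(np.uint8)
```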
-
Publication number: 20160133047
Abstract: Systems and methods for simulating illumination patterns on target surfaces in a space are disclosed. The system includes an input component and a simulation component. The input component receives a sampling angular range, a sampling polygon density, and a sampling polygon type. The simulation component traces sampling rays according to the sampling angular range and the sampling polygon density and type within a sampling range. The simulation component can further (1) generate an initial illumination pattern with a plurality of sampling polygon projections on the target surface; (2) assign the same value of an attribute in the sampling polygon projections defined by sampling rays through substantially the same route from the light source to the target surface; and (3) adjust the value of the attribute in the sampling polygon projection defined by sampling rays from different routes by interpolation.
Type: Application
Filed: November 3, 2015
Publication date: May 12, 2016
Inventor: Ken Moore
-
Publication number: 20160133048
Abstract: A computer-implemented method for creating an image that depicts shadowing for a specified light source even though the input data is not three-dimensional and is limited to elevation data that associates an elevation value with each of a plurality of spatial coordinates. Plumb line walls are generated between elevation points of neighboring grid cells for each elevation point meeting a specified delta elevation criterion. A shadow map is accumulated based on visibility of each pixel to the light source position, and then, in a subsequent pass through the coordinate pixels of the data, an image is created in a tangible medium with each pixel correspondingly visible or shadowed, either totally or partially. Values along one dimension may be spread over a Z-buffer range to optimally resolve visibility features.
Type: Application
Filed: November 11, 2014
Publication date: May 12, 2016
Inventor: Elaine S. Acree
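A minimal sketch of accumulating a shadow map from pure elevation data: each cell marches toward the light and is marked shadowed if the terrain rises above the ray to the light. The plumb-line walls, partial shadowing, and Z-buffer spreading described in the abstract are omitted; parameter names are illustrative:

```python
import numpy as np

def shadow_map(elevation, light_dir_xy, light_slope, step=1.0, max_steps=200):
    """For each cell of an elevation grid, march toward the light source in
    the plane and mark the cell shadowed if the terrain rises above the ray.
    `light_dir_xy` is a unit (dx, dy); `light_slope` is the tangent of the
    light elevation angle."""
    h, w = elevation.shape
    shadowed = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            px, py, ray_z = float(x), float(y), float(elevation[y, x])
            for _ in range(max_steps):
                px += light_dir_xy[0] * step
                py += light_dir_xy[1] * step
                ray_z += light_slope * step
                ix, iy = int(round(px)), int(round(py))
                if not (0 <= ix < w and 0 <= iy < h):
                    break
                if elevation[iy, ix] > ray_z:   # terrain blocks the light
                    shadowed[y, x] = True
                    break
    return shadowed
```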
-
Publication number: 20160133049
Abstract: Methods, systems, and apparatus, including medium-encoded computer program products, for a generative modeling framework for deferred geometry generation include, in one aspect, a method including: obtaining input to define a boundary of a 3D envelope for a 3D model of an object, wherein the 3D model uses one or more boundary representations to define the object in the 3D model; identifying a geometry type for the 3D envelope, wherein the geometry type has an associated 3D geometry used to create geometry details for the 3D envelope within the 3D model; manipulating the 3D model in response to input that changes at least one aspect of the 3D envelope; and rendering the 3D model on a display screen, including rendering a simplified representation of the 3D geometry within the changed 3D envelope. In addition, the method can include later generation of surface elements defining the geometry details.
Type: Application
Filed: November 12, 2014
Publication date: May 12, 2016
Inventors: Kenneth Jamieson Hill, Patricia Anne Vrobel
-
Publication number: 20160133050
Abstract: The object is to make it possible to generate slice data without the need to modify a polygon mesh that does not satisfy the conditions of a perfect solid model. A slice data generator for generating slice data representing a cross section cut from a three-dimensional modeled object has: changing means for changing topology information of a polygon mesh so that a contour polyline is obtained indicating a contour line of a cut cross section of the polygon mesh; and modifying means for acquiring the contour polyline from the polygon mesh whose topology information has been changed by the changing means, and modifying the contour polyline so that the region inside the acquired contour polyline can be normally filled; slice data is generated on the basis of the contour polyline modified by the modifying means.
Type: Application
Filed: December 19, 2014
Publication date: May 12, 2016
Applicant: ROLAND DG CORPORATION
Inventors: Takayuki Sakurai, Yasutoshi Nakamura
-
Publication number: 20160133051
Abstract: A head mounted display device includes an image display portion that transmits external scenery and displays an image so as to be capable of being visually recognized together with the external scenery. In addition, the head mounted display device includes a control unit that acquires an external scenery image including the external scenery which is visually recognized through the image display portion, recognizes an object which is visually recognized through the image display portion on the basis of the acquired external scenery image, and displays information regarding the object on the image display portion.
Type: Application
Filed: October 15, 2015
Publication date: May 12, 2016
Inventors: Masashi AONUMA, Masahide TAKANO, Kiichi HIRANO
-
Publication number: 20160133052
Abstract: An electronic device providing information through a virtual environment is disclosed. The device includes: a display; and an information providing module functionally connected with the display, wherein the information providing module displays an object corresponding to an external electronic device for the electronic device through the display, obtains information to be output through the external electronic device, and provides contents corresponding to the information in relation to a region on which the object is displayed.
Type: Application
Filed: November 6, 2015
Publication date: May 12, 2016
Inventors: Woosung CHOI, Hyuk KANG, Minji KIM, Dongil SON, Buseop JUNG, Jongho CHOI, Jooman HAN
-
Publication number: 20160133053
Abstract: Introduced herein are various techniques for displaying virtual and augmented reality content via a head-mounted display (HMD). The techniques can be used to improve the effectiveness of the HMD, as well as the general experience and comfort of users of the HMD. A binocular HMD system may present visual stabilizers to each eye that allow users to more easily fuse the digital content seen by each eye. In some embodiments the visual stabilizers are positioned within the digital content so that they converge to a shared location when viewed by a user, while in other embodiments the visual stabilizers are mapped to different locations within the user's field of view (e.g., peripheral areas) and are visually distinct from one another. These techniques allow the user to more easily fuse the digital content, thereby decreasing the eye fatigue and strain typically experienced when viewing virtual or augmented reality content.
Type: Application
Filed: November 9, 2015
Publication date: May 12, 2016
Inventor: Sina FATEH
-
Publication number: 20160133054
Abstract: To appropriately superimpose and display a virtual object on an image of a real space, an information processing apparatus according to an exemplary embodiment of the present invention determines the display position of the virtual object based on information indicating an allowable degree of superimposition of a virtual object on each real object in the image of the real space, and a distance from a real object for which a virtual object is to be displayed in association with the real object.
Type: Application
Filed: November 9, 2015
Publication date: May 12, 2016
Inventors: Tomoya Honjo, Masakazu Matsugu, Yasuhiro Komori, Yoshinori Ito, Hideo Noro, Akira Ohno
-
Publication number: 20160133055
Abstract: Introduced herein are various techniques for displaying virtual and augmented reality content via a head-mounted display (HMD). The techniques can be used to improve the effectiveness of the HMD, as well as the general experience and comfort of users of the HMD. An HMD may increase and/or decrease the resolution of certain areas in digital content that is being viewed to more accurately mimic a user's high resolution and low resolution fields of view. For example, the HMD may monitor the user's eye movement to identify a focal point of the user's gaze, and then increase the resolution in an area surrounding the focal point, decrease the resolution elsewhere, or both. Predictive algorithms could also be employed to identify which areas are likely to be the subject of the user's gaze in the future, which allows the HMD to present the regionally-focused content in real-time.
Type: Application
Filed: November 9, 2015
Publication date: May 12, 2016
Inventor: Sina FATEH
-
Publication number: 20160133056
Abstract: An interactive mixed reality simulator is provided that includes a virtual 3D model of internal or hidden features of an object; a physical model or object being interacted with; and a tracked instrument used to interact with the physical object. The tracked instrument can be used to simulate or visualize interactions with internal features of the physical object represented by the physical model. In certain embodiments, one or more of the internal features can be present in the physical model. In another embodiment, some internal features do not have a physical presence within the physical model.
Type: Application
Filed: December 28, 2015
Publication date: May 12, 2016
Inventors: Samsun LAMPOTANG, Nikolaus GRAVENSTEIN, David Erik LIZDAS, Isaac Thomas LURIA, Matthew James PETERSON
-
Publication number: 20160133057
Abstract: An information processing system that acquires video data captured by an image pickup unit; detects an object from the video data; detects a condition corresponding to the image pickup unit; and controls a display to display content associated with the object at a position other than a detected position of the object based on the condition corresponding to the image pickup unit.
Type: Application
Filed: January 13, 2016
Publication date: May 12, 2016
Applicant: Sony Corporation
Inventors: Akihiko Kaino, Masaki Fukuchi, Tatsuki Kashitani, Kenichiro Ooi, Jingjing Guo
-
Publication number: 20160133058
Abstract: An information processing system that acquires video data captured by an image pickup unit; detects an object from the video data; detects a condition corresponding to the image pickup unit; and controls a display to display content associated with the object at a position other than a detected position of the object based on the condition corresponding to the image pickup unit.
Type: Application
Filed: January 13, 2016
Publication date: May 12, 2016
Applicant: Sony Corporation
Inventors: Akihiko KAINO, Masaki FUKUCHI, Tatsuki KASHITANI, Kenichiro OOI, Jingjing GUO
-
Publication number: 20160133059
Abstract: A leader line arrangement position determining apparatus includes a receiving unit, a determining unit, and an arranging unit. The receiving unit receives designation of a position of a viewpoint to display an object in a three-dimensional CAD space in which the object is arranged. The determining unit, when the three-dimensional CAD space in which the object is arranged is displayed from the designated viewpoint, determines a shape of the object displayed when viewed from the viewpoint. The arranging unit arranges one end of a leader line at a position determined from the determined shape of the object.
Type: Application
Filed: October 29, 2015
Publication date: May 12, 2016
Applicant: FUJITSU LIMITED
Inventors: Masahiko Yamada, Terutoshi Taguchi, Shou SUZUKI