Patent Applications Published on January 12, 2017
-
Publication number: 20170011509
Abstract: Provided is a method of operating a medical imaging apparatus, comprising: acquiring a first image of a first type corresponding to a first respiratory state of an object; determining motion information of the object with respect to a respiratory state, based on first and second images of a second type respectively corresponding to the first respiratory state and a second respiratory state of the object; and generating a second image of the first type corresponding to the second respiratory state by applying the motion information to the first image of the first type.
Type: Application
Filed: June 1, 2016
Publication date: January 12, 2017
Applicant: Samsung Medison Co., Ltd.
Inventors: Ji-won RYU, Jae-il KIM, Won-chul BANG, Young-taek OH, Kyong-joon LEE, Jung-woo CHANG, Ja-yeon JEONG
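The generating step amounts to warping the first-type image with motion information derived from the second-type images. The sketch below illustrates only that final warping step, assuming the motion information is already available as a per-pixel displacement field; the function name and nearest-neighbour sampling are illustrative choices, not the applicant's method.

```python
# Minimal sketch: warp a 2D image with a per-pixel displacement field standing
# in for "motion information" between two respiratory states. Nearest-neighbour
# sampling with border clamping; how the field itself is derived is not shown.

def warp_image(image, disp_y, disp_x):
    """Sample each output pixel from (y - dy, x - dx) in the input image."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            src_y = min(max(int(round(y - disp_y[y][x])), 0), h - 1)
            src_x = min(max(int(round(x - disp_x[y][x])), 0), w - 1)
            out[y][x] = image[src_y][src_x]
    return out

if __name__ == "__main__":
    img = [[float(y)] * 8 for y in range(8)]   # row index as intensity
    dy = [[1.0] * 8 for _ in range(8)]         # shift content down by one row
    dx = [[0.0] * 8 for _ in range(8)]
    print(warp_image(img, dy, dx)[3][2])       # 2.0: row 3 now holds row 2
```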
-
Publication number: 20170011510
Abstract: An image processor includes: a memory; and a processor, wherein the processor is configured to: extract an arc area in an image captured by an image sensor; and determine whether a portion that protrudes from an arc in the arc area satisfies a first reference regarding a shape and detect the portion as a colony candidate when it is determined that the portion satisfies the first reference.
Type: Application
Filed: June 30, 2016
Publication date: January 12, 2017
Inventors: Susumu Endo, Masaki Ishihara, Masahiko Sugimura, Takayuki Baba, Yusuke Uehara, Akira Miyazaki, Hirohisa Naito, Hiroaki Takebe
-
Publication number: 20170011511
Abstract: The following concerns a method for co-localization of microscopy or histology stains by the assembly of a virtual image from one or more imaging operations. In particular, the method decreases the time required to obtain multiple labeled antigen or protein histology images of a biological sample. The method includes imaging the tissue as it is sliced by a microtome with a knife edge scanning microscope and spatially aligning the samples by the generated images. The spatial alignment of samples enabled by the method allows a panel of different antigen or protein secondary or functional stains to be compared across different sample slices, thereby allowing concurrent secondary stains of tissues and cells.
Type: Application
Filed: July 8, 2016
Publication date: January 12, 2017
Inventors: Matthew GOODMAN, Todd HUFFMAN, Cody DANIEL
-
Publication number: 20170011512
Abstract: A nucleolus detection unit, which detects nucleoli in a plurality of cells in a cell image obtained by imaging the cells, and a cell recognition unit, which acquires information indicating a distance between the nucleoli and recognizes the individual cells based on the information indicating the distance, are provided.
Type: Application
Filed: September 23, 2016
Publication date: January 12, 2017
Applicant: FUJIFILM Corporation
Inventor: Kenta MATSUBARA
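As a rough illustration of recognizing individual cells from inter-nucleolus distances, the sketch below groups nucleolus centroids whose pairwise distance falls under a threshold; the threshold and single-link grouping rule are assumptions for demonstration, not the applicant's criterion.

```python
# Minimal sketch: group detected nucleolus centroids into cells using only
# pairwise distances (single-link grouping with a distance threshold).
from math import hypot

def group_nucleoli(points, max_dist):
    """Union nucleoli closer than `max_dist`; each resulting group is taken
    to belong to one cell. Returns a list of groups of point indices."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if hypot(points[i][0] - points[j][0],
                     points[i][1] - points[j][1]) <= max_dist:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

if __name__ == "__main__":
    centroids = [(10, 10), (12, 11), (80, 80)]
    print(group_nucleoli(centroids, max_dist=5))  # [[0, 1], [2]]
```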
-
Publication number: 20170011513
Abstract: Apparatus and methods comprise examination of a subject using images of the subject. The images can provide a non-invasive analysis technique and can include a plurality of images of a portion of the subject at different times after a temperature stimulus is applied to the subject. An image of the portion of the subject can be aligned such that each pixel of the image corresponds to the same point on the subject over a sequence of images of the portion. The sequence of images can be processed after aligning the images such that data is extracted from the images. The extracted data can be used to make decisions regarding the health status of the subject. Additional apparatus, systems, and methods are disclosed.
Type: Application
Filed: September 23, 2016
Publication date: January 12, 2017
Inventors: Sanjay Krishna, Sanchita Krishna, Majeed M. Hayat, Pradeep Sen, Maziar Yaesoubi, Sebastian Eugenio Godoy, Ajit Vijay Barve
-
Publication number: 20170011514
Abstract: The invention provides methods and apparatus for image processing that perform image segmentation on data sets in two- and/or three-dimensions so as to resolve structures that have the same or similar grey values (and that would otherwise render with the same or similar intensity values) and that, thereby, facilitate visualization and processing of those data sets.
Type: Application
Filed: September 26, 2016
Publication date: January 12, 2017
Applicant: PME IP PTY LTD
Inventors: MALTE WESTERHOFF, DETLEV STALLING, MARTIN SEEBASS
-
Publication number: 20170011515
Abstract: The present invention relates to a method and apparatus for measuring an ultrasonic image. The method comprises: a measuring template loading step: loading a measuring template according to a received instruction; and a measuring template displaying step: displaying a selected measuring template at a designated position on the ultrasonic image.
Type: Application
Filed: June 23, 2016
Publication date: January 12, 2017
Inventors: Gang Liu, Shujuan An, Yimeng Lin, Jiajiu Yang
-
Publication number: 20170011516
Abstract: A medical imaging system configured to receive first image information corresponding with one or more images acquired at a first time, the one or more images including a lesion; receive second image information corresponding with one or more images of the lesion acquired at another time; render volumes of the lesion for each image; and overlay the two volumes. Other factors and/or indicators, such as vascularization indicators, may be calculated and compared between the first image information and second image information.
Type: Application
Filed: February 5, 2015
Publication date: January 12, 2017
Inventors: Allen David SNOOK, Michael R. VION, Julia DMITRIEVA, Junzheng MAN
-
Publication number: 20170011517
Abstract: The invention is a method for estimating the amount of analyte in a fluid sample, and in particular in a bodily fluid. The sample is mixed with a reagent able to form a color indicator in the presence of the analyte. The sample is then illuminated by a light beam produced by a light source; an image sensor forms an image of the beam transmitted by the sample, from which image a concentration of the analyte in the fluid is estimated. The method is intended to be implemented in compact analyzing systems. One targeted application is the determination of the glucose concentration in blood.
Type: Application
Filed: July 7, 2016
Publication date: January 12, 2017
Applicants: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES, AVALUN
Inventors: Jean-Guillaume COUTARD, Patrick POUTEAU, Myriam Laure CUBIZOLLES
-
Publication number: 20170011518
Abstract: The invention relates to a method for mapping the crystal orientations of a polycrystalline material, the method comprising: receiving (21) a series of images of the polycrystalline material, which images are acquired by an acquiring device in respective irradiation geometries; estimating (22) at least one intensity profile for at least one point of the material from the series of images, each intensity profile representing the intensity associated with the point in question as a function of irradiation geometry; and determining (24) a crystal orientation for each point in question of the material by comparing (23) the intensity profile associated with said point in question to theoretical signatures of intensity profiles of known crystal orientations, which signatures are contained in a database.
Type: Application
Filed: January 23, 2015
Publication date: January 12, 2017
Inventor: Cyril LANGLOIS
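The comparison against database signatures can be pictured as nearest-neighbour matching of intensity profiles. The sketch below assumes an L2 distance and a small in-memory database, both illustrative choices.

```python
# Minimal sketch: match a measured intensity-vs-irradiation-geometry profile
# against a database of theoretical signatures and return the best-matching
# crystal orientation. The L2 distance used here is an illustrative choice.

def match_orientation(profile, signature_db):
    """`profile` is a list of intensities (one per irradiation geometry);
    `signature_db` maps an orientation label to a theoretical profile of
    the same length. Returns (best_orientation, distance)."""
    best, best_dist = None, float("inf")
    for orientation, signature in signature_db.items():
        dist = sum((p - s) ** 2 for p, s in zip(profile, signature)) ** 0.5
        if dist < best_dist:
            best, best_dist = orientation, dist
    return best, best_dist

if __name__ == "__main__":
    db = {"(100)": [1.0, 0.2, 0.1], "(111)": [0.1, 0.9, 0.8]}
    measured = [0.15, 0.85, 0.75]
    print(match_orientation(measured, db))   # ('(111)', small distance)
```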
-
Publication number: 20170011519
Abstract: A detection area 112 is set in a three-dimensional space in which a subject exists. When an actual hand enters the detection area 112, coordinate points (white and black dots) represented by pixels making up a silhouette 114 of the hand in a depth image enter the detection area 112. In the detection area 112, a reference vector 126 is set that shows the direction which the hand should face relative to the shoulder as a reference point 122. Then, an inner product between two vectors, a vector from the reference point 122 to each of the coordinate points and the reference vector 126, is calculated, followed by comparison between the inner products. Positions of coordinate points whose inner products are ranked high are acquired as the position of tips of the hand.
Type: Application
Filed: December 1, 2014
Publication date: January 12, 2017
Applicant: Sony Interactive Entertainment Inc.
Inventors: Akio Ohba, Hiroyuki Segawa, Tetsugo Inada, Hidehiko Ogasawara, Hirofumi Okamoto
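The ranking by inner products described above maps directly onto a few lines of code. The sketch below assumes the silhouette points, reference point, and reference vector are already given; the values are illustrative.

```python
# Minimal sketch: rank candidate points inside a detection area by the inner
# product between (point - reference_point) and a reference vector, and keep
# the top-ranked points as the "tip" positions. The real system derives these
# inputs from a depth image.

def tip_candidates(points, reference_point, reference_vector, top_k=3):
    """Return the `top_k` points whose offset from `reference_point` has the
    largest dot product with `reference_vector` (i.e. extend furthest in the
    direction the hand should face)."""
    def score(p):
        offset = [p[i] - reference_point[i] for i in range(3)]
        return sum(o * r for o, r in zip(offset, reference_vector))
    return sorted(points, key=score, reverse=True)[:top_k]

if __name__ == "__main__":
    shoulder = (0.0, 1.4, 0.0)          # reference point
    forward = (0.0, 0.0, 1.0)           # direction the hand should face
    silhouette = [(0.1, 1.2, 0.55), (0.1, 1.2, 0.60), (0.1, 1.3, 0.30)]
    print(tip_candidates(silhouette, shoulder, forward, top_k=2))
```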
-
Publication number: 20170011520
Abstract: Disclosed examples include image processing methods and systems to process image data, including computing a plurality of scaled images according to input image data for a current image frame, computing feature vectors for locations of the individual scaled images, classifying the feature vectors to determine sets of detection windows, and grouping detection windows to identify objects in the current frame, where the grouping includes determining first clusters of the detection windows using non-maxima suppression grouping processing, determining positions and scores of second clusters using mean shift clustering according to the first clusters, and determining final clusters representing identified objects in the current image frame using non-maxima suppression grouping of the second clusters.
Type: Application
Filed: July 8, 2016
Publication date: January 12, 2017
Applicant: Texas Instruments Incorporated
Inventors: Manu Mathew, Soyeb Noormohammed Nagori, Shyam Jagannathan
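A rough sketch of the grouping stages follows: greedy non-maxima suppression forms first clusters, and a score-weighted mean of each cluster's members stands in for the mean-shift refinement. The IoU threshold and weighting are illustrative assumptions, not the disclosed processing.

```python
# Minimal sketch: greedy NMS groups detection windows into clusters, then each
# cluster is refined to a single window by a score-weighted mean of its members.

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h, score) windows."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def group_detections(windows, iou_thresh=0.5):
    """`windows` is a list of (x, y, w, h, score). Returns one refined
    (x, y, w, h, score) window per cluster."""
    remaining = sorted(windows, key=lambda w: w[4], reverse=True)
    clusters = []
    while remaining:
        seed = remaining.pop(0)
        members = [seed] + [w for w in remaining if iou(seed, w) >= iou_thresh]
        remaining = [w for w in remaining if iou(seed, w) < iou_thresh]
        total = sum(w[4] for w in members)
        refined = tuple(sum(w[i] * w[4] for w in members) / total
                        for i in range(4)) + (total,)
        clusters.append(refined)
    return clusters

if __name__ == "__main__":
    dets = [(10, 10, 40, 40, 0.9), (12, 11, 40, 40, 0.8), (200, 50, 30, 30, 0.7)]
    for cluster in group_detections(dets):
        print(cluster)
```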
-
Publication number: 20170011521
Abstract: The present disclosure provides systems and methods for using two imaging modalities for imaging an object at two different resolutions. For example, the system may utilize a first modality (e.g., ultrasound or electromagnetic radiation) to generate image data at a first resolution. The system may then utilize the other modality to generate image data of portions of interest at a second resolution that is higher than the first resolution. In another embodiment, one imaging modality may be used to resolve an ambiguity, such as ghost images, in image data generated using another imaging modality.
Type: Application
Filed: September 6, 2016
Publication date: January 12, 2017
Inventors: Jesse R. Cheatham, III, Roderick A Hyde, Muriel Y. Ishikawa, Jordin T. Kare, Craig J. Mundie, Nathan P. Myhrvold, Robert C. Petroski, Eric D. Rudder, Desney S. Tan, Clarence T. Tegreene, Charles Whitmer, Andrew Wilson, Jeannette M. Wing, Lowell L. Wood, JR., Victoria Y.H. Wood
-
Publication number: 20170011522
Abstract: The invention relates to a method for determining an unknown position, i.e. height and/or orientation, of a light source within a locality. The determination is based on a first image of a scene within the locality acquired by a camera in such a manner as to contain a light footprint of light emitted by the light source from the unknown position. The method includes steps of processing the first image to determine one or more characteristics of at least the portion of the light footprint within the first image, comparing the determined characteristics with one or more corresponding known characteristics of a light footprint of light emitted by the light source from a known position to determine a deviation between the determined and the known characteristics, and determining the unknown position of the light source based on the determined deviation.
Type: Application
Filed: January 28, 2015
Publication date: January 12, 2017
Inventors: RUBEN RAJAGOPALAN, HARRY BROERS
-
Publication number: 20170011523
Abstract: An image processing apparatus includes an acquisition unit, a first detection unit, a selection unit, and a correction unit. The acquisition unit acquires an image including a target object having a plurality of parts. The first detection unit detects a candidate region of each of the plurality of parts of the target object included in the acquired image using a previously learned model. The selection unit selects, based on the candidate region detected by the first detection unit, a first part having relatively high reliability and a second part having relatively low reliability from among the plurality of parts. The correction unit corrects the model by changing a position of the second part based on the first part selected by the selection unit.
Type: Application
Filed: June 27, 2016
Publication date: January 12, 2017
Inventors: Koichi Magai, Masakazu Matsugu, Masato Aoba, Yasuo Katano, Takayuki Saruta
-
Publication number: 20170011524
Abstract: A method for depth mapping includes projecting a pattern of optical radiation onto an object. A first image of the pattern on the object is captured using a first image sensor, and this image is processed to generate pattern-based depth data with respect to the object. A second image of the object is captured using a second image sensor, and the second image is processed together with another image to generate stereoscopic depth data with respect to the object. The pattern-based depth data is combined with the stereoscopic depth data to create a depth map of the object.
Type: Application
Filed: September 21, 2016
Publication date: January 12, 2017
Inventors: Alexander Shpunt, Gerard Medioni, Daniel Cohen, Erez Sali, Ronen Deitch
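The final combination step can be illustrated by fusing two depth maps. The rule below (average where both methods give a value, otherwise take whichever exists) is only one plausible reading of "combined", not the patented method.

```python
# Minimal sketch: merge a pattern-based depth map with a stereoscopic depth map.
# `missing` marks pixels where a method produced no depth value.

def fuse_depth(pattern_depth, stereo_depth, missing=0.0):
    """Both inputs are 2D lists of the same shape; returns the fused map."""
    h, w = len(pattern_depth), len(pattern_depth[0])
    fused = [[missing] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            p, s = pattern_depth[y][x], stereo_depth[y][x]
            if p != missing and s != missing:
                fused[y][x] = 0.5 * (p + s)     # both methods agree on coverage
            else:
                fused[y][x] = p if p != missing else s
    return fused

if __name__ == "__main__":
    pattern = [[1.0, 0.0], [1.2, 1.1]]
    stereo = [[1.1, 0.9], [0.0, 1.3]]
    print(fuse_depth(pattern, stereo))   # [[1.05, 0.9], [1.2, 1.2]]
```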
-
Publication number: 20170011525
Abstract: An image capturing apparatus includes: a first camera module and a second camera module having different optical characteristics and configured to capture a same subject; and a controller configured to set a region including the subject as a first region of interest (ROI) in a first image captured by the first camera module and to detect a second ROI matching the first ROI in a second image captured by the second camera module, based on a difference in optical characteristics of the first camera module and optical characteristics of the second camera module.
Type: Application
Filed: December 30, 2015
Publication date: January 12, 2017
Inventors: Il-do KIM, Woo-seok CHOI
-
Publication number: 20170011526
Abstract: In a method and magnetic resonance apparatus for segmenting a balloon-type volume having an inner surface and an outer surface in an image data record that is provided to a computer, the image data record at least partially mapping a balloon-type volume, the computer is provided with a starting area and determines a first boundary surface as an inner surface of the balloon-type volume. The computer is provided with a starting surface in the balloon-type volume, and determines a second boundary surface as an outer surface of the balloon-type volume on the basis of the starting surface. The balloon-type volume is determined in the computer as a volume within the first boundary surface and the second boundary surface.
Type: Application
Filed: July 6, 2016
Publication date: January 12, 2017
Applicant: Siemens Healthcare GmbH
Inventors: Alexander Brost, Christoph Forman, Tanja Kurzendorfer
-
Publication number: 20170011527
Abstract: There is provided an information processing apparatus to more accurately specify a type of event that is defined based on the sensor information and corresponds to the action of the user, the information processing apparatus including: a data acquiring section configured to acquire sensing data generated due to an action of a target; and an event specifying section configured to specify an event corresponding to the action based on a pattern shown in the sensing data and a context of the action. Provided is an information processing method, including: sensing an action of a target; transmitting sensing data acquired by the sensing; and performing, by a processor of an information processing apparatus that receives the sensing data, a process of specifying an event corresponding to the action based on a pattern shown in the sensing data and a context of the action.
Type: Application
Filed: January 13, 2015
Publication date: January 12, 2017
Applicant: SONY CORPORATION
Inventors: Hideyuki MATSUNAGA, Kosei YAMASHITA
-
Publication number: 20170011528
Abstract: A method for controlling tracking using a color model is disclosed. The method includes obtaining a window in a second frame of a video image corresponding to a window in a first frame of the video image using a tracking algorithm in a tracking mode, wherein each pixel in the video image has at least one color component. The method further includes defining a background area around the window in the first frame, assigning a pixel confidence value for each pixel in the second frame according to a color model, assigning a window confidence value for the window in the second frame according to the pixel confidence values of pixels in the window in the second frame, if the window confidence value is greater than a first confidence threshold, selecting the tracking mode, and if the window confidence value is not greater than the first confidence threshold, selecting a mode different from the tracking mode.
Type: Application
Filed: July 8, 2016
Publication date: January 12, 2017
Inventors: Fabrice URBAN, Lionel Oisel, Tomas Enrique Crivelli
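A toy version of the confidence computation and mode selection might look like the following; the histogram color model, bin count, and threshold are illustrative assumptions rather than the disclosed model.

```python
# Minimal sketch: score a tracked window with a simple color model (normalized
# histogram over one 8-bit color component) and pick the operating mode from a
# confidence threshold.

def build_color_model(pixels, bins=8):
    """Normalized histogram of 8-bit color values for the tracked object."""
    hist = [0.0] * bins
    for v in pixels:
        hist[min(v * bins // 256, bins - 1)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def window_confidence(window_pixels, model, bins=8):
    """Mean per-pixel confidence, each pixel scored by its model bin weight."""
    scores = [model[min(v * bins // 256, bins - 1)] for v in window_pixels]
    return sum(scores) / len(scores) if scores else 0.0

def select_mode(confidence, threshold=0.2):
    return "tracking" if confidence > threshold else "re-detection"

if __name__ == "__main__":
    object_pixels = [200, 210, 220, 205, 215]    # first-frame window
    model = build_color_model(object_pixels)
    next_window = [208, 212, 190, 60, 211]       # second-frame window
    conf = window_confidence(next_window, model)
    print(conf, select_mode(conf))               # 0.6 tracking
```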
-
Publication number: 20170011529
Abstract: A video analysis system includes: a video data acquiring means that acquires video data; a moving object detecting means that detects a moving object from video data acquired by the video data acquiring means, by using a moving object detection parameter, which is a parameter for detecting a moving object; an environment information collecting means that collects environment information representing an external environment of a place where the video data acquiring means is installed; and a parameter changing means that changes the moving object detection parameter used when the moving object detecting means detects a moving object, on the basis of the environment information collected by the environment information collecting means.
Type: Application
Filed: February 5, 2015
Publication date: January 12, 2017
Applicant: NEC Corporation
Inventor: Keiichi URASHITA
-
Publication number: 20170011530
Abstract: Systems and methods for providing remote approval of an image for printing are provided. One system includes a processing circuit in communication with an image capturing device that is configured to capture an image of a printed product. The processing circuit is configured to process the captured image into a processed image accurate to within a tolerance in a color space to indicate the visual appearance of one or more colors. The color space is a standardized color space, such as sRGB or CIELAB. The processing circuit is further configured to transmit the processed image to a display located remote from the image capturing device and to receive an input signal from a remote input device to allow a user to approve or reject the displayed processed image for printing on a print device.
Type: Application
Filed: September 23, 2016
Publication date: January 12, 2017
Inventors: Rick C. Honeck, Adam Nelson, Stephen J. Daily, Jon Ubert, John C. Seymour, Michael D. Sisco
-
Publication number: 20170011531
Abstract: A palette compressed representation may be stored in the index bits, when that is possible. The savings are considerable in some embodiments. In uncompressed mode, the data uses 2304 (2048+256) bits, and in compressed mode, the data uses 1280 bits. However, with this technique, the data only uses the index bits (e.g., 256 bits), a 5:1 compression improvement over the already compressed representation and a 9:1 compression ratio with respect to the uncompressed representation.
Type: Application
Filed: September 7, 2016
Publication date: January 12, 2017
Inventor: Tomas G. Akenine-Moller
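The quoted figures are easy to verify; the snippet below reproduces the 5:1 and 9:1 ratios under the stated assumption that only the 256 index bits are needed.

```python
# Worked version of the bit counts quoted in the abstract.
uncompressed_bits = 2048 + 256          # 2304 bits in uncompressed mode
compressed_bits = 1280                  # already-compressed representation
index_only_bits = 256                   # palette data packed into index bits

print(compressed_bits / index_only_bits)    # 5.0 -> 5:1 vs. compressed
print(uncompressed_bits / index_only_bits)  # 9.0 -> 9:1 vs. uncompressed
```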
-
Publication number: 20170011532
Abstract: Color values may be compressed using a palette-based encoder. Clusters of color values may be identified, and color values within a cluster encoded with respect to a color value having a predefined characteristic. Clusters that have pixels or samples with a constant color value may also be encoded.
Type: Application
Filed: September 7, 2016
Publication date: January 12, 2017
Inventors: Tomas G. Akenine-Moller, Jim K. Nilsson
-
Publication number: 20170011533
Abstract: Optimal resilience to errors in packetized streaming 3-D wireframe animation is achieved by partitioning the stream into layers and applying unequal error correction coding to each layer independently to maintain the same overall bitrate. The unequal error protection scheme for each of the layers combined with error concealment at the receiver achieves graceful degradation of streamed animation at higher packet loss rates than approaches that do not account for subjective parameters such as visual smoothness.
Type: Application
Filed: September 23, 2016
Publication date: January 12, 2017
Inventors: Joern Ostermann, Sokratis Varakliotis
-
Publication number: 20170011534
Abstract: A method for generating a synthetic two-dimensional mammogram with enhanced contrast for structures of interest includes acquiring a three-dimensional digital breast tomosynthesis volume having a plurality of voxels. A three-dimensional relevance map that encodes for the voxels the relevance of the underlying structure for a diagnosis is generated. A synthetic two-dimensional mammogram is calculated based on the three-dimensional digital breast tomosynthesis volume and the three-dimensional relevance map.
Type: Application
Filed: July 6, 2015
Publication date: January 12, 2017
Inventors: Maria Jimena Costa, Anna Jerebko, Michael Kelm, Olivier Pauly, Alexey Tsymbal
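One plausible way to picture "calculated based on the volume and the relevance map" is a relevance-weighted projection along the depth axis, sketched below; the weighting scheme is an assumption for illustration, not the disclosed calculation.

```python
# Minimal sketch: collapse a 3D volume into a synthetic 2D image using a
# per-voxel relevance map as projection weights along the depth axis.

def synthetic_projection(volume, relevance):
    """`volume` and `relevance` are [z][y][x] lists of equal shape.
    Returns a [y][x] image of relevance-weighted averages along z."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    image = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            wsum = sum(relevance[z][y][x] for z in range(nz))
            if wsum > 0:
                image[y][x] = sum(volume[z][y][x] * relevance[z][y][x]
                                  for z in range(nz)) / wsum
    return image

if __name__ == "__main__":
    vol = [[[0.1, 0.2], [0.3, 0.4]],
           [[0.9, 0.2], [0.3, 0.4]]]       # slice 1 holds a bright structure
    rel = [[[0.1, 1.0], [1.0, 1.0]],
           [[5.0, 1.0], [1.0, 1.0]]]       # ...which the relevance map favors
    print(synthetic_projection(vol, rel))  # bright voxel dominates pixel (0, 0)
```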
-
Publication number: 20170011535
Abstract: A method, apparatus, and computer readable medium for removing an unwanted object from an image volume are provided. Volumetric data of an object of study is generated in a radiographic scan. Volumetric data of the unwanted object is obtained. The two sets of volumetric data are registered in a common coordinate system. The unwanted object is removed from the volumetric data of the object of study to create modified volumetric data of the object of study. Data from voxels surrounding the removed unwanted object may be used to populate voxels corresponding to the unwanted object with interpolated data. A plurality of forward projections are performed on the modified volumetric data of the object of study, and a tomogram with the unwanted object removed is constructed.
Type: Application
Filed: July 9, 2015
Publication date: January 12, 2017
Inventors: Ciamak Abkai, Kai Lindenberg
-
Publication number: 20170011536
Abstract: Methods, apparatuses, systems, and software for extended phase correction in phase sensitive Magnetic Resonance Imaging. A magnetic resonance image or images may be loaded into a memory. Two vector images A and B associated with the loaded image or images may be calculated either explicitly or implicitly so that a vector orientation by one of the two vector images at a pixel is substantially determined by a background or error phase at the pixel, and the vector orientation at the pixel by the other vector image is substantially different from that determined by the background or error phase at the pixel. A sequenced region growing phase correction algorithm may be applied to the vector images A and B to construct a new vector image V so that a vector orientation of V at each pixel is substantially determined by the background or error phase at the pixel.
Type: Application
Filed: September 20, 2016
Publication date: January 12, 2017
Applicant: Board of Regents, The University of Texas System
Inventor: Jingfei MA
-
Publication number: 20170011537
Abstract: A summary spline curve can be constructed from multiple animation spline curves. Control points for each of the animation spline curves can be included to form a combined set of control points for the summary spline curve. Each of the animation spline curves can then be divided into spline curve segments between each neighboring pair of control points in the combined set of control points. For each neighboring pair, the spline curve segments can be normalized and averaged to determine a summary spline curve segment. These summary spline curve segments are combined to determine a summary spline curve. The summary spline curve can then be displayed and/or modified. Modifications to the summary spline curve can result in modifications to the animation spline curves.
Type: Application
Filed: July 10, 2015
Publication date: January 12, 2017
Inventor: Tom Hahn
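A simplified version of the construction follows: the control-point times of all curves are combined, each curve is evaluated on the combined knots, and the values are averaged. Piecewise-linear evaluation and plain averaging stand in for the actual spline segments and the normalize-and-average step; both are illustrative simplifications.

```python
# Minimal sketch: build a summary curve from several animation curves given as
# sorted lists of (time, value) control points.

def eval_linear(curve, t):
    """Piecewise-linear evaluation of a curve at time t."""
    if t <= curve[0][0]:
        return curve[0][1]
    for (t0, v0), (t1, v1) in zip(curve, curve[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
            return v0 + a * (v1 - v0)
    return curve[-1][1]

def summary_curve(curves):
    """Combine all control-point times, evaluate every curve on the combined
    knots, and average the values to form the summary curve."""
    knots = sorted({t for curve in curves for t, _ in curve})
    return [(t, sum(eval_linear(c, t) for c in curves) / len(curves))
            for t in knots]

if __name__ == "__main__":
    a = [(0.0, 0.0), (2.0, 2.0)]
    b = [(0.0, 1.0), (1.0, 1.0), (2.0, 3.0)]
    print(summary_curve([a, b]))   # [(0.0, 0.5), (1.0, 1.0), (2.0, 2.5)]
```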
-
Publication number: 20170011538
Abstract: Methods, apparatus, systems, devices, and computer program products directed to augmenting reality with respect to real-world places, and/or real-world scenes that may include real-world places may be provided. Among the methods, apparatus, systems, devices, and computer program products is a method directed to augmenting reality via a device. The method may include capturing a real-world view that includes a real-world place, identifying the real-world place, determining an image associated with the real-world place familiar to a user of the device viewing the real-world view, and/or augmenting the real-world view that includes the real-world place with the image of the real-world place familiar to a user viewing the real-world view.
Type: Application
Filed: January 24, 2015
Publication date: January 12, 2017
Applicant: PCMS Holdings, Inc.
Inventor: Mona Singh
-
Publication number: 20170011539
Abstract: An image processing apparatus and method can combine images that capture the surrounding area of an automobile while maintaining continuity without causing discomfort. An image processing apparatus includes an image acquisition unit that acquires first and second images respectively capturing surrounding areas of an automobile including first and second areas, an outline detector that performs outline detection on the first and second images and generates first and second outlines respectively, a determiner that determines whether the first and second outlines each include an outline of the same object, and an area selector that performs area setting and perspective conversion on the first or second image when the first and second outlines are determined to include an outline of the same object, so that in a combined image generated by combining at least the first and second images, the first and second outlines are continuous at the same object.
Type: Application
Filed: February 26, 2015
Publication date: January 12, 2017
Applicant: KYOCERA Corporation
Inventors: Takeo OSHIMA, Takatoshi NAKATA
-
Publication number: 20170011540
Abstract: A system, method, and computer program product for efficiently reconstructing a pattern, such as a fingerprint, from a set of multiple impressions of portions of that pattern. The system may evaluate images of patterns taken from a series of multiple impressions and reconstruct the pattern from the image portions while providing the operator with real-time feedback of a status of the set of images. As each new image portion is evaluated, a display graphic or other indicator provides feedback when a new image portion is added to the reconstruction image, or when a new image portion is not added (such as when it represents a duplicate). Other status indications may be provided, and when the indication is visual, a degraded resolution of the pattern map may be provided on the display graphic to improve security.
Type: Application
Filed: July 5, 2016
Publication date: January 12, 2017
Applicant: IDEX ASA
Inventors: Roger A. Bauchspies, Sigmund Clausen, Arne Herman Falch
-
Publication number: 20170011541
Abstract: A method of creating animated content, which includes the steps of generating an element to be displayed on a mobile device, touch screen or desktop screen. The element includes details of at least one graphical resource together with the settings associated with the graphical resource. The method further includes the steps of adding scrolling and parallax-animation functionality to the generated element and generating computer code to create a parallax-animated display of the created content on the device.
Type: Application
Filed: July 9, 2015
Publication date: January 12, 2017
Inventor: Shahar NAOR
-
Publication number: 20170011542
Abstract: A method is presented for improving performance of generation of digitally represented graphics. The method comprises the steps of: selecting (440) a tile comprising fragments to process; executing (452) a culling program for the tile, the culling program being replaceable; and executing a set of instructions, selected from a plurality of sets of instructions based on an output value of the culling program, for each of a plurality of subsets of the fragments. A corresponding display adapter and computer program product are also presented.
Type: Application
Filed: August 31, 2016
Publication date: January 12, 2017
Inventors: Tomas G. Akenine-Moller, Jon N. Hasselgren
-
Publication number: 20170011543
Abstract: An importance map indicates, for each of a plurality of pixels, whether the pixel is considered important enough to be rendered. A hierarchical tree for pixels is created to generate a hierarchical importance map. The hierarchical importance map may be used to stop traversal of a primitive that does not overlap a pixel indicated to be important.
Type: Application
Filed: September 10, 2016
Publication date: January 12, 2017
Inventors: Rasmus Barringer, Tomas G. Akenine-Moller
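A toy hierarchical importance map and the early-out traversal it enables might look like the sketch below; the OR-reduced mip hierarchy and bounding-box test are illustrative, and power-of-two map sizes are assumed.

```python
# Minimal sketch: build a mip-style hierarchy over a per-pixel importance map
# by OR-ing 2x2 blocks, then reject a primitive's bounding box early when no
# pixel under a node is marked important.

def build_hierarchy(importance):
    """Level 0 is the per-pixel map; each coarser level ORs 2x2 blocks."""
    levels = [importance]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        n = len(prev) // 2
        levels.append([[prev[2*y][2*x] or prev[2*y][2*x+1] or
                        prev[2*y+1][2*x] or prev[2*y+1][2*x+1]
                        for x in range(n)] for y in range(n)])
    return levels

def overlaps_important(levels, bbox, level=None, ty=0, tx=0):
    """True if bbox (x0, y0, x1, y1), in level-0 pixels, touches any important
    pixel; traversal stops at unimportant or non-overlapping nodes."""
    if level is None:
        level = len(levels) - 1
    if not levels[level][ty][tx]:
        return False                      # nothing important under this node
    size = 2 ** level                     # node footprint in level-0 pixels
    x0, y0, x1, y1 = bbox
    if x1 < tx * size or x0 >= (tx + 1) * size or \
       y1 < ty * size or y0 >= (ty + 1) * size:
        return False                      # primitive does not overlap the node
    if level == 0:
        return True
    return any(overlaps_important(levels, bbox, level - 1, 2*ty + dy, 2*tx + dx)
               for dy in (0, 1) for dx in (0, 1))

if __name__ == "__main__":
    imp = [[0] * 4 for _ in range(4)]
    imp[3][3] = 1                                     # one important pixel
    levels = build_hierarchy(imp)
    print(overlaps_important(levels, (0, 0, 1, 1)))   # False: stops early
    print(overlaps_important(levels, (2, 2, 3, 3)))   # True
```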
-
Publication number: 20170011544
Abstract: In an aspect, an update unit can evaluate condition(s) in an update request and update one or more memory locations based on the condition evaluation. The update unit can operate atomically to determine whether to effect the update and to make the update. Updates can include one or more of incrementing and swapping values. An update request may specify one of a pre-determined set of update types. Some update types may be conditional and others unconditional. The update unit can be coupled to receive update requests from a plurality of computation units. The computation units may not have privileges to directly generate write requests to be effected on at least some of the locations in memory. The computation units can be fixed function circuitry operating on inputs received from programmable computation elements. The update unit may include a buffer to hold received update requests.
Type: Application
Filed: September 26, 2016
Publication date: January 12, 2017
Inventors: Steven J. Clohset, Jason R. Redgrave, Luke T. Peterson
-
Publication number: 20170011545
Abstract: Real-time light field reconstruction for defocus blur may be used to handle the case of simultaneous defocus and motion blur. By carefully introducing a few approximations, a very efficient sheared reconstruction filter is derived, which produces high quality images even for a very low number of input samples in some embodiments. The algorithm may be temporally robust, and is about two orders of magnitude faster than previous work, making it suitable for both real-time rendering and as a post-processing pass for high quality rendering in some embodiments.
Type: Application
Filed: September 10, 2016
Publication date: January 12, 2017
Inventors: Carl J. Munkberg, Karthik Vaidyanathan, Jon N. Hasselgren, Franz P. Clarberg, Tomas G. Akenine-Moller, Marco Salvi
-
Publication number: 20170011546
Abstract: Embodiments of the present disclosure are directed to methods and computer systems for converting datasets into three-dimensional ("3D") mesh surface visualization, displaying the mesh surface on a computer display, comparing two three-dimensional mesh surface structures by blending two different primary colors to create a secondary color, and computing the distance between two three-dimensional mesh surface structures converted from two closely-matched datasets. For qualitative analysis, the system includes a three-dimensional structure comparison control engine that is configured to convert a dataset with three-dimensional structure into three-dimensional surfaces with mesh surface visualization. The control engine is also configured to assign color and translucency values to the three-dimensional surface for the user to perform qualitative comparison analysis. For quantitative analysis, the control engine is configured to compute the distance field between two closely-matched datasets.
Type: Application
Filed: July 7, 2015
Publication date: January 12, 2017
Inventors: Janos ZATONYI, Marcin NOVOTNI, Patrik KUNZ
-
Publication number: 20170011547
Abstract: One embodiment is directed to a system for presenting views of a very large point data set, comprising: a storage system comprising data representing a point cloud comprising a very large number of associated points; a controller operatively coupled to the storage system and configured to automatically and deterministically organize the point data into an octree hierarchy of data sectors, each of which is representative of one or more of the points at a given octree mesh resolution; and a user interface through which a user may select a viewing perspective origin and vector, which may be utilized to command the controller to assemble an image based at least in part upon the selected origin and vector, the image comprising a plurality of data sectors pulled from the octree hierarchy.
Type: Application
Filed: September 19, 2016
Publication date: January 12, 2017
Applicant: Willow Garage Inc.
Inventors: Stuart Glaser, Wim Meeussen, Eitan Marder-Eppstein
-
Publication number: 20170011548
Abstract: A system and method processes data and implements geographic based queries to allow users to visualize 3-D representations or massings of a building considering various zoning parameters for a real estate parcel. The user can choose to output the resulting information in digital and/or print format and perform 3-D massing for any lot or combination of lots on a city block. Using stored and/or input data, the system calculates the viability of the property as a real estate development investment by calculating a discounted cash flow (DCF) and/or an internal rate of return (IRR) and/or other investment metric values.
Type: Application
Filed: September 20, 2016
Publication date: January 12, 2017
Applicant: SHoP Architects PC
Inventors: Todd Michael Sigaty, Eugene Jerome Pasquarelli, Gregg Andrew Pasquarelli, Timothy Michael Martone, Sarah Elizabeth Williams
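The two investment metrics named here are standard; the sketch below shows a discounted cash flow (net present value) and an internal rate of return found by bisection, with purely illustrative cash-flow figures.

```python
# Minimal sketch of the two metrics: discounted cash flow (NPV) and IRR.

def discounted_cash_flow(cash_flows, rate):
    """NPV of `cash_flows`, where cash_flows[0] occurs now (period 0)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def internal_rate_of_return(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Rate at which the DCF is zero, located by bisection (assumes the NPV
    changes sign between `lo` and `hi`)."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if discounted_cash_flow(cash_flows, mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

if __name__ == "__main__":
    flows = [-1_000_000, 250_000, 300_000, 350_000, 400_000]  # purchase, then rents
    print(round(discounted_cash_flow(flows, 0.08), 2))
    print(round(internal_rate_of_return(flows), 4))
```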
-
Publication number: 20170011549
Abstract: There are provided systems and methods for performing object deformation modeling. One example system includes a hardware processor, a system memory, and a contact-based deformation modeling software stored in the system memory. The hardware processor is configured to execute the contact-based deformation modeling software to receive a first object geometric data corresponding to a first virtual object and a second object geometric data corresponding to at least a second virtual object, and to transform the first object geometric data by an n-dimensional mapping onto an object deformation space determined based on n primitive deformations. The hardware processor is also configured to execute the contact-based deformation modeling software to model a deformation of the first virtual object due to contact with at least the second virtual object, based on the n-dimensional mapping and a definition of direction for an object-to-object contact force.
Type: Application
Filed: July 9, 2015
Publication date: January 12, 2017
Inventors: Dmitriy Pinskiy, Jose Luis Gomez Diaz, Nara Yun
-
Publication number: 20170011550
Abstract: A rendering method executed by a graphics processing unit includes: loading a vertex shading command from a first command queue to a shader module; executing the vertex shading command for computing the varying of the vertices to perform a vertex shading operation by taking the vertices as first input data; storing first tessellation stage commands into a second command queue; loading the first tessellation stage commands to the shader module; and executing the first tessellation commands for computing first tessellation stage outputs to perform a first tessellation stage of the one or more tessellation stages by taking the varying of the vertices as second input data. The vertex shading command is stored into the first command queue by a first processing unit. The varying of the vertices and the first tessellation stage outputs are stored in a cache of the graphics processing unit.
Type: Application
Filed: July 6, 2015
Publication date: January 12, 2017
Inventors: Pei-Kuei TSUNG, Shou-Jen LAI, Yan-Hong LU, Sung-Fang TSAI, Chien-Ping LU
-
Publication number: 20170011551
Abstract: Provided is a new method that creates a virtual garment from a single photograph of a real garment put on a mannequin. The method uses pattern drafting theory from the clothing field. The drafting process is abstracted into a computer module, which takes the garment type and primary body sizes and produces the draft as output. The problem is thus reduced to finding the garment type and primary body sizes. That information is found by analyzing the silhouette of the garment with respect to the mannequin. The method works robustly and produces practically usable virtual clothes that can be used for graphical coordination.
Type: Application
Filed: July 7, 2015
Publication date: January 12, 2017
Inventors: MoonHwan JEONG, Hyeong-Seok KO, Dong-Hoon Han
-
Publication number: 20170011552
Abstract: One embodiment involves receiving a fine mesh as input, the fine mesh representing a 3-Dimensional (3D) model and comprising fine mesh polygons. The embodiment further involves identifying, based on the fine mesh, near-planar regions represented by a coarse mesh of coarse mesh polygons, at least one of the near-planar regions corresponding to a plurality of the coarse mesh polygons. The embodiment further involves determining a deformation to deform the coarse mesh based on comparing normals between adjacent coarse mesh polygons. The deformation may involve reducing a first angle between coarse mesh polygons adjacent to one another in a same near-planar region. The deformation may additionally or alternatively involve increasing an angle between coarse mesh polygons adjacent to one another in different near-planar regions. The fine mesh can be deformed using the determined deformation.
Type: Application
Filed: September 20, 2016
Publication date: January 12, 2017
Inventors: Daniel Robert Goldman, Jan Jachnik, Linjie Luo
-
Publication number: 20170011553
Abstract: A system for tracking a first electronic device, such as a handheld smartphone, in a virtual reality environment generated by a second electronic device, such as a head mounted display, may include detection, by a camera included in one of the first electronic device or the second electronic device, of at least one visual marker included on the other of the first electronic device or the second electronic device. Features detected within the field of view corresponding to known features of the visual markers may be used to locate and track movement of the first electronic device relative to the second electronic device, so that movement of the second electronic device may be translated into an interaction in a virtual experience generated by the second electronic device.
Type: Application
Filed: June 27, 2016
Publication date: January 12, 2017
Inventors: Shiqi Chen, Zhaoyang Xu, Alexander James Faaborg
-
Publication number: 20170011554
Abstract: A system for dynamic spectating includes a first virtual environment display that displays a first perspective of a virtual environment to a spectator; a status overlay that displays information about an event occurring within the virtual environment; and a virtual camera manager that controls the position and orientation of the first perspective within the virtual environment.
Type: Application
Filed: June 30, 2016
Publication date: January 12, 2017
Inventors: Nathan Burba, James Iliff
-
Publication number: 20170011555
Abstract: A head-mounted display device includes an image display section configured to display an image, an imaging section, a memory section configured to store data of a marker image, an image setting section configured to cause the image display section to display an image based at least on the data, and a parameter setting section. The parameter setting section derives at least one of camera parameters of the imaging section and a spatial relationship, the spatial relationship being between the imaging section and the image display section, based at least on an image that is captured by the imaging section in the case where the imaging section acquires a captured image of a real marker.
Type: Application
Filed: June 20, 2016
Publication date: January 12, 2017
Applicant: SEIKO EPSON CORPORATION
Inventors: Jia LI, Guoyi FU
-
Publication number: 20170011556
Abstract: The apparatus draws virtual objects as an image from a predetermined point of view, determines whether the virtual objects interfere with each other, calculates the region of interference of the virtual objects determined as interfering with each other, and outputs an image in which the region of interference located behind the virtual objects as seen from the point of view is drawn.
Type: Application
Filed: July 1, 2016
Publication date: January 12, 2017
Inventors: Masayuki Hayashi, Kazuki Takemoto
-
Publication number: 20170011557
Abstract: In an embodiment, an electronic device performs a method for outputting content. In this method, the electronic device detects a selection of content by a user and ascertains a reference factor corresponding to the content. Then the electronic device determines a display mode corresponding to the reference factor and outputs the content, based on the display mode. Other embodiments are possible.
Type: Application
Filed: July 6, 2016
Publication date: January 12, 2017
Inventors: Olivia Lee, Seungmyung Lee, Jueun Lee, James Powderly, Jinmi Choi
-
Publication number: 20170011558
Abstract: The invention relates to a method for representing a virtual object in a real environment, having the following steps: generating a two-dimensional representation of a real environment by means of a recording device, ascertaining a position of the recording device relative to at least one component of the real environment, segmenting at least one area of the real environment in the two-dimensional image on the basis of non-manually generated 3D information for identifying at least one segment of the real environment in distinction to a remaining part of the real environment while supplying corresponding segmentation data, and merging the two-dimensional image of the real environment with the virtual object or, by means of an optical, semitransparent element directly with reality with consideration of the segmentation data. The invention permits any collisions of virtual objects with real objects that occur upon merging with a real environment to be represented in a way largely close to reality.
Type: Application
Filed: July 8, 2016
Publication date: January 12, 2017
Inventors: Peter Meier, Stefan Holzer