Patent Applications Published on February 14, 2019
  • Publication number: 20190050996
    Abstract: Methods, apparatus, systems and articles of manufacture to generate temporal representations for action recognition systems are disclosed. An example apparatus includes an optical flow computer to compute a first optical flow based on first and second video frames separated by a first amount of time and to compute a second optical flow based on third and fourth video frames separated by a second amount of time different from the first amount of time, and an aggregator to combine the first optical flow and the second optical flow to form an image representing action in a video.
    Type: Application
    Filed: July 27, 2018
    Publication date: February 14, 2019
    Inventors: Sherine Abdelhak, Neelay Pandit
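The abstract above leaves the combination step open; a minimal sketch, assuming the aggregator simply averages the two flow fields per pixel (the function name and averaging rule are illustrative, not from the patent):

```python
import numpy as np

def aggregate_flows(flow_short, flow_long):
    """Combine two optical-flow fields (H x W x 2 arrays of dx, dy)
    computed over different time gaps into one action image.
    Per-pixel averaging is an assumption; the abstract only says
    the two flows are combined."""
    return np.stack([flow_short, flow_long]).mean(axis=0)

# Toy flows: a short-gap flow and a longer-gap flow over the same scene.
flow_short = np.zeros((4, 4, 2)); flow_short[..., 0] = 1.0  # small rightward motion
flow_long = np.zeros((4, 4, 2)); flow_long[..., 0] = 3.0    # larger displacement
action_img = aggregate_flows(flow_short, flow_long)
```

Other aggregation rules (stacking the flows as channels, taking a max) would fit the abstract equally well.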
  • Publication number: 20190050997
    Abstract: A visual odometry device, including: an image sensor configured to provide a first image and a second image; a visual feature extractor configured to extract at least three visual features corresponding to each of the first image and the second image; and a position determiner configured to determine a change of a position of the at least three visual features between the first image and the second image, and to determine a degree of translation of the visual odometry device based on the determined change of position.
    Type: Application
    Filed: June 29, 2018
    Publication date: February 14, 2019
    Inventors: Kay-Ulrich Scholl, Koba Natroshvili
  • Publication number: 20190050998
    Abstract: A method and system for acquiring dense 3D depth maps and scene flow using a plurality of image sensors, each image sensor associated with an optical flow processor, the optical flow fields being aligned to find dense image correspondences. The disparity and/or ratio of detected optical flows in corresponding pixels combined with the parameters of the two optical paths and the baseline between the image sensors is used to compute dense depth maps and scene flow.
    Type: Application
    Filed: August 10, 2017
    Publication date: February 14, 2019
    Inventor: Richard Kirby
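The abstract's mention of disparity, optical-path parameters and a baseline suggests the standard pinhole-stereo depth relation; a minimal sketch under that assumption (the exact computation in the patent may differ):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic stereo relation Z = f * B / d: depth is focal length
    (in pixels) times baseline (in metres) over disparity (in pixels).
    Treating the patent's computation as this plain form is an assumption."""
    return focal_px * baseline_m / disparity_px

# A 20 px disparity seen by 700 px focal-length cameras 10 cm apart:
z = depth_from_disparity(disparity_px=20.0, focal_px=700.0, baseline_m=0.1)
```

Applying this per corresponding pixel pair, as the abstract describes, yields a dense depth map.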
  • Publication number: 20190050999
    Abstract: A method and system for 3D/3D medical image registration. A digitally reconstructed radiograph (DRR) is rendered from a 3D medical volume based on current transformation parameters. A trained multi-agent deep neural network (DNN) is applied to a plurality of regions of interest (ROIs) in the DRR and a 2D medical image. The trained multi-agent DNN applies a respective agent to each ROI to calculate a respective set of action-values from each ROI. A maximum action-value and a proposed action associated with the maximum action value are determined for each agent. A subset of agents is selected based on the maximum action-values determined for the agents. The proposed actions determined for the selected subset of agents are aggregated to determine an optimal adjustment to the transformation parameters and the transformation parameters are adjusted by the determined optimal adjustment.
    Type: Application
    Filed: August 14, 2018
    Publication date: February 14, 2019
    Inventors: Sébastien Piat, Shun Miao, Rui Liao, Tommaso Mansi, Jiannan Zheng
  • Publication number: 20190051000
    Abstract: Aspects of the disclosure generally relate to determining the location and orientation of panoramic images by a computing apparatus. One or more computing devices may receive alignment data between a first panoramic image and second panoramic image and original location data for the first panoramic image and the second panoramic image. The one or more computing devices may determine relative orientations between the pair of panoramic images based on the alignment data and calculate a heading from the first panoramic image to the second panoramic image based on the original location data. The location data and alignment data may be optimized by the one or more computing devices based on the relative orientations between the pair of panoramic images and the original location data. The one or more computing devices may replace the original location data and relative orientations with the optimized relative orientations and optimized location data.
    Type: Application
    Filed: October 28, 2016
    Publication date: February 14, 2019
    Inventors: Alan Sheridan, Charles Armstrong
  • Publication number: 20190051001
    Abstract: Aspects of the disclosure generally relate to connecting panoramic images. One or more computing devices may load and display a first panoramic image captured at a first location and receive a selection of an area on the first panoramic image, the area corresponding to where a connection to other panoramic images may be made. The one or more computing devices may identify and display one or more nearby panoramic images which were captured near the first location and receive a selection of one of the one or more nearby panoramic images. The one or more computing devices may display the selected panoramic image and the first panoramic image and align the selected panoramic image with the first panoramic image such that the selected panoramic image is oriented in the same direction as the first panoramic image. The one or more computing devices may connect the selected panoramic image with the first panoramic image.
    Type: Application
    Filed: October 27, 2016
    Publication date: February 14, 2019
    Inventors: Alan Sheridan, Scott Benjamin Satkin, Vivek Verma
  • Publication number: 20190051002
    Abstract: The present invention relates to a method and a device for medical imaging of coronary vessels, the device comprising: a data extracting module configured to extract a first vessel map from computed tomography angiography data covering at least one reference cardiac phase and a set of second vessel maps from three-dimensional rotational angiography data covering at least one cardiac cycle; an interpolation module configured to generate a series of warped versions of the first vessel map aligned with the set of second vessel maps, the series starting at the at least one reference cardiac phase; and a merging module configured to merge the series and the set of second vessel maps at the different phases in order to generate a final imaging map of the coronary vessels.
    Type: Application
    Filed: October 18, 2018
    Publication date: February 14, 2019
    Inventors: Vincent Maurice André AUVRAY, Raoul FLORENT, Pierre Henri LELONG
  • Publication number: 20190051003
    Abstract: A system determines spatial locations of pixels of an image. The system includes a processor configured to: receive location data from devices located within a hotspot; generate a density map for the hotspot including density pixels associated with spatial locations defined by the location data, each density pixel having a value indicating an amount of location data received from an associated spatial location; match the density pixels of the density map to at least a portion of the pixels of the image; and determine spatial locations of the at least a portion of the pixels of the image based on the spatial locations of the matching density pixels of the density map. In some embodiments, the image and density map are converted to edge maps, and a convolution is applied to the edge maps to match the density map to the pixels of the image.
    Type: Application
    Filed: August 11, 2017
    Publication date: February 14, 2019
    Inventor: Damon Burgett
  • Publication number: 20190051004
    Abstract: Various embodiments can measure a distance to an object. To achieve this, a first light source and a second light source can be configured to emit a first light and a second light toward the object to illuminate it. The emission of the first light and the second light can be configured such that the two lights converge at a first point and diverge at a second point. An optical sensor can be used to capture a first image of the object illuminated by the first light, and a second image of the object illuminated by the second light. An image difference between the first image and the second image of the object can be determined. The distance of the object with respect to the first point can then be determined based on the image difference and the distance difference between the first point and the second point.
    Type: Application
    Filed: August 13, 2017
    Publication date: February 14, 2019
    Inventors: Yi He, Bo Pi
  • Publication number: 20190051005
    Abstract: An image depth sensing method adapted to obtain depth information within a field of view by an image depth sensing apparatus is provided. The method includes the following steps: determining whether the field of view includes a distant object with a depth greater than a distance threshold; in response to determining that the field of view does not include the distant object, obtaining the depth information within the field of view according to a general mode; and in response to determining that the field of view includes the distant object, obtaining the depth information within the field of view according to an enhanced mode. A maximum depth which can be detected in the general mode is not greater than the distance threshold, and a maximum depth which can be detected in the enhanced mode is greater than the distance threshold. In addition, an image depth sensing apparatus is also provided.
    Type: Application
    Filed: November 1, 2017
    Publication date: February 14, 2019
    Applicant: Wistron Corporation
    Inventor: Yao-Tsung Chang
  • Publication number: 20190051006
    Abstract: Machine vision processing includes capturing 3D spatial data representing a field of view and including ranging measurements to various points within the field of view, applying a segmentation algorithm to the 3D spatial data to produce a segmentation assessment indicating a presence of individual objects within the field of view, wherein the segmentation algorithm is based on at least one adjustable parameter, and adjusting a value of the at least one adjustable parameter based on the ranging measurements. The segmentation assessment is based on application of the segmentation algorithm to the 3D spatial data, with different values of the at least one adjustable parameter value corresponding to different values of the ranging measurements of the various points within the field of view.
    Type: Application
    Filed: December 21, 2017
    Publication date: February 14, 2019
    Inventors: Rita Chattopadhyay, Monica Lucia Martinez-Canales, Vinod Sharma
  • Publication number: 20190051007
    Abstract: Methods and apparatus to reduce a depth map size for use in a collision avoidance system are described herein. Examples described herein may be implemented in an unmanned aerial vehicle. An example unmanned aerial vehicle includes a depth sensor to generate a first depth map. The first depth map includes a plurality of pixels having respective distance values. The unmanned aerial vehicle also includes a depth map modifier to divide the plurality of pixels into blocks of pixels and generate a second depth map having fewer pixels than the first depth map based on distance values of the pixels in the blocks of pixels. The unmanned aerial vehicle further includes a collision avoidance system to analyze the second depth map.
    Type: Application
    Filed: December 21, 2017
    Publication date: February 14, 2019
    Inventors: Daniel Pohl, Markus Achtelik
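The block-wise depth-map reduction above can be sketched directly; taking the minimum (nearest) distance per block is an assumption here, chosen because it is the conservative value for collision avoidance, while the abstract only says the smaller map is "based on" the blocks' distance values:

```python
import numpy as np

def reduce_depth_map(depth, block=4):
    """Shrink a depth map by keeping the minimum (nearest) distance in
    each block x block tile. The min reduction is an assumption; the
    abstract leaves the per-block rule open."""
    h, w = depth.shape
    h2, w2 = h // block, w // block
    tiles = depth[:h2 * block, :w2 * block].reshape(h2, block, w2, block)
    return tiles.min(axis=(1, 3))

depth = np.full((8, 8), 10.0)   # 8x8 map, everything 10 m away
depth[2, 5] = 1.5               # one nearby obstacle pixel
small = reduce_depth_map(depth, block=4)  # 2x2 map for the avoidance system
```

The collision-avoidance system then analyzes the 2x2 map instead of the full-resolution one.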
  • Publication number: 20190051008
    Abstract: A method of showing a visual measurement (circumference, width, or length) of parts or areas of a human body or object. A user takes a picture of his or her current body or object and then selects the area on the picture that they wish to measure. The user then views the circumference, width, or length of the selected area. The photograph of the human body or object is captured with a mobile camera or web camera.
    Type: Application
    Filed: August 14, 2017
    Publication date: February 14, 2019
    Inventor: Deborah Jane Barker
  • Publication number: 20190051009
    Abstract: Methods and apparatus for monitoring a customer premises, e.g., using video cameras are described. Abnormal conditions are detected and responded to in an automated manner. Objects of interest are identified and current positions of the objects are detected over time based on captured images of the monitored area. If a set of predetermined action conditions corresponding to an object are satisfied, the monitoring system takes an action to correct, e.g., automatically correct, the detected problem and/or the monitoring system generates an alert. Correcting the problem may and sometimes does include automatically shutting off a valve, e.g., a gas or water valve, and/or using a motor to close a door.
    Type: Application
    Filed: August 14, 2017
    Publication date: February 14, 2019
    Inventor: Mark Reimer
  • Publication number: 20190051010
    Abstract: The present invention discloses a spatial positioning device, and a positioning processing method and device. The spatial positioning device comprises a set of cameras arranged horizontally and a set of cameras arranged vertically, wherein each set comprises at least two cameras with the same parameters, including an image resolution, a camera lens angle in the horizontal direction and a camera lens angle in the vertical direction; the at least two cameras in the set of cameras arranged horizontally are aligned in the horizontal direction, and the at least two cameras in the set of cameras arranged vertically are aligned in the vertical direction. In the spatial positioning device provided by the present invention, because the sets of cameras are arranged in different directions, it is possible to effectively reduce the number of blind spots, or even eliminate them, in the process of image shooting in a single direction.
    Type: Application
    Filed: August 11, 2017
    Publication date: February 14, 2019
    Inventors: Jian Zhu, Xiangdong Zhang, Zhuo Chen, Zhiping Luo, Dong Yan
  • Publication number: 20190051011
    Abstract: Detection and analysis of a tangible component in a sample are implemented at lower cost. Provided is an analysis apparatus including a flow cell which includes a flow path for a sample, a branch section configured to cause light having passed through the flow path to branch at least to a first optical path and a second optical path, a first imaging section and a second imaging section configured to capture images of the sample in the flow path by using the light in the first optical path and the light in the second optical path, and a controller configured to process the captured images. The first imaging section and the second imaging section capture images that have the same angle of view but have different characteristics.
    Type: Application
    Filed: August 8, 2018
    Publication date: February 14, 2019
    Applicant: ARKRAY, Inc.
    Inventors: Shigeki MASUDA, Yukio WATANABE
  • Publication number: 20190051012
    Abstract: A method for locating a printing substrate moving on a conveyor surface. The method includes i) providing the printing substrate moving on a conveyor surface at a selectable speed and in a feed direction, ii) providing an illumination means configured to emit a light beam incident on the conveyor surface according to a predetermined angle, iii) acquiring a predetermined plurality of lines of the substrate, as a function of a line frequency defined as a function of an acquisition rate, iv) generating a primary image as a function of the predetermined plurality of lines, v) detecting, from the primary image, points representative of the substrate, and vi) calculating location coordinates of the substrate relative to the first predefined reference as a function of the plurality of representative points.
    Type: Application
    Filed: March 3, 2017
    Publication date: February 14, 2019
    Applicant: SYSTEM S.P.A.
    Inventors: Simone GIARDINO, Federico CAVALLINI, Franco STEFANI, Matteo RUBBIANI, Giuliano PISTONI
  • Publication number: 20190051013
    Abstract: An approach is provided for an asymmetric evaluation of polygon similarity. The approach, for instance, involves receiving a first polygon representing an object depicted in an image. The approach also involves generating a transformation of the image comprising image elements whose values are based on a respective distance that each image element is from a nearest image element located on a first boundary of the first polygon. The approach further involves determining a subset of the plurality of image elements of the transformation that intersect with a second boundary of a second polygon. The approach further involves calculating a polygon similarity of the second polygon with respect to the first polygon based on the values of the subset of image elements normalized to a length of the second boundary of the second polygon.
    Type: Application
    Filed: August 10, 2017
    Publication date: February 14, 2019
    Inventors: Richard KWANT, Anish MITTAL, David LAWLOR
  • Publication number: 20190051014
    Abstract: A camera is oriented at a workspace by comparing a three-dimensional model of the workspace to an image. A user provides an initial estimation of camera location. A feature of the three-dimensional model is projected onto the image. The feature of the three-dimensional model is compared to a corresponding feature in the image. A position and orientation of the camera are calculated by comparing the feature of the three-dimensional model to the corresponding feature in the image.
    Type: Application
    Filed: August 14, 2017
    Publication date: February 14, 2019
    Inventors: Kent Kahle, Young Jin Lee
  • Publication number: 20190051015
    Abstract: In one example a management system for an autonomous vehicle, comprises a first image sensor to collect first image data in a first geographic region proximate the autonomous vehicle and a second image sensor to collect second image data in a second geographic region proximate the first geographic region and a controller communicatively coupled to the first image sensor and the second image sensor and comprising processing circuitry to collect the first image data from the first image sensor and second image data from the second image sensor, generate a first reliability index for the first image sensor and a second reliability index for the second image sensor, and determine a correlation between the first image data and the second image data. Other examples may be described.
    Type: Application
    Filed: January 12, 2018
    Publication date: February 14, 2019
    Applicant: Intel Corporation
    Inventors: David Gonzalez Aguirre, Omar Florez, Julio Zamora Esquivel, Mahesh Subedar, Javier Felip Leon, Rebecca Chierichetti, Andrea Johnson, Glen Anderson
  • Publication number: 20190051016
    Abstract: A method of image processing is provided. The method may include: determining candidate tuples from at least two images that are taken at different times, wherein the candidate tuples are determined using at least odometry sensor information. The pair of subsequent images is detected by a moving image sensor moved by a vehicle. The odometry sensor information is detected by a sensor moved by the vehicle. The method may further include classifying each candidate tuple as a static tuple or a dynamic tuple. A static tuple represents a static object within the pair of subsequent images, and a dynamic tuple represents a moving object within the pair of subsequent images.
    Type: Application
    Filed: December 27, 2017
    Publication date: February 14, 2019
    Inventors: Koba NATROSHVILI, Okan KÖSE
  • Publication number: 20190051017
    Abstract: Methods and apparatus relating to image-based compression of Light Detection And Ranging (LIDAR) sensor data with point re-ordering are described. In an embodiment, logic receives distance sensor data and converts the received distance sensor data to point cloud data. The point cloud data corresponds to a set of points in a three dimensional (3D) space. The logic circuitry packs/organizes the converted point cloud data into one or more two dimensional (2D) arrays. Data stored in the one or more 2D arrays are compressed to generate a compressed version of the point cloud data. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: June 26, 2018
    Publication date: February 14, 2019
    Applicant: Intel Corporation
    Inventor: Petrus van Beek
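The packing of point-cloud data into 2D arrays described above can be illustrated with a spherical-projection range image; indexing by azimuth and elevation is one plausible layout (an assumption here, and this sketch omits the patent's point re-ordering):

```python
import math

def to_range_image(points, n_az=8, n_el=4):
    """Organize (x, y, z) points into a 2D grid indexed by azimuth
    (columns) and elevation (rows), storing range. Spherical indexing
    is an assumed layout; the abstract only requires 2D arrays."""
    grid = [[0.0] * n_az for _ in range(n_el)]
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        az = int((math.atan2(y, x) + math.pi) / (2 * math.pi) * n_az) % n_az
        el_frac = (math.asin(z / r) + math.pi / 2) / math.pi if r else 0.0
        el = min(int(el_frac * n_el), n_el - 1)
        grid[el][az] = r
    return grid

img = to_range_image([(3.0, 0.0, 4.0), (0.0, 5.0, 0.0)])
```

Once the cloud is organized this way, standard 2D image compression can be applied to the array, which is the point of the patent's pipeline.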
  • Publication number: 20190051018
    Abstract: Provided is a method in which a web client receives a whole slide image (WSI), whose image compression format and tile size differ depending on the digital pathology vendor, from a digital pathology server in a streaming manner. The method includes a WSI acquisition operation of acquiring the WSI from the digital pathology server. The WSI acquisition operation includes a normalized tile definition operation of defining a normalized tile having a minimized time cost, a determination operation of comparing a tile with the normalized tile, and a conversion operation of optimizing the tile with the normalized tile.
    Type: Application
    Filed: August 24, 2017
    Publication date: February 14, 2019
    Inventor: Man Won HWANG
  • Publication number: 20190051019
    Abstract: There is provided a display control device including an image acquiring section configured to acquire a moving image shot from a viewpoint changing from moment to moment, a spatial position specifying section configured to specify a spatial position in a first frame of the moving image, and a display control section configured to display the moving image, in such a manner to maintain the spatial position in a predetermined state in a second frame after the first frame.
    Type: Application
    Filed: October 10, 2018
    Publication date: February 14, 2019
    Applicant: SONY CORPORATION
    Inventors: Shunichi KASAHARA, Junichi REKIMOTO
  • Publication number: 20190051020
    Abstract: This disclosure describes examples for generating image content based on both a color value and a dither value that is to be applied. When a color value for the current pixel is the same as the color value for a previous pixel, and a dither value that is to be applied to the current pixel is the same as the dither value that was added to the previous pixel, a display processor may output the output color value for the previous pixel as the output color value for the current pixel.
    Type: Application
    Filed: August 14, 2017
    Publication date: February 14, 2019
    Inventors: Sreekanth Modaikkal, Anitha Madugiri Siddaraju
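The caching idea in the abstract above (reuse the previous output when both the color and the dither value repeat) is simple enough to sketch; the scan order and the dither function here are illustrative assumptions:

```python
def dither_pixels(pixels, dithers, apply_dither):
    """Apply dithering with reuse: when both the color value and the
    dither value match the previous pixel's, re-emit the previous output
    instead of recomputing, as the abstract describes."""
    out = []
    prev = None  # (color, dither, result) of the last computed pixel
    for c, d in zip(pixels, dithers):
        if prev and prev[0] == c and prev[1] == d:
            out.append(prev[2])          # cache hit: skip the recompute
        else:
            r = apply_dither(c, d)
            out.append(r)
            prev = (c, d, r)
    return out

# Toy dither function (additive) on a run of repeated pixels:
result = dither_pixels([10, 10, 12], [1, 1, 1], lambda c, d: c + d)
```

The saving comes from runs of identical pixels, which are common in UI layers and flat image regions.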
  • Publication number: 20190051021
    Abstract: An information processing apparatus includes an acquisition unit, an extraction unit, a receiving unit, and a specifying unit. The acquisition unit acquires an image. The extraction unit extracts a representative color which is a color representative of the image acquired by the acquisition unit. The receiving unit receives a designated word indicating emotion. The specifying unit specifies a color scheme to be applied to an image which is generated by a user, by using the word and the representative color.
    Type: Application
    Filed: March 26, 2018
    Publication date: February 14, 2019
    Applicant: FUJI XEROX CO., LTD.
    Inventors: Qianru QIU, Kengo OMURA
  • Publication number: 20190051022
    Abstract: To reduce the risk that the tone of a specific color component of a subject observed during a medical operation is defective.
    Type: Application
    Filed: January 10, 2017
    Publication date: February 14, 2019
    Applicant: SONY CORPORATION
    Inventors: Daisuke KIKUCHI, Takami MIZUKURA, Yasuaki TAKAHASHI, Koji KASHIMA
  • Publication number: 20190051023
    Abstract: A method and system for obtaining images of an object of interest using a system comprising an X-ray source facing a detector. The method and system enable acquiring a plurality of 2D projection images of the object of interest in a plurality of orientations. A selected 2D projection image, such as the zero projection of the plurality of projections, can be enhanced by using at least a subset of the plurality of tomosynthesis projection images. The resulting enhanced 2D projection image is displayed for review.
    Type: Application
    Filed: October 8, 2018
    Publication date: February 14, 2019
    Inventor: Sylvain Bernard
  • Publication number: 20190051024
    Abstract: A method of processing image data for a device with low processing power comprises collecting data, performing a statistical operation on the collected data with a first data collection section as a first processing unit to obtain a plurality of first statistical data for the first data collection section, and performing the statistical operation on at least two of the plurality of first statistical data with a second data collection section as a second processing unit to obtain at least one second statistical data for the second data collection section, wherein the second data collection section is the accumulation of at least two first data collection sections.
    Type: Application
    Filed: May 23, 2018
    Publication date: February 14, 2019
    Inventors: Andy Ho, Tsung-Han Yang, Szu-Chieh Wang, Jian-Chi Lin, JASON HSIAO
  • Publication number: 20190051025
    Abstract: A method for displaying a graphical trace on a display device comprises: (a) determining a number of data points, np, of the trace; (b) determining a number of data-display-device pixels, nx; (c) partitioning the abscissa variable into nx equal-width bins; (d) selecting, within each bin, three data points consisting of: a data point having the least value of the variable X, a different data point of the bin having the greatest value of the variable, Y and a yet different data point having the least value of the variable Y; and (e) displaying a graphical trace of the 3nx selected points using an existing display, printing or plotting algorithm. Alternatively, the step (c) comprises partitioning the abscissa variable into nr equal-width bins where nr=(nx/fr) and fr is a settable reduce factor greater than zero.
    Type: Application
    Filed: August 3, 2018
    Publication date: February 14, 2019
    Inventor: Ming LIU
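The decimation scheme in the abstract above is fully specified: bin the abscissa into equal-width bins and keep at most three points per bin (least X, greatest Y, least Y). A minimal sketch (helper name and tie handling are illustrative):

```python
def decimate_trace(points, nx):
    """Reduce an (x, y) trace to at most 3 points per equal-width
    abscissa bin: the point with the least X, the point with the
    greatest Y, and the point with the least Y, per the abstract."""
    xs = [p[0] for p in points]
    x0, x1 = min(xs), max(xs)
    width = (x1 - x0) / nx or 1.0   # guard against a zero-width range
    bins = [[] for _ in range(nx)]
    for p in points:
        i = min(int((p[0] - x0) / width), nx - 1)
        bins[i].append(p)
    out = []
    for b in bins:
        if not b:
            continue
        keep = {min(b, key=lambda p: p[0]),   # least X
                max(b, key=lambda p: p[1]),   # greatest Y
                min(b, key=lambda p: p[1])}   # least Y
        out.extend(sorted(keep))
    return out

reduced = decimate_trace([(0, 5), (1, 9), (2, 1), (3, 4), (10, 7)], nx=2)
```

Keeping the per-bin extrema preserves the visual envelope of the trace while bounding the plotted point count at 3nx.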
  • Publication number: 20190051026
    Abstract: Examples described herein generally relate to rendering graphics in a computing device. A processing over-budget condition related to rendering a frame can be detected, based on which a value of a rendering parameter for a layer, where the layer is one of multiple layers to render for the frame, can be modified. The layer can be rendered based at least in part on the value of the rendering parameter while one or more other layers of the multiple layers can be rendered based on respective values for the rendering parameter. The value of the rendering parameter for the layer can be different from at least one of the respective values of the rendering parameter for the one or more other layers.
    Type: Application
    Filed: August 11, 2017
    Publication date: February 14, 2019
    Inventors: Andrew Zicheng YEUNG, Jack Andrew ELLIOTT, Brent Michael WILSON, Michael George BOULTON
  • Publication number: 20190051027
    Abstract: One or more systems, devices, and/or methods for generating a map including path side data include storing path side data referenced to three-dimensional geographic coordinates. The path side data may be optical data or optical data modified based on one or more panoramic images. The path side data is combined with map data received from a map database. The map data includes nodes and segments. A processor rotates the path side data based on one of the segments. The rotation may be about the segment or about a feature identified in the optical data. The path side data overlaid on the map data is outputted to a display, a file, or another device.
    Type: Application
    Filed: January 9, 2017
    Publication date: February 14, 2019
    Inventor: James Lynch
  • Publication number: 20190051028
    Abstract: A processor-implemented method and system of this disclosure are configured to correct or resolve artifacts in a received digital image, using a selection of one or more encompassment measures, each encompassing the largest artifact and an optional second smaller artifact. The processor is configured to calculate differences of Gaussians using blurred versions of the input digital image and, optionally, the input digital image itself; to composite the resulting differences of Gaussians and the input digital image; and to determine pixels with properties of invariant values, also referred to as invariant pixels. The values in the properties of the invariant pixels are then applied to the artifact regions to correct them, thereby generating a modified digital image in which the artifact regions are less differentiable to a human eye than in the original digital image.
    Type: Application
    Filed: January 24, 2018
    Publication date: February 14, 2019
    Inventor: David Sarma
  • Publication number: 20190051029
    Abstract: Provided are methods, systems, and devices for generating annotations in images that can include receiving image data including images associated with locations. The images can include key images comprising one or more key annotations located at one or more key annotation locations in the one or more key images. At least one image and a pair of the key images that satisfies one or more annotation criteria can be selected based in part on one or more spatial relationships of the plurality of locations associated with the images. An annotation location for an annotation in the image can be determined based in part on the one or more key annotation locations of the one or more key annotations in the pair of the key images that satisfies the one or more annotation criteria. An annotation can be generated at the annotation location of the image.
    Type: Application
    Filed: June 13, 2018
    Publication date: February 14, 2019
    Inventor: Joshua Sam Schpok
  • Publication number: 20190051030
    Abstract: An electronic device that is provided in a vehicle, including an interface unit configured to electrically connect to a first camera and a second camera; and a processor configured to receive, via the interface unit, a forward view image including an object from the first camera; receive, via the interface unit, information about the object from the second camera; convert the information about the object from a coordinate system of the second camera into a coordinate system of the first camera; generate an augmented reality (AR) graphic object corresponding to the object using the converted information; and display the AR graphic object overlaid on the forward view image.
    Type: Application
    Filed: August 8, 2018
    Publication date: February 14, 2019
    Applicant: LG ELECTRONICS INC.
    Inventors: Sunghwan CHOI, Ilwan KIM, Jaeho LEE
  • Publication number: 20190051031
    Abstract: A system that combines augmented reality (AR) technology with self-created elements to produce video works and a media storing the same are revealed. The system includes a data module used for storing drawing templates and scenes, a video input module that reads a hand-drawn image of a picture book and defines a hand-drawn border, a recognition and analysis module that compares the drawing template with the hand-drawn border to get drawn content, a voice input module that reads a speech to generate voice content, and an integration module that integrates the drawn content, the voice content and the scene for generating a self-created AR work. Thereby users can use the system to create AR video works with self-created elements in a real-time manner.
    Type: Application
    Filed: August 13, 2018
    Publication date: February 14, 2019
    Inventors: Pei-Wei CHYAU, Shih-Yun CHIU, Yu-Chun LIN
  • Publication number: 20190051032
    Abstract: A system for generating an animated life story of a person is disclosed. The system may capture an image of the person's face and generate a computer-animated simulation of the person's face. The computer-animated simulation of the person's face may be superimposed upon a computer-generated image based on personal historical data of the person so that a computer-generated life story of the person, from an earlier period of time to the present, may be generated as a movie or slideshow.
    Type: Application
    Filed: February 24, 2017
    Publication date: February 14, 2019
    Inventors: Ting Chu, Jiancheng Xu
  • Publication number: 20190051033
    Abstract: The disclosure relates to systems and methods for differentiating, for a user, closer objects from other objects by reducing the radiation reflection of objects beyond a certain distance to below the detection limit of the sensor used to detect the reflection.
    Type: Application
    Filed: February 22, 2017
    Publication date: February 14, 2019
    Applicant: SUPERB REALITY LTD.
    Inventor: Eran Eilat
  • Publication number: 20190051034
    Abstract: Disclosed are methods, apparatuses, and systems directed to using viewport state data objects (VSDOs) to render a series of video frames according to render instructions to achieve video compression. In a particular implementation, the video compression format exposes the VSDO and render instructions to a video render client, allowing the video render client to finish rendering a sequence of video frames from different spatial locations and view transform parameters.
    Type: Application
    Filed: October 16, 2018
    Publication date: February 14, 2019
    Inventor: Julian Michael URBACH
  • Publication number: 20190051035
    Abstract: An image processing apparatus is provided. An image obtaining unit obtains images acquired by capturing a target area from a plurality of directions with a plurality of cameras. An information obtaining unit obtains viewpoint information indicating a position of a virtual viewpoint. A setting unit sets, based on a reference position within the target area and the viewpoint information obtained by the information obtaining unit, a parameter relating to a resolution of an object within the target area. A generating unit generates, based on the images obtained by the image obtaining unit and the viewpoint information obtained by the information obtaining unit, a virtual viewpoint image that includes an image of the object with the resolution according to the parameter set by the setting unit.
    Type: Application
    Filed: August 7, 2018
    Publication date: February 14, 2019
    Inventor: Tomohiro Nishiyama
  • Publication number: 20190051036
    Abstract: Provided is a three-dimensional reconstruction method of reconstructing a three-dimensional model from multi-view images. The method includes: selecting two frames from the multi-view images; calculating image information of each of the two frames; selecting a method of calculating corresponding keypoints in the two frames according to the image information; and calculating the corresponding keypoints using the selected method.
    Type: Application
    Filed: October 17, 2018
    Publication date: February 14, 2019
    Inventors: Toru MATSUNOBU, Toshiyasu SUGIO, Satoshi YOSHIKAWA, Tatsuya KOYAMA, Pongsak LASANG, Jian GAO
  • Publication number: 20190051037
    Abstract: Two-dimensional compositing that preserves the curvatures of non-flat surfaces is disclosed. In some embodiments, a mapping is associated with a two-dimensional rendering that maps a potentially variable portion of the two-dimensional rendering to a canvas. The mapping is generated from a three-dimensional model of the potentially variable portion of the two-dimensional rendering. The potentially variable portion of the two-dimensional rendering is dynamically modified according to the mapping to reflect content comprising the canvas or edits received with respect to the canvas.
    Type: Application
    Filed: August 10, 2017
    Publication date: February 14, 2019
    Inventors: Clarence Chui, Christopher Murphy
  • Publication number: 20190051038
    Abstract: A lighting visualization system and methods for visualizing lighting scenarios for an object are provided. The system includes a graphic user interface for displaying a rendered image of the object, the rendered image representing a selected lighting scenario for the object. The system includes a control panel for indicating a value of parameters associated with the selected lighting scenario, each parameter being associated with at least one light source. The control panel includes a means for adjusting at least one parameter associated with at least one light source, thereby changing the selected lighting scenario. Upon changing the selected lighting scenario, the rendered image is modified and/or replaced. Each rendered image is rendered using a three-dimensional model of the object, one or more high-quality images of the object being utilized to create the three-dimensional model.
    Type: Application
    Filed: August 9, 2018
    Publication date: February 14, 2019
    Inventors: Richard WELNOWSKI, Jay GARCIA, Benny LEE
  • Publication number: 20190051039
    Abstract: The present technology relates to an image processing apparatus, an image processing method, a program, and a surgical system capable of appropriately providing a medical image with shadow/shade. The image processing apparatus determines whether shadow/shade is to be added to or suppressed in a medical image and controls generation of a shadow/shade-corrected image on the basis of the determination result. The present technology can be applied to, for example, a surgical system in which a surgery is performed while viewing a medical image photographed by an endoscope.
    Type: Application
    Filed: February 10, 2017
    Publication date: February 14, 2019
    Applicant: SONY CORPORATION
    Inventors: Daisuke TSURU, Tsuneo HAYASHI, Yasuaki TAKAHASHI, Koji KASHIMA, Kenji IKEDA
  • Publication number: 20190051040
    Abstract: Techniques are described for efficiently generating terrain openness that involve a digital elevation model comprising a texture representing a first geographic area and at least part of a plurality of mipmap levels representing geographic areas bordering the first geographic area. The texture and mipmap levels include pixels encoding elevation values for locations of geographic areas. For each pixel of the texture, derivatives are determined, as well as an openness factor based at least in part on the elevations at one or more pixels of the mipmap levels. The derivatives and openness factor are added to the texture. A hill shading factor is determined for each pixel based at least in part on the derivatives. An electronic map of the first geographic area is rendered using the openness and hill shading factors of each pixel of the texture. The rendering is sent for display.
    Type: Application
    Filed: August 8, 2018
    Publication date: February 14, 2019
    Inventors: Konstantin Friedrich Käfer, Ansis Brammanis
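    The derivative and hill-shading computation described above can be illustrated with a standard hill-shading formulation (central differences plus a sun azimuth/altitude model). The patent's exact derivative encoding is not specified, so this is only a sketch under common conventions:

    ```python
    import numpy as np

    def hillshade(elev, cell=1.0, azimuth=315.0, altitude=45.0):
        """Per-pixel hill-shading factor in [0, 1] from elevation
        derivatives (central differences; np.roll wraps at the grid
        edges, a simplification)."""
        dzdx = (np.roll(elev, -1, axis=1) - np.roll(elev, 1, axis=1)) / (2 * cell)
        dzdy = (np.roll(elev, -1, axis=0) - np.roll(elev, 1, axis=0)) / (2 * cell)
        slope = np.arctan(np.hypot(dzdx, dzdy))
        aspect = np.arctan2(dzdy, -dzdx)
        az = np.radians(360.0 - azimuth + 90.0)
        alt = np.radians(altitude)
        shaded = (np.sin(alt) * np.cos(slope)
                  + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
        return np.clip(shaded, 0.0, 1.0)

    flat = np.zeros((5, 5))
    print(hillshade(flat)[2, 2])  # flat terrain shades to sin(45°) ≈ 0.707
    ```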
  • Publication number: 20190051041
    Abstract: A method and apparatus for automated projection mapping previsualization is provided. A computer model of an object is received, at a controller of a device, from a publicly accessible remote mapping server, the computer model being a publicly available three-dimensional model that defines the object in geographic and elevation coordinates, the object located at given geographic coordinates. The controller generates a time dependent previsualization projection mapping model for the object using images to be projected onto the object, the computer model, and data for generating one or more of Sun behavior and Moon behavior at the given geographic coordinates. The controller controls a display device to render a previsualization of the time dependent previsualization projection mapping model.
    Type: Application
    Filed: August 8, 2017
    Publication date: February 14, 2019
    Inventors: Shawn David MILLS, Gerhard Dietrich KLASSEN, Ian Chadwyck FARAGHER
  • Publication number: 20190051042
    Abstract: A ceiling map building method includes estimating a scale of each ceiling image based on information related to the ceiling image and information related to another ceiling image including a same object, the scale being represented as a ratio of the amount of movement of the object between the two ceiling images to the amount of movement of the camera between the positions at which the two ceiling images were respectively captured, and building a ceiling map by converting the ceiling images in accordance with the respective scales so as to have sizes suitable for the ceiling map and combining the converted ceiling images.
    Type: Application
    Filed: August 6, 2018
    Publication date: February 14, 2019
    Inventors: Kaoru Toba, Soshi Iba, Yuji Hasegawa
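    The scale estimate described above reduces to a simple ratio: the object's apparent movement between two ceiling images divided by the camera's physical movement between the two capture positions. A sketch with hypothetical measurements (units and values are assumptions, not from the patent):

    ```python
    import math

    def estimate_scale(obj_pos_a, obj_pos_b, cam_pos_a, cam_pos_b):
        """Scale of a ceiling image as the ratio of the object's
        apparent movement (pixels) between two images to the camera's
        physical movement (metres) between the capture positions."""
        obj_move = math.dist(obj_pos_a, obj_pos_b)
        cam_move = math.dist(cam_pos_a, cam_pos_b)
        return obj_move / cam_move

    # Hypothetical measurements: the same ceiling lamp shifts 120 px
    # while the camera advances 0.6 m.
    scale = estimate_scale((100, 50), (220, 50), (0.0, 0.0), (0.6, 0.0))
    print(scale)  # ~200 pixels per metre
    ```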
  • Publication number: 20190051043
    Abstract: Described herein is a process and system for constructing three-dimensional (3D) representations of buildings. The 3D representations of buildings are semantic models that include roof nodes, roof edges, as well as roof faces with associated properties (e.g., pitch, azimuth). The system receives a 2D representation such as a roof outline including nodes connected by edges and associated data such as a height value, a pitch value, an independent structure, or a dependent structure. The system determines height values where a building structure changes. The system propagates a wavefront representing a cross-section of the building to the height values to generate 3D model edges. The 3D representation is generated based on the 3D model edges. The system can create 3D representations of buildings including roof structures of arbitrary complexity and can create representations of dependent roof structures such as dormers.
    Type: Application
    Filed: August 11, 2018
    Publication date: February 14, 2019
    Inventors: Christopher Hopper, Matthew Stevens
  • Publication number: 20190051044
    Abstract: Various techniques associated with automatic mesh generation are disclosed. One or more center curves of an outline of an object or figure are first determined. Next, for each of a plurality of points of each of the one or more center curves, a pair of rays is cast from a center curve in opposite directions, wherein the rays collide with opposite sides of the outline, and a collision pair is generated that comprises a line connecting collision points of the pair of rays on opposite sides of the outline. A mesh model of the object or figure is generated by mapping each of a set of collision pairs to polygons used to define the mesh model.
    Type: Application
    Filed: August 10, 2017
    Publication date: February 14, 2019
    Inventors: Clarence Chui, Christopher Murphy
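    The collision-pair construction above can be sketched by intersecting a ray and its opposite with the edges of a polygonal outline. The polygon representation and helper below are assumptions for illustration, not the patent's implementation:

    ```python
    def ray_hit(origin, direction, poly):
        """Nearest intersection of a ray with a closed polygon outline
        given as a list of vertices."""
        best = None
        ox, oy = origin
        dx, dy = direction
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            ex, ey = x2 - x1, y2 - y1
            denom = dx * ey - dy * ex
            if abs(denom) < 1e-12:
                continue  # ray parallel to this edge
            t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom  # along the ray
            u = ((x1 - ox) * dy - (y1 - oy) * dx) / denom  # along the edge
            if t > 1e-9 and 0.0 <= u <= 1.0 and (best is None or t < best):
                best = t
        return (ox + best * dx, oy + best * dy) if best is not None else None

    def collision_pair(center_pt, d, outline):
        """Collision pair: hits on opposite sides of the outline from a
        center-curve point, as in the abstract."""
        return ray_hit(center_pt, d, outline), ray_hit(center_pt, (-d[0], -d[1]), outline)

    # Rectangular outline; center point in the middle, horizontal rays.
    rect = [(0, 0), (2, 0), (2, 1), (0, 1)]
    left, right = collision_pair((1.0, 0.5), (-1.0, 0.0), rect)
    print(left, right)  # (0.0, 0.5) (2.0, 0.5)
    ```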
  • Publication number: 20190051045
    Abstract: The image processing apparatus of the present invention performs processing relating to a three-dimensional shape model generated using a plurality of images obtained by a plurality of cameras capturing an object, and includes: a specification unit configured to specify, based on position information on the object at a first time, a processing area relating to a three-dimensional shape model of the object at a second time later than the first time; and a processing unit configured to perform processing relating to the three-dimensional shape model of the object at the second time for the processing area specified by the specification unit.
    Type: Application
    Filed: August 1, 2018
    Publication date: February 14, 2019
    Inventor: Tomohiro Nishiyama