Patent Applications Published on August 24, 2017
  • Publication number: 20170243355
    Abstract: A method for providing obstacle avoidance using depth information of an image is provided. The method includes the following steps: shoot a scene to obtain a depth image of the scene, determine a flight direction and a flight distance according to the depth image, and then fly according to the flight direction and the flight distance.
    Type: Application
    Filed: March 28, 2016
    Publication date: August 24, 2017
    Inventors: Cheng-Yu Lin, Kuo-Feng Hsu
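A minimal sketch of the kind of decision rule 20170243355 describes, assuming the depth image is a 2D array of distances in metres and that the vehicle simply flies toward the region with the most clearance; the function and parameter names (choose_flight_command, safety_margin_m) and the left/center/right split are illustrative, not taken from the application.

```python
# Illustrative only: the abstract does not specify how direction and distance
# are derived from the depth image, so this assumes a "fly toward the most
# open region" heuristic.
import numpy as np

def choose_flight_command(depth_m: np.ndarray, safety_margin_m: float = 2.0):
    """Split the depth image into left/center/right thirds, pick the third
    with the largest median depth (most clearance), and fly a distance equal
    to that clearance minus a safety margin."""
    h, w = depth_m.shape
    thirds = {
        "left": depth_m[:, : w // 3],
        "center": depth_m[:, w // 3 : 2 * w // 3],
        "right": depth_m[:, 2 * w // 3 :],
    }
    clearances = {name: float(np.median(region)) for name, region in thirds.items()}
    direction = max(clearances, key=clearances.get)
    distance = max(0.0, clearances[direction] - safety_margin_m)
    return direction, distance

# Example: a synthetic 120x160 depth map with a near obstacle on the right.
depth = np.full((120, 160), 10.0)
depth[:, 110:] = 1.5
print(choose_flight_command(depth))  # -> ('left', 8.0)
```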
  • Publication number: 20170243356
    Abstract: Embodiments of the present invention are directed to optimizing image cropping. In accordance with some embodiments of the present invention, an image and an indication of an area of interest within the image are obtained. Thereafter, an amount to scale the image is determined based on the size of the container into which the image is to be placed for display; the amount to scale the image is greater for containers of a smaller size than for containers of a larger size, so as to focus on the area of interest within the image. The image can be scaled in accordance with the determined amount and thereafter cropped to fit within the boundaries of the container.
    Type: Application
    Filed: February 24, 2016
    Publication date: August 24, 2017
    Inventor: Johannes Andreas Eckert
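A rough sketch of the scale-then-crop behaviour described in 20170243356, assuming Pillow for image handling; the zoom heuristic (larger scale for smaller containers) and all names are invented here purely to illustrate the stated relationship, not the claimed algorithm.

```python
# Sketch: scale more aggressively for small containers, then crop a
# container-sized window centred on the area of interest.
from PIL import Image

def scale_and_crop(img, interest_center, container_size, reference_size=1024):
    cw, ch = container_size
    # Base scale fills the container; extra zoom grows as the container shrinks.
    base = max(cw / img.width, ch / img.height)
    zoom = max(1.0, reference_size / max(cw, ch))
    scale = base * zoom
    scaled = img.resize((round(img.width * scale), round(img.height * scale)))
    cx, cy = interest_center[0] * scale, interest_center[1] * scale
    left = min(max(0, cx - cw / 2), scaled.width - cw)
    top = min(max(0, cy - ch / 2), scaled.height - ch)
    return scaled.crop((int(left), int(top), int(left) + cw, int(top) + ch))
```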
  • Publication number: 20170243357
    Abstract: Computer-implemented methods, software, and computer systems for identifying a contact point between a pantograph of an electric vehicle and a power supply line represented in an image. The method includes, based on edges represented in the image, determining a first intersection point P1 and a second intersection point P2 that are sufficiently separated. The first intersection point P1 is formed by an intersection of a first edge with a top edge of the pantograph, and the second intersection point P2 is formed by an intersection of a second edge with the top edge of the pantograph. The method then determines a first slope associated with the first edge and a second slope associated with the second edge, and identifies the first intersection point P1 or the second intersection point P2 as the contact point between the pantograph and the power supply line by comparing the first slope and the second slope.
    Type: Application
    Filed: September 15, 2015
    Publication date: August 24, 2017
    Inventors: En Peng, William Hock Oon Lau, Brett Adams
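The final step of 20170243357, choosing between P1 and P2 by comparing slopes, might look like the sketch below. The abstract does not say which slope identifies the power line, so the assumption here (the contact wire is the more steeply inclined edge relative to the roughly horizontal pantograph top) and the function name are purely illustrative.

```python
# Minimal sketch of the slope-comparison step; not the claimed criterion.
def pick_contact_point(p1, slope1, p2, slope2):
    """p1, p2: (x, y) intersections of two edges with the pantograph top edge;
    slope1, slope2: slopes of those edges in image coordinates."""
    return p1 if abs(slope1) > abs(slope2) else p2

print(pick_contact_point((420, 310), -0.85, (433, 312), -0.08))  # -> (420, 310)
```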
  • Publication number: 20170243358
    Abstract: The present disclosure provides a detection system, which includes an image sensor, a lens device, and a processor. The image sensor is configured to take a first picture of a foreground object and a background object. The lens device is attached to the image sensor and configured to allow the foreground object to form a clear image on the first picture and the background object to form a blurred image on the first picture. The processor is configured to determine the image of the foreground object by analyzing the sharpness of the images in the first picture.
    Type: Application
    Filed: May 9, 2017
    Publication date: August 24, 2017
    Inventor: En Feng HSU
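One plausible reading of the sharpness analysis in 20170243358 is a per-patch focus measure such as Laplacian variance; the sketch below assumes a grayscale image as a NumPy array, and the patch size and threshold values are invented.

```python
# Sketch of sharpness-based foreground separation (not the claimed method):
# a shallow depth of field keeps the foreground in focus, so patches whose
# local Laplacian variance exceeds a threshold are treated as foreground.
import numpy as np
from scipy.ndimage import laplace

def foreground_mask(gray: np.ndarray, patch: int = 16, thresh: float = 50.0):
    lap = laplace(gray.astype(float))
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            block = lap[y : y + patch, x : x + patch]
            mask[y : y + patch, x : x + patch] = block.var() > thresh
    return mask
```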
  • Publication number: 20170243359
    Abstract: A system that analyzes data from multiple sensors, potentially of different types, that track motions of players, equipment, and projectiles such as balls. Data from different sensors is combined to generate integrated metrics for events and activities. Illustrative sensors may include inertial sensors, cameras, radars, and light gates. As an illustrative example, a video camera may track motion of a pitched baseball, and an inertial sensor may track motion of a bat; the system may use the combined data to analyze the effectiveness of the swing in hitting the pitch. The system may also use sensor data to automatically select or generate tags for an event; tags may represent for example activity types, players, performance levels, or scoring results. The system may analyze social media postings to confirm or augment event tags. Users may filter and analyze saved events based on the assigned tags.
    Type: Application
    Filed: May 9, 2017
    Publication date: August 24, 2017
    Applicant: Blast Motion Inc.
    Inventors: Bhaskar BOSE, Piyush GUPTA, Juergen HAAS, Brian ESTREM, Michael BENTLEY, Ryan KAPS
  • Publication number: 20170243360
    Abstract: Systems and methods according to one or more embodiments are provided for detecting an object in a field of view of an imaging device. An object may be detected by an imaging device when the object is present along a trajectory in a target scene. In one example, a system includes a memory component to store a plurality of images of the target scene and a processor. The processor is configured to define the trajectory between two locations within the target scene and extract a subset of pixel values from each of successive images corresponding to the trajectory. The extracted subsets of pixel values are processed to detect an object within the target scene. Additional systems and methods are also provided.
    Type: Application
    Filed: February 9, 2017
    Publication date: August 24, 2017
    Inventor: Stefan Schulte
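A simplified sketch of the trajectory-sampling idea in 20170243360: pixels along the straight line between two image locations are extracted from each stored frame (assumed here to be 2D grayscale arrays) and compared against the first frame. The change-detection rule, the number of samples, and the names are assumptions, not the claimed processing.

```python
# Sketch: extract the pixel subset lying on a defined trajectory from each
# successive frame, then flag frames where that subset changes markedly.
import numpy as np

def trajectory_pixels(frame: np.ndarray, start, end, n: int = 200):
    ys = np.linspace(start[1], end[1], n).round().astype(int)
    xs = np.linspace(start[0], end[0], n).round().astype(int)
    return frame[ys, xs]

def detect_along_trajectory(frames, start, end, thresh: float = 12.0):
    baseline = trajectory_pixels(frames[0], start, end).astype(float)
    hits = []
    for i, frame in enumerate(frames[1:], start=1):
        profile = trajectory_pixels(frame, start, end).astype(float)
        if np.abs(profile - baseline).mean() > thresh:
            hits.append(i)
    return hits  # indices of frames where an object crosses the trajectory
```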
  • Publication number: 20170243361
    Abstract: A method includes, following specification of an initial transformation as a test transformation that is to be optimized, determining a 2D gradient x-ray image and a 3D gradient dataset of the image dataset, carrying out, for each image element of the gradient comparison image, a check for selection as a contour point, and determining an environment best corresponding to a local environment of the contour point and extending around a comparison point in the gradient x-ray image for all contour points in the at least one gradient comparison image. Local 2D displacement information is determined by comparing the contour points with the associated comparison points, and motion parameters of a 3D motion model describing a movement of the target region between the acquisition of the image dataset and the x-ray image are determined from the displacement information and a registration transformation describing the registration.
    Type: Application
    Filed: February 18, 2017
    Publication date: August 24, 2017
    Inventors: Roman Schaffert, Jian Wang, Anja Borsdorf
  • Publication number: 20170243362
    Abstract: A method for calculating a coating texture indicator can comprise receiving target coating texture variables from an image. The method can also comprise accessing a relative texture characteristic database that stores a set of texture characteristic relationships for a plurality of coatings. The method can further comprise calculating a correlation between the target coating texture variables and coating texture variables associated with a compared coating. Based upon the calculated correlation, the method can comprise calculating a set of relative texture characteristics for the target coating that indicate relative differences in texture between the target coating and the compared coating. Each of the relative texture characteristics can comprise an assessment over all angles of the target coating.
    Type: Application
    Filed: February 19, 2016
    Publication date: August 24, 2017
    Inventor: Penny Neisen
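The correlation step of 20170243362 could be approximated as below, assuming the "texture variables" reduce to numeric vectors (e.g. coarseness, sparkle intensity, sparkle area) and that a Pearson correlation plus per-variable differences stands in for the relationships stored in the relative texture characteristic database.

```python
# Sketch only: the database-driven relationships of the claimed method are
# replaced by a plain correlation and per-variable differences.
import numpy as np

def relative_texture_characteristics(target_vars, compared_vars):
    t = np.asarray(target_vars, dtype=float)
    c = np.asarray(compared_vars, dtype=float)
    correlation = float(np.corrcoef(t, c)[0, 1])
    relative_difference = t - c  # per-variable relative difference
    return correlation, relative_difference

corr, diff = relative_texture_characteristics([0.62, 0.18, 0.40], [0.55, 0.22, 0.35])
print(round(corr, 3), diff)
```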
  • Publication number: 20170243363
    Abstract: A system is disclosed for auditing waste retrieved by a service vehicle. The system may have an optical sensor mountable onboard the service vehicle and configured to capture image data associated with the waste as the waste falls into the service vehicle during completion of a waste service activity. The system may also have at least one controller in communication with the optical sensor and being configured to generate at least one of an alert and a recommendation regarding a mix of the waste based on the image data.
    Type: Application
    Filed: July 11, 2016
    Publication date: August 24, 2017
    Applicant: Rubicon Global Holdings, LLC
    Inventor: Philip RODONI
  • Publication number: 20170243364
    Abstract: A method includes obtaining at least a first energy dependent spectral image volume and a second different energy dependent spectral image volume from reconstructed spectral image data. The method further includes generating a multi-dimensional spectral diagram that maps, for each voxel, a value of the first energy dependent spectral image volume to a corresponding value of the second energy dependent spectral image volume. The method further includes generating a set of spectral texture analysis weights from the multi-dimensional spectral diagram. The method further includes retrieving a set of texture analysis functions, which are generated as a function of voxel intensity and voxel gradient value from a co-occurrence matrix histogram. The method further includes generating a texture analysis map through a texture analysis of the reconstructed spectral image data with the set of texture analysis functions and the set of spectral texture analysis weights and visually presenting the texture analysis map.
    Type: Application
    Filed: October 30, 2015
    Publication date: August 24, 2017
    Inventor: Raz CARMI
  • Publication number: 20170243365
    Abstract: In a method and system of designing a stair lift rail assembly to be mounted on a three-dimensional structure, a light beam including an optical pattern is projected onto at least part of the structure from a reference location relative to the structure. Light from the structure is detected. Image data of the structure are generated based on the detected light. The image data are processed to generate a set of map data of the structure, the set of map data representing a three-dimensional map of the structure. The spatial path of the stair lift rail and the locations of support interfaces for the stair lift rail assembly in the three-dimensional map are determined. A design of the stair lift rail assembly is generated based on the spatial path of the stair lift rail and the locations of the support interfaces for the stair lift rail assembly.
    Type: Application
    Filed: July 23, 2015
    Publication date: August 24, 2017
    Inventor: Johannes Maria Antonius Nuijten
  • Publication number: 20170243366
    Abstract: A displacement detecting apparatus includes: a detector which detects displacement, which is spatial displacement over time, of each of a plurality of measurement points which have been set on an object, using a plurality of images of the object captured at a plurality of time points; an extractor which extracts characteristic displacement specific to the object, based on the displacement detected by the detector; and a calculator which calculates overall displacement indicating displacement of the entirety of the object, from the characteristic displacement extracted by the extractor.
    Type: Application
    Filed: November 16, 2016
    Publication date: August 24, 2017
    Inventor: Taro IMAGAWA
  • Publication number: 20170243367
    Abstract: An electronic device and an operating method thereof are provided. The electronic device includes a camera module, a memory module, and a processor operatively coupled with the camera module and the memory module. The processor acquires an image through the camera module, extracts distance information based on the acquired image, determines an image processing technique for an object based on the extracted distance information, applies the determined image processing technique to the acquired image to generate a new image, and displays the new image.
    Type: Application
    Filed: February 17, 2017
    Publication date: August 24, 2017
    Inventors: Wooyong LEE, Gyubong LEE, Hyoung Jin YOO, Inpyo LEE, Jonghoon WON
  • Publication number: 20170243368
    Abstract: A method for automatically generating a three-dimensional (3D) video of a scene by measuring and registering 3D coordinates at a first position and a second position of a 3D measuring device, the 3D video generated by combining two-dimensional images extracted at trajectory points along a trajectory path.
    Type: Application
    Filed: May 11, 2017
    Publication date: August 24, 2017
    Inventors: Reinhard Becker, Martin Ossig, Daniel Flohr, Daniel Pompe
  • Publication number: 20170243369
    Abstract: An object state identification method of identifying a position and a posture of an object by obtaining actual measured values, which are three-dimensional data, at points on a flat portion including an opening on a surface of the object by a sensor includes extracting a corresponding point value indicating the position of an edge of the opening based on a predetermined feature value, calculating a plane equation indicating a plane including the flat portion based on the corresponding point value to determine a virtual plane, extracting, as a stable point, an actual measured value that is in the virtual plane, and identifying the position, posture, etc., of the object based on the stable point.
    Type: Application
    Filed: February 23, 2017
    Publication date: August 24, 2017
    Inventors: Masaomi IIDA, Manabu HASHIMOTO, Shoichi TAKEI
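The virtual-plane and stable-point steps of 20170243369 resemble an ordinary least-squares plane fit followed by an inlier test; the sketch below shows that interpretation, with an invented tolerance and without the corresponding-point extraction that precedes it in the claimed method.

```python
# Rough sketch: fit z = ax + by + c to measured points, then keep as
# "stable points" the measurements lying within a tolerance of that plane.
import numpy as np

def fit_plane(points: np.ndarray):
    """points: (N, 3) array of x, y, z measurements on the flat portion."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return a, b, c

def stable_points(points: np.ndarray, tol: float = 0.5):
    a, b, c = fit_plane(points)
    residual = np.abs(points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c))
    return points[residual < tol]
```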
  • Publication number: 20170243370
    Abstract: A system for pothole detection comprises an input interface configured to receive sensor data, and a pothole detector configured to determine a pothole based at least in part on the sensor data using a model, wherein the model is used to classify sensor data, and to store pothole data associated with the pothole, wherein the pothole data comprises a pothole video.
    Type: Application
    Filed: March 6, 2017
    Publication date: August 24, 2017
    Inventors: Brett Hoye, Stephen Krotosky
  • Publication number: 20170243371
    Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. A set of structure façade data describing one or more structure façades associated with the first position estimate is then accessed. A first image of an environment is captured, and a portion of the image is matched to part of the structure façade data. A second position is then estimated based on a comparison of the structure façade data with the portion of the image matched to the structure façade data.
    Type: Application
    Filed: May 10, 2017
    Publication date: August 24, 2017
    Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M. Rodriguez, II, Rahul Sheth, Jia Li, Xutao Lv
  • Publication number: 20170243372
    Abstract: An object state identification method of identifying a position and a posture of an object by obtaining actual measured values at a plurality of points on a flat portion including an opening on a surface of the object by a sensor includes generating a model value group that is a set of model values indicating the positions of a plurality of points present on surfaces of a corner model, calculating a model feature value based on at least one model value included in the model value group, calculating an actual feature value by the same method used to calculate the model feature value, extracting a corresponding point value indicating a position of an edge of the opening based on the actual feature value that matches the model feature value, and identifying the position and posture of the object based on the corresponding point value.
    Type: Application
    Filed: February 23, 2017
    Publication date: August 24, 2017
    Inventors: Masaomi IIDA, Manabu HASHIMOTO, Shoichi TAKEI
  • Publication number: 20170243373
    Abstract: An image capture system includes a plurality of image sensors arranged in a pattern such that gaps exist between adjacent image sensors of the plurality of image sensors. Each of the image sensors may be configured to capture sensor image data. The image capture system may also have a main lens configured to direct incoming light along an optical path, a microlens array positioned within the optical path, and a plurality of tapered fiber optic bundles. Each tapered fiber optic bundle may have a leading end positioned within the optical path, and a trailing end positioned proximate one of the image sensors. The leading end may have a larger cross-sectional area than the trailing end. Sensor data from the image sensors may be combined to generate a single light-field image that is substantially unaffected by the gaps.
    Type: Application
    Filed: March 7, 2017
    Publication date: August 24, 2017
    Inventors: Brendan Bevensee, Tingfang Du, Jon Karafin, Joel Merritt, Duane Petrovich, Gareth Spor
  • Publication number: 20170243374
    Abstract: A calibration device for an optical device including a two-dimensional image conversion element having a plurality of pixels and an optical system that forms an image-formation relationship between the image conversion element and the three-dimensional world coordinate space. The calibration device includes: a calibration-data acquisition unit that acquires calibration data representing the correspondence between two-dimensional pixel coordinates in the image conversion element and three-dimensional world coordinates in the world coordinate space; and a parameter calculating unit that calculates parameters of a camera model by applying, to the calibration data acquired by the calibration-data acquisition unit, a camera model in which two coordinate values of the three-dimensional world coordinates are expressed as functions of the other one coordinate value of the world coordinates and the two coordinate values of the two-dimensional pixel coordinates.
    Type: Application
    Filed: May 9, 2017
    Publication date: August 24, 2017
    Applicant: OLYMPUS CORPORATION
    Inventor: Toshiaki MATSUZAWA
  • Publication number: 20170243375
    Abstract: Techniques are described for using a texture unit to perform operations of a shader processor. Some operations of a shader processor are repeatedly executed until a condition is satisfied, and in each execution iteration, the shader processor accesses the texture unit. Techniques are described for the texture unit to perform such operations until the condition is satisfied.
    Type: Application
    Filed: February 18, 2016
    Publication date: August 24, 2017
    Inventors: Usame Ceylan, Vineet Goel, Juraj Obert, Liang Li
  • Publication number: 20170243376
    Abstract: Appearance transfer techniques that maintain temporal coherence between frames are described in the following. In one example, a previous frame of a target video, which occurs in the sequence of the target video before a particular frame being synthesized, is warped. Color of the particular frame is transferred from an appearance of a corresponding frame of a video exemplar. In a further example, emitter portions are identified and addressed to preserve temporal coherence; this is performed to reduce an influence of the emitter portion of the target region in the selection of patches.
    Type: Application
    Filed: February 24, 2016
    Publication date: August 24, 2017
    Inventors: Ondrej Jamriska, Jakub Fiser, Paul J. Asente, Jingwan Lu, Elya Shechtman, Daniel Sýkora
  • Publication number: 20170243377
    Abstract: The disclosure relates to a method for reconstructing a preview of a magnetic resonance examination, a magnetic resonance apparatus, and a computer program product. The method includes recording a first set of magnetic resonance data, from which a second set of magnetic resonance data is selected. Based on the second set of magnetic resonance data, a preview is reconstructed.
    Type: Application
    Filed: February 21, 2017
    Publication date: August 24, 2017
    Inventors: Simon Bauer, Wilhelm Horger, Ralf Kartäusch
  • Publication number: 20170243378
    Abstract: The present invention relates to a device (100) for iterative reconstruction of images recorded by at least two imaging methods, the device comprising: an extraction module (10), which is configured to extract a first set of patches from a first image recorded by a first imaging method and to extract a second set of patches from a second image recorded by a second imaging method; a generation module (20), which is configured to generate a set of reference patches based on a merging of a first limited number of atoms for the first set of patches and of a second limited number of atoms for the second set of patches; and a regularization module (30), which is configured to perform a regularization of the first image or the second image by means of the generated set of reference patches.
    Type: Application
    Filed: July 29, 2015
    Publication date: August 24, 2017
    Inventors: Thomas KOEHLER, Frank BERGNER, Roland PROKSA
  • Publication number: 20170243379
    Abstract: A tomographic image generation device includes a projection image acquisition section configured to acquire plural projection images obtained by radiating radiation onto a breast in sequence from plural radiation angles and by performing imaging at each of the plural radiation angles; a mammary gland density acquisition section configured to acquire a mammary gland density of the breast; a derivation section configured to derive a slice thickness that decreases as the mammary gland density acquired by the mammary gland density acquisition section increases; and a generation section configured to generate a tomographic image at the slice thickness derived by the derivation section based on the plural projection images acquired by the projection image acquisition section.
    Type: Application
    Filed: January 26, 2017
    Publication date: August 24, 2017
    Inventors: Takahisa ARAI, Naokazu KAMIYA, Takeyasu KOBAYASHI
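20170243379 only states that the derived slice thickness decreases as the mammary gland density increases; a minimal monotone mapping illustrating that relationship (with made-up bounds in millimetres) is shown below.

```python
# Purely illustrative monotone mapping; the actual derivation rule is not
# given in the abstract.
def slice_thickness_mm(gland_density: float,
                       thin_mm: float = 1.0, thick_mm: float = 5.0) -> float:
    """gland_density in [0, 1]; returns a thickness between thick_mm (low
    density) and thin_mm (high density), linearly interpolated."""
    d = min(max(gland_density, 0.0), 1.0)
    return thick_mm - d * (thick_mm - thin_mm)

print(round(slice_thickness_mm(0.2), 2), round(slice_thickness_mm(0.8), 2))  # 4.2 1.8
```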
  • Publication number: 20170243380
    Abstract: A computing system (116) includes a reconstruction processor (114) configured to execute computer readable instructions, which cause the reconstruction processor to: receive, in electronic format, non-spectral projection data, reconstruct the non-spectral projection data to generate a non-spectral image, retrieve a non-spectral to spectral voxel value map for a basis material of interest from a set of non-spectral to spectral voxel value maps, generate a spectral iterative reconstruction start image based on the non-spectral image and the non-spectral to spectral voxel value map, and reconstruct a spectral image, in electronic format, for the material basis of interest from the non-spectral projection data with a spectral iterative reconstruction algorithm and the spectral iterative reconstruction start image.
    Type: Application
    Filed: October 12, 2015
    Publication date: August 24, 2017
    Inventors: Roland PROKSA, Thomas KOEHLER
  • Publication number: 20170243381
    Abstract: The invention relates to a method for creating a virtual dental image from a 3D volume (1) comprising volumetric image data. Firstly, a sub-volume (8, 12, 15, 18) of the 3D volume (1) is defined and then a virtual projection image (30, 41) is generated for said sub-volume (8, 12, 15, 18) from a specific X-ray imaging direction (11) by computation of the volumetric image data in said X-ray imaging direction (11).
    Type: Application
    Filed: May 8, 2017
    Publication date: August 24, 2017
    Inventor: Johannes Ulrici
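The virtual projection in 20170243381 can be approximated, for an axis-aligned imaging direction, by summing the sub-volume's voxels along that axis; the sketch below makes that simplification (the application allows an arbitrary X-ray imaging direction), and all names are illustrative.

```python
# Sketch: a parallel "virtual projection" of a cuboidal sub-volume along one axis.
import numpy as np

def virtual_projection(volume: np.ndarray, sub_slices, axis: int = 2):
    """volume: 3D array of voxel intensities; sub_slices: tuple of three slice
    objects selecting the sub-volume; axis: projection direction."""
    sub = volume[sub_slices]
    return sub.sum(axis=axis)

vol = np.random.rand(64, 64, 64)
proj = virtual_projection(vol, (slice(10, 50), slice(20, 40), slice(0, 64)))
print(proj.shape)  # (40, 20)
```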
  • Publication number: 20170243382
    Abstract: Methods, systems and computer program products for generating hypergraph representations of dialog are provided herein. A computer-implemented method includes analyzing at least one dialog to identify one or more topics and one or more contributions by one or more persons to the one or more topics, tracking evolution of the identified topics over time in the at least one dialog, generating a hypergraph representation of the at least one dialog utilizing the identified topics, the identified contributions and the tracked evolution of the identified topics, and providing an interactive visualization tool based on the hypergraph representation of the at least one dialog.
    Type: Application
    Filed: February 19, 2016
    Publication date: August 24, 2017
    Inventors: Prithu Banerjee, Manikandan Padmanaban, Biplav Srivastava, Srikanth G. Tamilselvam
  • Publication number: 20170243383
    Abstract: Systems, devices, and methods for producing a three-dimensional visualization of a drill plan and drilling motor with a toolface are provided for drill steering purposes. A drilling motor with a toolface in communication with a sensor system is provided. A controller in communication with the sensor system is operable to generate a depiction of the drill plan and a depiction of the drilling motor, and to combine these depictions in a three-dimensional visualization of the downhole environment. This visualization is used by a user to steer the drill.
    Type: Application
    Filed: February 24, 2016
    Publication date: August 24, 2017
    Inventors: Colin Gillan, Scott Gilbert Boone
  • Publication number: 20170243384
    Abstract: An image data processing system and associated methods for processing images and methods for image blending are provided. The method for processing panorama images in an image data processing system includes the steps of: receiving a plurality of source images from at least one image input interface, wherein the source images at least include overlapping portions; receiving browsing viewpoint and viewing angle information; determining cropped images of the source images based on the browsing viewpoint and viewing angle information; and generating a panorama image corresponding to the browsing viewpoint and viewing angle information for viewing or previewing based on the cropped images of the source images.
    Type: Application
    Filed: January 30, 2017
    Publication date: August 24, 2017
    Inventors: Yu-Hao HUANG, Tsui-Shan CHANG, Yi-Ting LIN, Tsu-Ming LIU, Kai-Min YANG
  • Publication number: 20170243385
    Abstract: The present disclosure relates to an apparatus and method for displaying information, a program, and a communication system, which enable the provision of an apparatus that makes use of a highly flexible display device. An information display apparatus includes a display unit including a time information presenting section for presenting at least time information and a band section to be worn on an arm, and a display control unit for changing a display of the display unit. The present disclosure can be applied to, for example, such an information display apparatus.
    Type: Application
    Filed: August 21, 2015
    Publication date: August 24, 2017
    Applicant: Sony Corporation
    Inventors: Masakazu Mitsugi, Yuki Sugiue, Hiroshi Saeki, Machiko Takematsu, Masaaki Yamamoto, Yoichi Ito, Kenji Itoh
  • Publication number: 20170243386
    Abstract: An image processing apparatus includes: an image acquisition unit configured to acquire a plurality of images of different fields of view, each of the plurality of images having a common area to share a common object with at least one other image of the plurality of images; a positional relation acquisition unit configured to acquire a positional relation between the plurality of images; an image composition unit configured to stitch the plurality of images based on the positional relation to generate a composite image; a shading component acquisition unit configured to acquire a shading component in each of the plurality of images; a correction gain calculation unit configured to calculate a correction gain based on the shading component and the positional relation; and an image correction unit configured to perform the shading correction on the composite image using the correction gain.
    Type: Application
    Filed: May 5, 2017
    Publication date: August 24, 2017
    Applicant: OLYMPUS CORPORATION
    Inventor: Shunichi KOGA
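A minimal sketch of the correction-gain idea in 20170243386, assuming a multiplicative shading component per tile so that a gain of 1/shading, applied at each tile's position in the mosaic, flattens the composite; the claimed method derives the gain from both the shading component and the positional relation, which is reduced here to simple tile placement.

```python
# Sketch: per-tile shading correction applied at each tile's mosaic position.
import numpy as np

def shading_gain(shading: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    return 1.0 / np.clip(shading, eps, None)

def correct_tile_in_composite(composite, tile, shading, top_left):
    y, x = top_left
    h, w = tile.shape
    composite[y : y + h, x : x + w] = tile * shading_gain(shading)
    return composite
```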
  • Publication number: 20170243387
    Abstract: There is disclosed a system and method for training a set of expression and neutral convolutional neural networks using a single performance mapped to a set of known phonemes and visemes in the form of predetermined sentences and facial expressions. Then, subsequent training of the convolutional neural networks can occur using temporal data derived from audio data within the original performance mapped to a set of professionally created three-dimensional animations. Thereafter, with sufficient training, the expression and neutral convolutional neural networks can generate facial animations from facial image data in real time without individual-specific training.
    Type: Application
    Filed: February 21, 2017
    Publication date: August 24, 2017
    Inventors: Hao Li, Joseph J. Lim, Kyle Olszewski
  • Publication number: 20170243388
    Abstract: Appearance transfer techniques are described in the following. In one example, a search and vote process is configured to select patches from the image exemplar and then search for a location in the target image that is a best fit for the patches. As part of this selection, a patch usage counter may also be employed in an example to ensure that selection of each of the patches from the image exemplar does not vary by more than one from one patch to another. In another example, transfer of an appearance of boundary and interior regions from the image exemplar to a target image is preserved.
    Type: Application
    Filed: February 24, 2016
    Publication date: August 24, 2017
    Inventors: Ondrej Jamriska, Jakub Fiser, Paul J. Asente, Jingwan Lu, Elya Shechtman, Daniel Sýkora
  • Publication number: 20170243389
    Abstract: A device and a method for signaling a successful gesture input by a user are provided. A user gesture is sensed and classified by a processing device. In response to the classified gesture, an animation may be generated that visually emphasizes a position, which can change linearly at least in certain parts, along an edge of a screen assigned to the gesture input.
    Type: Application
    Filed: January 27, 2015
    Publication date: August 24, 2017
    Inventors: Holger WILD, Mark Peter CZELNIK
  • Publication number: 20170243390
    Abstract: In a computer graphics processing unit (GPU) having a pixel shader and a texture unit, the pixel shader is configured to receive or generate one or more sets of texture coordinates per pixel sample location. The pixel shader and texture unit between them are configured to calculate texture space gradient values for one or more primitives, and to generate and apply per-pixel gradient scale factors configured to modify the gradient values to smoothly transition them between regions of a display device having different pixel resolutions.
    Type: Application
    Filed: May 5, 2017
    Publication date: August 24, 2017
    Inventor: Mark Evan Cerny
  • Publication number: 20170243391
    Abstract: An image processing device and an image processing method are provided that perform image interpretation of a large amount of volume data, in which an interest region is set in advance, as fast as possible and without oversights during an interactive operation. The image processing device, which generates a two-dimensional image from a captured three-dimensional image and displays the generated two-dimensional image, receives an input signal relating to primary display control information including a speed of a display control input of the two-dimensional image, calculates the primary display control information from the input signal, calculates secondary display control information including a display speed of the two-dimensional image based on information of the interest region and the primary display control information, and sequentially generates and displays the two-dimensional images based on the secondary display control information.
    Type: Application
    Filed: June 5, 2015
    Publication date: August 24, 2017
    Inventors: Hanae YOSHIDA, Maki TANAKA
  • Publication number: 20170243392
    Abstract: An approach is provided for providing perspective-based content placement. A content placement platform processes and/or facilitates a processing of one or more models of one or more objects associated with a geographical area to cause, at least in part, a decomposition of the one or more models into one or more simplified surfaces. The content placement platform further causes, at least in part, a selection of one or more portions of the one or more simplified surfaces as one or more content placement layers based, at least in part, on one or more viewpoints, with the one or more content placement layers supporting a perspective-based rendering of one or more content items associated with the one or more objects.
    Type: Application
    Filed: May 8, 2017
    Publication date: August 24, 2017
    Inventors: Urban VELKAVRH, Mridul AANJANEYA, Timo Pekka PYLVÄNÄINEN, Radek GRZESZCZUK, Ramakrishna VEDANTHAM
  • Publication number: 20170243393
    Abstract: There is provided a scene rendering system and method for use by such a system to perform depth buffering for subsequent scene rendering. The system includes a memory storing a depth determination software including a reduced depth set identification software module, and a hardware processor configured to execute the depth determination software. The hardware processor is configured to execute the depth determination software to determine, before rendering a scene, a depth buffer based on at least one fixed depth identified for each element of a rendering framework for the scene. The hardware processor is further configured to render the scene using the depth buffer.
    Type: Application
    Filed: February 24, 2016
    Publication date: August 24, 2017
    Inventors: Gregory Nichols, Brent Burley, Ralf Habel, David Adler
  • Publication number: 20170243394
    Abstract: The present invention causes a computer to function as a virtual space generating unit, a game screen displaying unit, a billboard setting unit, a data acquiring unit and a transmittance setting unit. The billboard setting unit sets a billboard which has a plain object and which rotates around a predetermined center point in the plain object so that the plain object faces the virtual camera in the virtual space. The data acquiring unit acquires drawing data of the object and two-dimensional thickness map data, the two-dimensional thickness map data showing a relationship between a two-dimensional coordinate on the object shown on the billboard and thickness information at each position coordinate of the object. The transmittance setting unit sets transmittance of light from a light source in the virtual space based on the thickness information.
    Type: Application
    Filed: February 23, 2017
    Publication date: August 24, 2017
    Applicant: CAPCOM CO., LTD.
    Inventors: Haruna AKUZAWA, Teppei YONEYAMA, Koshi WATAMURA
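20170243394 sets transmittance from the per-coordinate thickness information; one common way to do that (assumed here, not stated in the abstract) is Beer-Lambert attenuation, exp(-k * thickness), evaluated per texel of the thickness map.

```python
# Illustrative transmittance from a 2D thickness map; the absorption
# coefficient and the Beer-Lambert form are assumptions.
import numpy as np

def transmittance_map(thickness: np.ndarray, absorption_k: float = 1.5) -> np.ndarray:
    """thickness: 2D map aligned with the billboard's plain object;
    returns light transmittance in [0, 1] per texel."""
    return np.exp(-absorption_k * np.maximum(thickness, 0.0))

t = transmittance_map(np.array([[0.0, 0.2], [0.5, 1.0]]))
print(t.round(3))  # [[1.    0.741] [0.472 0.223]]
```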
  • Publication number: 20170243395
    Abstract: In some embodiments, a given frame or picture may have different shading rates. In one embodiment, in some areas of the frame or picture the shading rate may be less than once per pixel, while in other places it may be once per pixel. Examples where the shading rate may be reduced include areas of motion and camera defocus, areas of peripheral blur, and, in general, any case where visibility is reduced anyway. The shading rate may be changed in a region, such as a shading quad, by changing the size of the region.
    Type: Application
    Filed: January 4, 2017
    Publication date: August 24, 2017
    Inventors: Karthik Vaidyanathan, Marco Salvi, Robert M. Toth
  • Publication number: 20170243396
    Abstract: A device and method for applying a virtual lighting effect in an electronic device are provided. The electronic device includes a display, a memory configured to store a first normal map and a second normal map corresponding to a face, and a processor. The processor is configured to acquire a first image and detect a face region in the first image. Additionally, the processor is configured to determine a normal map corresponding to at least a partial region of the face region based on the first normal map and the second normal map; and display a second image, based on the determined normal map, on the display. The second image includes the first image after a virtual lighting effect is applied thereto.
    Type: Application
    Filed: February 14, 2017
    Publication date: August 24, 2017
    Inventors: Chang Hoon Kim, Amit Kumar, Pradeep Choudhary, Sumedh Mannar
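A sketch of a virtual lighting pass driven by a normal map, in the spirit of 20170243396: simple Lambertian shading (N dot L) modulates the face pixels. The claimed method determines the map from two stored normal maps per face region; here a single map and a fixed light direction stand in for that determined map, and the strength parameter is invented.

```python
# Sketch: Lambert-shaded virtual lighting blended onto the input image.
import numpy as np

def apply_virtual_light(image, normal_map, light_dir=(0.3, 0.3, 1.0), strength=0.6):
    """image: (H, W, 3) float in [0, 1]; normal_map: (H, W, 3) unit normals."""
    L = np.asarray(light_dir, dtype=float)
    L /= np.linalg.norm(L)
    lambert = np.clip((normal_map * L).sum(axis=-1, keepdims=True), 0.0, 1.0)
    lit = image * (1.0 - strength + strength * lambert)
    return np.clip(lit, 0.0, 1.0)
```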
  • Publication number: 20170243397
    Abstract: Described herein are methods and systems for closed-form 3D model generation of non-rigid complex objects from scans with large holes. A computing device receives (i) a partial scan of a non-rigid complex object captured by a sensor coupled to the computing device; (ii) a partial 3D model corresponding to the object, and (iii) a whole 3D model corresponding to the object, wherein the partial 3D scan and the partial 3D model each includes one or more large holes. The device performs a rough match on the partial 3D model and changes the whole 3D model using the rough match to generate a deformed 3D model. The device refines the deformed 3D model using a deformation graph, reshapes the refined deformed 3D model to have greater detail, and adjusts the whole 3D model according to the reshaped 3D model to generate a closed-form 3D model that closes holes in the scan.
    Type: Application
    Filed: February 23, 2017
    Publication date: August 24, 2017
    Inventors: Xin Hou, Yasmin Jahir, Jun Yin
  • Publication number: 20170243398
    Abstract: Disclosed is a method for reducing the volume of 3D modeling data, including: a first step of selecting a block object in the 3D modeling data, a second step of extracting from the 3D modeling data a target object to be compared with the block object, a third step of comparing the block object with the target object, and a fourth step of designating the target object as a reference object if the block object and the target object turn out to be identical with each other as a result of the comparing step.
    Type: Application
    Filed: October 14, 2015
    Publication date: August 24, 2017
    Inventors: Seong Do SON, Seung Yub KIM, Ju Hyeon GIM
  • Publication number: 20170243399
    Abstract: Methods for identifying parts of a target object (e.g., an airplane) using geotagged photographs captured on site by a hand-held imaging device. The geotagged photographs contain GPS location data and camera setting information. The embedded image metadata from two or more photographs is used to estimate the location (i.e., position and orientation) of the imaging device relative to the target object, which location is defined in the coordinate system of the target object. Once the coordinates of the area of interest on the target object are known, the part number and other information associated with the part can be determined when the imaging device viewpoint information is provided to a three-dimensional visualization environment that has access to three-dimensional models of the target object.
    Type: Application
    Filed: February 19, 2016
    Publication date: August 24, 2017
    Applicant: The Boeing Company
    Inventors: James J. Troy, Vladimir Karakusevic, Christopher D. Esposito
  • Publication number: 20170243400
    Abstract: Augmented reality systems and methods are disclosed which provide for representing imperceptible aspects of telecommunications networks as visual, auditory, tactile, or audiovisual stimuli. In some embodiments, the representation is a type of augmented reality from the perspective of a user on the ground, such as a technician deployed in the field.
    Type: Application
    Filed: February 17, 2017
    Publication date: August 24, 2017
    Inventor: Roger Ray Skidmore
  • Publication number: 20170243401
    Abstract: An information processing apparatus includes a viewpoint information acquisition unit that acquires viewpoint information concerning a viewer, a data input unit that inputs virtual space data, a specifying unit that specifies a virtual reference surface corresponding to a real reference surface in the virtual space data, a determination unit that determines positional relationship between the virtual reference surface and the real reference surface, a correction unit that corrects the virtual space data based on the positional relationship, and a generation unit that generates a display image based on the corrected virtual space data and the viewpoint information.
    Type: Application
    Filed: February 21, 2017
    Publication date: August 24, 2017
    Inventors: Yasumi Tanaka, Sho Matsuda
  • Publication number: 20170243402
    Abstract: Concepts and technologies are disclosed herein for virtual doorbell augmentations for communications between augmented reality and virtual reality environments. According to one aspect, an augmented reality server computer can provide an augmented reality environment to a user device. The augmented reality environment can include a view of a physical, real-world environment and a virtual doorbell augmentation applied to a residence depicted in the view of the physical, real-world environment. The augmented reality server computer can receive a selection of the virtual doorbell augmentation. In response to receiving the selection of the virtual doorbell augmentation, the augmented reality server computer can request access to a virtual reality environment provided by a virtual reality server computer associated with the residence.
    Type: Application
    Filed: May 8, 2017
    Publication date: August 24, 2017
    Applicant: AT&T Intellectual Property I, L.P.
    Inventor: Srilal M. Weerasinghe
  • Publication number: 20170243403
    Abstract: A method and system are provided for enabling a shared augmented reality experience. The system may include one or more onsite devices for generating AR representations of a real-world location, and one or more offsite devices for generating virtual AR representations of the real-world location. The AR representations may include data and/or content incorporated into live views of a real-world location. The virtual AR representations of the AR scene may incorporate images and data from a real-world location and include additional AR content. The onsite devices may synchronize content used to create the AR experience with the offsite devices in real time, such that the onsite AR representations and the offsite virtual AR representations are consistent with each other.
    Type: Application
    Filed: May 10, 2017
    Publication date: August 24, 2017
    Applicant: Bent Image Lab, LLC
    Inventors: Oliver Clayton DANIELS, Raymond Victor DI CARLO, David Morris DANIELS
  • Publication number: 20170243404
    Abstract: Disclosed systems and methods generate filtered, three-dimensional models with regard to a site. In particular, one or more embodiments include systems and methods that generate a filtered, three-dimensional model by removing one or more non-ground objects from a three-dimensional model of the site. Specifically, one or more embodiments of the disclosed systems and methods remove objects from a three-dimensional representation of a site by applying an initial filter to the three-dimensional representation, identifying regions corresponding to types of terrain within the three-dimensional representation, and applying another filter with parameters particular to the identified regions.
    Type: Application
    Filed: February 18, 2016
    Publication date: August 24, 2017
    Inventors: Leonardo Felipe Romo Morales, David Chen
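An illustrative two-pass filter in the spirit of 20170243404: a coarse pass drops points far above the lowest point, then a second pass reapplies the test with a per-region tolerance keyed by a hypothetical terrain label. The use of a global minimum height, the labels, and the tolerances are simplifications, not the disclosed filters.

```python
# Sketch of initial filter + region-specific refilter for ground extraction.
import numpy as np

def ground_filter(points, terrain_labels, coarse_tol=3.0, region_tol=None):
    """points: (N, 3) x, y, z; terrain_labels: length-N sequence of region
    names such as 'flat' or 'steep' (labels and tolerances are invented)."""
    region_tol = region_tol or {"flat": 0.5, "steep": 2.0}
    points = np.asarray(points, dtype=float)
    labels = np.asarray(terrain_labels)
    z = points[:, 2]
    ground_z = z.min()
    # Pass 1: coarse filter removes anything far above the lowest point.
    keep = z - ground_z <= coarse_tol
    # Pass 2: per-region tolerance tightens the filter where the terrain allows.
    tol = np.array([region_tol[t] for t in labels])
    keep &= z - ground_z <= tol
    return points[keep]
```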