Patent Applications Published on September 20, 2018
  • Publication number: 20180268568
    Abstract: The present disclosure relates to mobile electronic devices including at least one transparent display screen for comparing and accurately determining the color of a predetermined object for color and texture application. In addition, the present disclosure provides color analysis and control using an electronic mobile device transparent display screen, for a wide variety of applications, including, but not limited to, color, shade, and coating defect identification applications, including augmented reality applications. Color data for a perceived color is stored in a memory, and images are displayed as perceived through the transparent display screen. Image difference values are determined between a first set of optical processing data and a second set of optical processing data. The transparent display screen indicates image difference values, including differences in color, texture, transparency, and lighting, especially for augmented reality applications.
    Type: Application
    Filed: June 28, 2017
    Publication date: September 20, 2018
    Inventor: Suk K. Kim-Whitty
  • Publication number: 20180268569
    Abstract: In a method and medical imaging apparatus for detecting abnormalities in medical image data of a region of the patient that is outside of a region to be examined of the patient, medical image data that depict a region of the patient that is outside of the region to be examined of the patient are provided in a computer, wherein the region to be examined of the patient has already been selected on the basis of preliminary examination data. The computer automatically evaluates the medical image data for the region that is outside of the region to be examined of the patient, and generates abnormality information of the medical image data for the region that is outside of the region to be examined of the patient. The abnormality information is visually presented.
    Type: Application
    Filed: March 14, 2018
    Publication date: September 20, 2018
    Applicant: Siemens Healthcare GmbH
    Inventor: Maria Kroell
  • Publication number: 20180268570
    Abstract: A decoding device, an encoding device, and a method for point cloud encoding are disclosed. The method includes generating, from a three-dimensional point cloud, multiple two-dimensional frames, the two-dimensional frames including at least a first frame representing a geometry of points in the three-dimensional point cloud and a second frame representing texture of points in the three-dimensional point cloud. The method also includes generating an occupancy map indicating locations of pixels in the two-dimensional frames that represent points in the three-dimensional point cloud. The method further includes encoding the two-dimensional frames and the occupancy map to generate a compressed bitstream. The method also includes transmitting the compressed bitstream.
    Type: Application
    Filed: March 13, 2018
    Publication date: September 20, 2018
    Inventors: Madhukar Budagavi, Esmaeil Faramarzi, Tuan Ho
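The geometry-frame / texture-frame / occupancy-map arrangement this abstract describes can be sketched as a toy packing step. This is an illustrative sketch only (orthographic projection onto a fixed grid, function and variable names invented here), not the codec in the application:

```python
def pack_point_cloud(points, colors, size=8):
    """Project 3D points onto an XY grid: depth goes into a geometry frame,
    color into a texture frame, and filled pixels into an occupancy map."""
    geometry = [[0.0] * size for _ in range(size)]
    texture = [[(0, 0, 0)] * size for _ in range(size)]
    occupancy = [[0] * size for _ in range(size)]
    for (x, y, z), rgb in zip(points, colors):
        geometry[y][x] = z    # geometry frame: depth of the point
        texture[y][x] = rgb   # texture frame: its color
        occupancy[y][x] = 1   # occupancy map: this pixel holds a real point
    return geometry, texture, occupancy

geo, tex, occ = pack_point_cloud([(1, 2, 0.5), (3, 4, 0.9)],
                                 [(255, 0, 0), (0, 255, 0)])
```

The occupancy map is what lets a decoder ignore the padding pixels that a real packer would leave between patches.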
  • Publication number: 20180268571
    Abstract: Provided is an image compression device including an object extracting unit configured to perform convolution neural network (CNN) training and identify an object from an image received externally, a parameter adjusting unit configured to adjust a quantization parameter of a region in which the identified object is included in the image on the basis of the identified object, and an image compression unit configured to compress the image on the basis of the adjusted quantization parameter.
    Type: Application
    Filed: September 7, 2017
    Publication date: September 20, 2018
    Inventors: Seong Mo PARK, Sung Eun KIM, Ju-Yeob KIM, Jin Kyu KIM, Kwang Il OH, Joo Hyun LEE
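The parameter-adjusting step in the abstract above (lowering the quantization parameter where a detected object sits) can be sketched as follows. This is a hedged illustration with invented names and block geometry, not the device's actual logic:

```python
def adjust_qp_map(width, height, boxes, base_qp=40, roi_qp=25, block=8):
    """Per-block quantization parameters: blocks overlapping a detected-object
    bounding box get a lower QP, i.e. finer quantization and higher quality."""
    qp = [[base_qp] * (width // block) for _ in range(height // block)]
    for (x0, y0, x1, y1) in boxes:
        for by in range(y0 // block, (y1 + block - 1) // block):
            for bx in range(x0 // block, (x1 + block - 1) // block):
                qp[by][bx] = roi_qp
    return qp

# One detected object at pixels (8, 8)-(24, 24) in a 64x64 image.
qp_map = adjust_qp_map(64, 64, boxes=[(8, 8, 24, 24)])
```

An encoder would then quantize each block with its entry from this map, spending bits preferentially on the CNN-identified region.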
  • Publication number: 20180268572
    Abstract: A makeup part generating apparatus includes a drawing receiver that receives a drawing operation of a makeup part image that is to be overlaid on a facial image, an information acquiring unit that acquires, at each time point in a process of the drawing operation, a progress image that is an image drawn by the time point, and a drawing technique used at the time point, and an information processor that records and outputs makeup part information including, in a time-series manner, image information indicating the progress image, and technique information indicating at least one of the drawing technique and a makeup technique that is an application technique of a cosmetic corresponding to the drawing technique.
    Type: Application
    Filed: May 24, 2018
    Publication date: September 20, 2018
    Inventors: CHIE NISHI, SACHIKO TAKESHITA, RIEKO ASAI, HIROKI TAOKA, MASAYO SHINODA
  • Publication number: 20180268573
    Abstract: An image acquisition unit acquires a plurality of projected images from a CT apparatus. A reconstruction unit reconstructs the plurality of projected images to produce a plurality of tomographic images. A scattered ray removal unit removes scattered ray components included in radiation transmitted through the subject from the plurality of projected images based on the tomographic images. The repetition unit performs repetition processing of repeating production of a new tomographic image obtained by reconstructing the projected images from which the scattered ray components are removed, and the removal of the scattered ray components from the plurality of projected images based on the new tomographic image.
    Type: Application
    Filed: January 31, 2018
    Publication date: September 20, 2018
    Applicant: FUJIFILM Corporation
    Inventor: Wataru FUKUDA
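The repetition loop in the abstract (reconstruct, estimate scatter, subtract, repeat) can be sketched with toy one-dimensional stand-ins for the reconstruction and scatter-estimation operators; the real operators are of course volumetric:

```python
def remove_scatter(projections, reconstruct, estimate_scatter, iterations=3):
    """Iteratively reconstruct a tomographic estimate from corrected
    projections, re-estimate the scattered-ray component from it, and
    subtract that component from the original projections again."""
    corrected = list(projections)
    for _ in range(iterations):
        volume = reconstruct(corrected)            # new tomographic image
        scatter = estimate_scatter(volume)         # scatter per projection
        corrected = [p - s for p, s in zip(projections, scatter)]
    return corrected

# Toy stand-ins: "reconstruction" is the mean projection value, and the
# scatter in each projection is 10% of that reconstructed value.
reconstruct = lambda ps: sum(ps) / len(ps)
estimate_scatter = lambda vol: [0.1 * vol, 0.1 * vol]
corrected = remove_scatter([10.0, 12.0], reconstruct, estimate_scatter)
```

With these stand-ins the loop converges toward the fixed point [9.0, 11.0] after a few iterations, which is the point of repeating the correction rather than applying it once.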
  • Publication number: 20180268574
    Abstract: A patient movement correction method for cone-beam computed tomography wherein a set of X-ray projection images of the patient is acquired using the X-ray imaging means. An initial projection geometry estimate describing the spatial positions and orientation of the X-ray source and the X-ray detector during the acquisition of an X-ray projection images is defined. An intermediate CBCT reconstruction using the X-ray projection images and the initial projection geometry estimate is computed. Projection-image-specific corrective geometric transformations are determined for the initial projection geometry estimate and the intermediate CBCT reconstruction.
    Type: Application
    Filed: March 19, 2018
    Publication date: September 20, 2018
    Inventors: Mikko LILJA, Kalle KARHU, Jaakko LAHELMA, Kustaa NYHOLM, Ari HIETANEN, Timo MULLER, Sakari KETTUNEN
  • Publication number: 20180268575
    Abstract: Digital Breast Tomosynthesis allows for the acquisition of volumetric mammography images. The present invention allows for novel ways of viewing such images to detect microcalcifications and obstructions. In an embodiment a method for displaying volumetric images comprises computing a projection image using a viewing direction, displaying the projection image and then varying the projection image by varying the viewing direction. The viewing direction can be varied based on a periodic continuous mathematical function. A graphics processing unit can be used to compute the projection image and bricking can be used to accelerate the computation of the projection images.
    Type: Application
    Filed: May 24, 2018
    Publication date: September 20, 2018
    Applicant: PME IP PTY LTD
    Inventors: MALTE WESTERHOFF, DETLEV STALLING
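The "periodic continuous mathematical function" driving the viewing direction could be as simple as a sinusoidal rock around the default view; a minimal sketch, with amplitude and period values invented here:

```python
import math

def viewing_angle(t, amplitude_deg=15.0, period_s=4.0):
    """Vary the projection viewing direction with a periodic, continuous
    function of time: a sinusoidal rocking around the default view."""
    return amplitude_deg * math.sin(2.0 * math.pi * t / period_s)

angles = [viewing_angle(t) for t in (0.0, 1.0, 2.0, 3.0)]
```

Recomputing the projection image each frame at the current angle yields the continuous back-and-forth sweep that helps separate microcalcifications from overlying tissue.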
  • Publication number: 20180268576
    Abstract: A tool is provided via a user interface for a digital media application that supports digital illustrations. The tool combines operations to create different types of segments for a drawing shape and convert between types of segments. The tool is configured to analyze the drawing to recognize segments that are straight, arc, or curved portions of the drawing path. For segments recognized as curved, the segments are represented as Bezier curve segments. For segments recognized as straight, the segments are represented as line segments. Additionally, line segments are associated with handle elements operable to convert the line segments to regular arc segments. Responsive to manipulation of a handle element for a particular line segment, the tool computes a corresponding regular arc and converts the line segment into a regular arc segment.
    Type: Application
    Filed: May 17, 2018
    Publication date: September 20, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Anirudh Sasikumar, Tomas Krcha, Narciso Batacan Jaramillo, Talin Chris Wadsworth
  • Publication number: 20180268577
    Abstract: A method for presenting a visual implementation of data by a computing device comprising at least one processor and a system memory element is provided. The method executes a software application, by the at least one processor of the computing device; detects an operation performed by the software application during execution; monitors performance of the software application, by: identifying successful execution of the operation; and identifying timing data indicating a length of time for completion of the operation during execution of the software application; and presents visual feedback of the performance during execution of the software application, via a display element of the computing device, wherein the visual feedback comprises a first set of graphical elements indicating the successful execution and the timing data, and wherein the first set of graphical elements is presented onscreen with a concurrent presentation of the software application.
    Type: Application
    Filed: March 15, 2017
    Publication date: September 20, 2018
    Applicant: salesforce.com, inc.
    Inventor: Joao Neves
  • Publication number: 20180268578
    Abstract: Disclosed herein are system, method, and computer program product embodiments for generating and adjusting multi-dimensional data visualizations. An embodiment operates by a computer implemented method that includes evaluating, by at least one processor, data to be displayed on a multi-dimensional data visualization and information associated with the multi-dimensional data visualization. The method further includes determining one or more parameters for the multi-dimensional data visualization based on the evaluated data and the evaluated information. The method further includes generating the multi-dimensional data visualization based on the determined one or more parameters, where the multi-dimensional data visualization comprises at least four dimensions. The method also includes graphically displaying the multi-dimensional data visualization.
    Type: Application
    Filed: March 15, 2017
    Publication date: September 20, 2018
    Inventors: Malin WITTKOPF, Anca Maria Florescu, Christina Hall, Tatjana Borovikov, Guido Wagner, Klaus Herter, Felix Harling, Christian Knirsch, Christian Grail, Bogdan Alexander, Joachim Fiess, Hergen Siefken, Hee Tatt Ooi, Hans-Juergen Richstein, Marita Kruempelmann, Ingo Rues
  • Publication number: 20180268579
    Abstract: In one embodiment, a method includes receiving multiple recommendations for a first user from multiple second users and the multiple recommendations are associated with multiple objects. The multiple second users select the first user as the addressee of the multiple recommendations. The method also includes determining that the first user is near a geo-location and recommending to the first user multiple objects that are associated with the geo-location.
    Type: Application
    Filed: May 18, 2018
    Publication date: September 20, 2018
    Inventors: Peter Xiu Deng, Joshua Williams
  • Publication number: 20180268580
    Abstract: A display method includes: displaying, in a first screen area, a line segment indicating a passage of time from start to end of a manufacturing process in each process, with time axes being aligned in a same direction between the processes, in a state being segmented for each process in order of execution of the process, for each of one or more products manufactured by a manufacturing system; and displaying, in a second screen area, a graph indicating a passage of time from start to end of a manufacturing process, with time axes being aligned in a same direction as time axes in the first screen area, based on information of a start time and an end time of a manufacturing process in one or more manufacturing facilities included in a selected first process among the processes, for each of the manufacturing facilities in the first process.
    Type: Application
    Filed: May 23, 2018
    Publication date: September 20, 2018
    Applicant: FUJITSU LIMITED
    Inventors: Satoshi Nomamoto, Takehiko Nishimura
  • Publication number: 20180268581
    Abstract: A method for graphically presenting sensor data from wind power installations is provided. A data interface of an apparatus receives sensor data from wind power installations and/or wind farms connected to the data interface. A memory stores the received sensor data. An input interface is used to select at least one current or past time. Furthermore, a display interface is used to output signals for the purpose of presenting geographical maps and sensor values using a display. A plurality of sensor data or all sensor data for the selected time are retrieved from the memory, and the display interface outputs display signals to present the retrieved sensor signals with a map substantially at a position at which the wind power installation from which the respective sensor data were received is positioned. An apparatus for carrying out the method and a system having the apparatus are provided.
    Type: Application
    Filed: May 23, 2018
    Publication date: September 20, 2018
    Inventor: Simon DEMUTH
  • Publication number: 20180268582
    Abstract: In one embodiment, a method for designing an augmented-reality effect may include associating an image with an anchor position that defines a first relative point in the image and a second relative point in a first display region. The image may be associated with a first position offset, which may be used to define a first position of the image relative to the display region based on the first and second relative points. Information associated with the image may be stored in files, which may be configured to cause the image to be displayed at a second position in a second display region. A third relative point in the second display region may be defined by the anchor position. The first position offset may be used to define the second position of the image relative to the second display region based on the first and third relative points.
    Type: Application
    Filed: March 15, 2017
    Publication date: September 20, 2018
    Inventors: Guilherme Schneider, Stef Marc Smet
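The anchor-plus-offset positioning in the abstract above can be sketched as a small resolution step: the anchor names a relative point (0..1) in a display region, and a fixed pixel offset shifts the image from it, so one stored description adapts to differently sized regions. Names and the coordinate convention are invented for illustration:

```python
def place_image(anchor_rel, offset_px, region_size):
    """Resolve an image position from a relative anchor point and a fixed
    pixel offset within a display region of a given size."""
    ax, ay = anchor_rel
    w, h = region_size
    return (ax * w + offset_px[0], ay * h + offset_px[1])

# Same bottom-center anchor and offset, two display regions of different size.
p1 = place_image((0.5, 1.0), (10, -20), (400, 800))
p2 = place_image((0.5, 1.0), (10, -20), (200, 400))
```

Storing the relative point and offset, rather than an absolute position, is what lets the same effect file render correctly in both regions.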
  • Publication number: 20180268583
    Abstract: Provided is a method for generating a single representative image from multi-view images, in which multi-view images captured from two or more viewpoints are combined into a single image using depth information, including identifying visible information from different viewpoints for pixels in each view image using depth information, identifying context of the pixels in all viewpoints using the acquired visible information and the depth information, and combining into a single image, compressing the combined image based on color information, and compressing viewpoint information of pixels in the compressed image.
    Type: Application
    Filed: March 16, 2017
    Publication date: September 20, 2018
    Inventors: Jun Yong Noh, Young Hui Kim, Kye Hyun Kim
  • Publication number: 20180268584
    Abstract: In one respect, there is provided a system that may include a processor and a memory. The memory may be configured to store instructions that result in operations when executed by the processor. The operations may include: processing an image set with a convolutional neural network configured to detect, in the image set, a first feature and a second feature; determining a respective effectiveness of the first feature and the second feature in enabling the convolutional neural network to classify images in the image set; determining, based at least on the respective effectiveness of the first feature and the second feature, a first initial weight for the first feature and a second initial weight for the second feature; and initializing the convolutional neural network for training, the initialization of the convolutional neural network comprising configuring the convolutional neural network to apply the first initial weight and the second initial weight.
    Type: Application
    Filed: March 20, 2017
    Publication date: September 20, 2018
    Inventors: LEONID BOBOVICH, ILIA RUTENBURG, MICHAEL KEMELMAKHER, RAN MOSHE BITTMANN
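One plausible reading of the initialization step above is to scale each feature channel's random starting weights by its measured effectiveness, so stronger discriminators begin training with larger weights. The scaling rule and bound below are assumptions for illustration, not the claimed method:

```python
import random

def effectiveness_init(scores, fan_in, seed=0):
    """Draw bounded random initial weights per feature channel, scaled by
    that channel's effectiveness score relative to the mean score."""
    rng = random.Random(seed)
    limit = (3.0 / fan_in) ** 0.5                 # uniform init bound
    mean_score = sum(scores) / len(scores)
    return [[rng.uniform(-limit, limit) * (s / mean_score)
             for _ in range(fan_in)] for s in scores]

# Feature 0 was measured as far more effective than feature 1.
weights = effectiveness_init(scores=[0.9, 0.1], fan_in=4)
```

By construction, every weight for the weak feature stays inside a much tighter envelope than the weights for the strong feature.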
  • Publication number: 20180268585
    Abstract: The present disclosure relates to a character input method, a character input device, and a wearable device. The character input method includes: acquiring a background image; determining a portion of the background image as a control region; superimposedly displaying a character input interface and the background image; and moving the control region relative to the character input interface to select a particular character in the character input interface.
    Type: Application
    Filed: October 13, 2017
    Publication date: September 20, 2018
    Inventor: Yingjie LI
  • Publication number: 20180268586
    Abstract: Provided are an image synthesis apparatus, an image synthesis method, and a program thereof, capable of determining a combination of a synthesis candidate image and a background image of which impressions match each other. Plural synthesis candidate images are input, and impression values of the plural synthesis candidate images are determined with respect to plural impression axes. Further, plural background images are input, and impression values of the plural background images are determined with respect to the plural impression axes. A combination of a synthesis candidate image and a background image having a small difference between impression values is determined. The synthesis candidate image and the background image of the determined combination are synthesized to generate a synthetic image.
    Type: Application
    Filed: February 8, 2018
    Publication date: September 20, 2018
    Applicant: FUJIFILM Corporation
    Inventor: Hiroyuki FURUYA
  • Publication number: 20180268587
    Abstract: Techniques for dynamic ad hoc generation of customizable image-based files on computing device displays over interactive data networks are described, including detecting an input associated with an image, the input including data associated with one or more attributes of the image, generating an overlay configured to be at least partially transparent when visually rendered over the image, producing a file using the one or more attributes, the file including other data associated with the image, the overlay, and formatting and programmatic instructions configured to visually render the image and the overlay when another input is detected, and detecting the another input associated with placement of a visual rendering of the file and the overlay, the placement being disposed within a display window associated with an application or operating system configured, at least partially, to provide an electronic data communication function between two or more computing devices in data communication with each other in s
    Type: Application
    Filed: April 12, 2018
    Publication date: September 20, 2018
    Inventors: Kyler Blue, Jeffrey Sinckler, Zach Batteer, David McIntosh, Erick Hachenburg, Andrew DeClerck
  • Publication number: 20180268588
    Abstract: An information displaying system according to an embodiment of the present disclosure includes a first display section configured to display a time axis of detected signals along a first direction, a second display section configured to display a plurality of signal waveforms based on the detected signals in parallel so that the signal waveforms are arranged side by side in a second direction different from the first direction, and a controller configured to control the first display section and the second display section. When, in the second display section, a location on at least one of the plurality of the signal waveforms or near the at least one of the plurality of the signal waveforms is designated, the controller highlights the designated location, and displays a designated result on a time location in the first display section corresponding to the designated location.
    Type: Application
    Filed: May 23, 2018
    Publication date: September 20, 2018
    Applicant: Ricoh Company, Ltd.
    Inventors: Michinari SHINOHARA, Yutaka YAGIURA, Daisuke SAKAI
  • Publication number: 20180268589
    Abstract: A computing system and method to implement a three-dimensional virtual reality world with avatar posture animation without user posture tracking devices. A position and orientation of a respective avatar in the virtual reality world is tracked to generate a view of the virtual world for the avatar and to present the avatar to others. In response to input data tracking a position, orientation, and motions of a head of a user of the virtual reality world, the server system uses a posture model to predict, from the input data, a posture of an avatar of the user in the virtual reality world, and computes an animation of the avatar showing the posture of the avatar in the virtual reality world.
    Type: Application
    Filed: March 16, 2017
    Publication date: September 20, 2018
    Inventor: Jeremiah Arthur Grant
  • Publication number: 20180268590
    Abstract: Techniques of animating objects in VR involve applying a motion filter to the object that varies with vertices on an object. Along these lines, a VR computer generates an object for an interactive, three-dimensional game by generating a triangular mesh approximating the object surface and bones including vertices defining motion of the vertices based on motion of an anchor vertex. When a user selects a vertex of the object as an anchor vertex about which to move the object, the VR computer generates variable filters for each bone that restrict the motion of that bone based on the distance of that bone from the anchor vertex. Accordingly, when the user produces a gesture with a controller that defines a path of motion for the anchor vertex, the bone including the anchor vertex goes through an unfiltered motion while bones remote from the anchor vertex go through a more restricted motion.
    Type: Application
    Filed: March 20, 2017
    Publication date: September 20, 2018
    Inventor: Francois Chabot
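The distance-dependent motion filter in the abstract above can be sketched in 2D: the bone containing the anchor vertex (distance 0) follows the gesture fully, and farther bones follow a progressively attenuated copy of the displacement. The falloff formula is an assumption for illustration:

```python
def filtered_motion(bone_distances, displacement, falloff=2.0):
    """Attenuate a gesture displacement per bone based on each bone's
    distance from the anchor vertex; distance 0 passes it through unfiltered."""
    dx, dy = displacement
    moves = []
    for d in bone_distances:
        weight = 1.0 / (1.0 + falloff * d)   # smooth distance-based filter
        moves.append((dx * weight, dy * weight))
    return moves

# Anchor bone, a neighbor, and a remote bone, under a 10-unit drag gesture.
moves = filtered_motion([0.0, 1.0, 3.0], displacement=(10.0, 0.0))
```

The monotone falloff is what makes the object appear to bend naturally around the grabbed point rather than translate rigidly.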
  • Publication number: 20180268591
    Abstract: Disclosed is a real-time motion simulation method for hair and object collisions, which is based on a small amount of pre-computation training data and generates a self-adaptive simplified model for virtual hair styles for real-time selection, interpolation, and collision correction, thereby realizing real-time high-quality motion simulation for hair-object collisions. The method comprises the following steps: 1) reduced model pre-computation: based on pre-computation simulation data, selecting representative hairs and generating a reduced model; 2) real-time animation and interpolation: clustering the representative hairs simulated in real time, selecting the reduced model, and interpolating; and 3) collision correction: detecting collisions and applying a correction force on the representative hairs to correct the collisions. The present invention proposes a real-time simulation method for hair-object collisions, which achieves a similar effect to off-line simulation and reduces the computation time cost.
    Type: Application
    Filed: February 15, 2015
    Publication date: September 20, 2018
    Applicant: Zhejiang University
    Inventors: Kun ZHOU, Menglei CHAI, Changxi ZHENG
  • Publication number: 20180268592
    Abstract: At least one portion of a human or animal body in a two-dimensional image is transformed into a three-dimensional model. An image is acquired that includes the at least one portion of the human or animal body. The at least one portion is identified within the image. Features indicative of the at least one portion of the human or animal body are searched for within the at least one portion. A set of landmarks corresponding to the features is identified. A deformable mask including the set of landmarks is aligned. The deformable mask includes a number of meshes corresponding to the at least one portion of the human or animal body. The 3D model is animated by dividing it into concentric rings and quasi-rings and applying different degrees of rotation to each ring.
    Type: Application
    Filed: February 9, 2018
    Publication date: September 20, 2018
    Inventors: Massimiliano Tarquini, Olivier C. De Keyser, Allessandro Ligi
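The concentric-ring animation step in the abstract above can be sketched in 2D: bucket vertices into rings by distance from the origin and rotate each ring by an angle proportional to its ring index. The ring width and per-ring angle are invented parameters for illustration:

```python
import math

def animate_rings(vertices, ring_width, degrees_per_ring):
    """Assign each 2D vertex to a concentric ring by its distance from the
    origin, then rotate it by an angle that grows with the ring index."""
    rotated = []
    for x, y in vertices:
        ring = int(math.hypot(x, y) // ring_width)
        a = math.radians(degrees_per_ring * ring)
        rotated.append((x * math.cos(a) - y * math.sin(a),
                        x * math.sin(a) + y * math.cos(a)))
    return rotated

# Ring 1 turns 90 degrees, ring 2 turns 180 degrees.
pts = animate_rings([(1.0, 0.0), (2.5, 0.0)], ring_width=1.0, degrees_per_ring=90)
```

Applying different rotations per ring is what produces a twisting deformation instead of a rigid rotation of the whole model.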
  • Publication number: 20180268593
    Abstract: A method for creating a computer simulation of a crowd. An apparatus for creating a computer simulation of an actor. A method for creating a computer simulation of an actor.
    Type: Application
    Filed: May 17, 2018
    Publication date: September 20, 2018
    Inventor: Kenneth Perlin
  • Publication number: 20180268594
    Abstract: A vehicular display device includes an image display device disposed on a position ahead of a driver in a vehicle and displays images, a frame member disposed on a driver side of the image display device and surrounds a part of an image display region on the image display device, and a drive device that moves the frame member relative to the image display device along the image display region. The image display device displays a predetermined image that converges from the frame member toward a convergence point set in advance in the image display region, and, when the drive device moves the frame member, the image display device performs animation display of deforming the predetermined image so as to follow the movement of the frame member while keeping the convergence point fixed.
    Type: Application
    Filed: February 27, 2018
    Publication date: September 20, 2018
    Applicant: Yazaki Corporation
    Inventors: Kazumasa SHOJI, Takayuki ONO
  • Publication number: 20180268595
    Abstract: A system and method for generating cartoon images from photos are described. The method includes receiving an image of a user, determining a template for a cartoon avatar, determining an attribute needed for the template, processing the image with a classifier trained for classifying the attribute included in the image, determining a label generated by the classifier for the attribute, determining a cartoon asset for the attribute based on the label, and rendering the cartoon avatar personifying the user using the cartoon asset.
    Type: Application
    Filed: March 14, 2018
    Publication date: September 20, 2018
    Inventors: Aaron Sarna, Dilip Krishnan, Forrester Cole, Inbar Mosseri
  • Publication number: 20180268596
    Abstract: Aspects described herein may provide improved display of a fluid surface in a virtual environment. Adjusted pixel colors for pixels in the display may be computer based on surface shadow components and volumetric shadow components. A surface shadow component may be based on a shadow cast by objects onto the fluid surface. A volumetric shadow component may be based on a shadow cast by objects within a body of fluid. Visual effects may be adjusted based on the surface shadow components and volumetric shadow components, and an adjusted pixel color may be determined based on the adjusted visual effects.
    Type: Application
    Filed: May 17, 2018
    Publication date: September 20, 2018
    Inventor: Yury Kryachko
  • Publication number: 20180268597
    Abstract: Embodiments provide for a graphics processing apparatus comprising render logic to detect rendering operations that will result in the framebuffer having the same data as the initial clear color value, and to morph such rendering operations into the optimizations that are typically done for initial clearing of the framebuffer.
    Type: Application
    Filed: May 24, 2018
    Publication date: September 20, 2018
    Inventors: Bimal Poddar, Prasoonkumar Surti, Rahul P. Sathe
  • Publication number: 20180268598
    Abstract: A data processing device applies pattern data to surface data on the basis of projection data, subdividing the surface data into voxels and determining the optimum projection data for each voxel according to its surroundings.
    Type: Application
    Filed: June 28, 2016
    Publication date: September 20, 2018
    Inventors: Sylvain LEFEBVRE, Jeremie DUMAS, An LU
  • Publication number: 20180268599
    Abstract: Systems and methods of geometry processing, for rasterization and ray tracing processes provide for pre-processing of source geometry, such as by tessellating or other procedural modification of source geometry, to produce final geometry on which a rendering will be based. An acceleration structure (or portion thereof) for use during ray tracing is defined based on the final geometry. Only coarse-grained elements of the acceleration structure may be produced or retained, and a fine-grained structure within a particular coarse-grained element may be produced in response to a collection of rays being ready for traversal within the coarse-grained element. Final geometry can be recreated in response to demand from a rasterization engine, and from ray intersection units that require such geometry for intersection testing with primitives. Geometry at different resolutions can be generated to respond to demands from different rendering components.
    Type: Application
    Filed: May 21, 2018
    Publication date: September 20, 2018
    Inventors: John W. Howson, Luke T. Peterson
  • Publication number: 20180268600
    Abstract: Systems and methods of geometry processing, for rasterization and ray tracing processes provide for pre-processing of source geometry, such as by tessellating or other procedural modification of source geometry, to produce final geometry on which a rendering will be based. An acceleration structure (or portion thereof) for use during ray tracing is defined based on the final geometry. Only coarse-grained elements of the acceleration structure may be produced or retained, and a fine-grained structure within a particular coarse-grained element may be produced in response to a collection of rays being ready for traversal within the coarse-grained element. Final geometry can be recreated in response to demand from a rasterization engine, and from ray intersection units that require such geometry for intersection testing with primitives. Geometry at different resolutions can be generated to respond to demands from different rendering components.
    Type: Application
    Filed: May 21, 2018
    Publication date: September 20, 2018
    Inventors: John W. Howson, Luke T. Peterson
  • Publication number: 20180268601
    Abstract: The present disclosure describes methods, apparatuses, and non-transitory computer-readable mediums for estimating a three-dimensional (“3D”) pose of an object from a two-dimensional (“2D”) input image which contains the object. Particularly, certain aspects of the disclosure are concerned with 3D pose estimation of a symmetric or nearly-symmetric object. An image or a patch of an image includes the object. A classifier is used to determine whether a rotation angle of the object in the image or the patch of the image is within a first predetermined range. In response to a determination that the rotation angle is within the first predetermined range, a mirror image of the object is determined. Two-dimensional (2D) projections of a three-dimensional (3D) bounding box of the object are determined by applying a trained regressor to the mirror image of the object in the image or the patch of the image. The 3D pose of the object is estimated based on the 2D projections.
    Type: Application
    Filed: August 31, 2017
    Publication date: September 20, 2018
    Inventors: Mahdi Rad, Markus Oberweger, Vincent Lepetit
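The classify-mirror-regress flow above can be illustrated with a small sketch. Everything here is an assumption for demonstration: the angle threshold, the `mirror_patch` helper, and the stub regressor stand in for the trained networks the abstract refers to.

```python
# Illustrative sketch of handling a symmetric object: if the rotation falls
# in the ambiguous range, mirror the patch before regressing the 2D
# projections of the 3D bounding box.

def in_restricted_range(angle_deg, lo=180.0, hi=360.0):
    """Stand-in classifier: is the object's rotation in the ambiguous half?"""
    return lo <= angle_deg % 360.0 < hi

def mirror_patch(patch):
    """Flip an image patch (list of pixel rows) left-to-right."""
    return [row[::-1] for row in patch]

def estimate_bbox_projections(patch, angle_deg, regressor):
    # Map ambiguous poses onto the canonical half via mirroring, then
    # regress the 2D projections of the bounding-box corners.
    if in_restricted_range(angle_deg):
        patch = mirror_patch(patch)
    return regressor(patch)
```

The mirroring collapses the two visually indistinguishable halves of a symmetric object's rotation range onto one half, so the regressor only ever sees canonical poses.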
  • Publication number: 20180268602
    Abstract: Graphics processing systems can include lighting effects when rendering images. "Light probes" are directional representations of lighting at particular probe positions in the space of a scene which is being rendered. Light probes can be determined iteratively, which can allow them to be determined dynamically, in real-time over a sequence of frames. Once the light probes have been determined for a frame, the lighting at a pixel can be determined based on the lighting at the nearby light probe positions. Pixels can then be shaded based on the lighting determined for the pixel positions.
    Type: Application
    Filed: May 23, 2018
    Publication date: September 20, 2018
    Inventors: Jens Fursund, Luke T. Peterson
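The two ideas in the abstract, iterative probe refinement across frames and per-pixel lighting interpolated from nearby probes, can be sketched as follows. The blend factor and the inverse-distance weighting are assumptions chosen for illustration, and positions are 1D scalars for brevity.

```python
# Minimal sketch: (1) refine a probe's lighting iteratively over frames,
# (2) blend nearby probes to get lighting at a pixel position.

def refine_probe(previous, sampled, blend=0.25):
    """One iterative update: blend freshly sampled lighting into the probe."""
    return previous + blend * (sampled - previous)

def pixel_lighting(pixel_pos, probes):
    """Inverse-distance-weighted blend of nearby probes.
    probes: list of (position, lighting_value) pairs."""
    weights, total = [], 0.0
    for pos, _ in probes:
        d = abs(pixel_pos - pos)
        w = 1e6 if d == 0 else 1.0 / d   # clamp weight at the probe itself
        weights.append(w)
        total += w
    return sum(w * light for w, (_, light) in zip(weights, probes)) / total
```

Spreading the refinement over frames is what makes the probes cheap enough to update dynamically: each frame only nudges the stored value toward the newly sampled lighting.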
  • Publication number: 20180268603
    Abstract: Graphics processing systems can include lighting effects when rendering images. "Light probes" are directional representations of lighting at particular probe positions in the space of a scene which is being rendered. Light probes can be determined iteratively, which can allow them to be determined dynamically, in real-time over a sequence of frames. Once the light probes have been determined for a frame, the lighting at a pixel can be determined based on the lighting at the nearby light probe positions. Pixels can then be shaded based on the lighting determined for the pixel positions.
    Type: Application
    Filed: May 23, 2018
    Publication date: September 20, 2018
    Inventors: Jens Fursund, Luke T. Peterson
  • Publication number: 20180268604
    Abstract: Systems and methods that facilitate efficient and effective shadow image generation are presented. In one embodiment, a hard shadow generation system comprises a compute shader, pixel shader and graphics shader. The compute shader is configured to retrieve pixel depth information and generate projection matrix information, wherein the generating includes performing dynamic re-projection from eye-space to light space utilizing the pixel depth information. The pixel shader is configured to create light space visibility information. The graphics shader is configured to perform frustum trace operations to produce hard shadow information, wherein the frustum trace operations utilize the light space visibility information. The light space visibility information can be considered irregular z information stored in an irregular z-buffer.
    Type: Application
    Filed: March 15, 2018
    Publication date: September 20, 2018
    Inventor: Jon Story
  • Publication number: 20180268605
    Abstract: Methods and systems for space situational awareness (SSA) demonstration are provided.
    Type: Application
    Filed: May 4, 2018
    Publication date: September 20, 2018
    Inventors: BIN JIA, SIXIAO WEI, ZHIJIANG CHEN, GENSHE CHEN, KHANH PHAM, ERIK BLASCH
  • Publication number: 20180268606
    Abstract: A server device and a method for the server device to build a model object are described. The server device includes interface circuitry and processing circuitry. The interface circuitry is configured to receive an instruction to build a model object at the server device. The instruction is sent by a client device (e.g., user equipment) that requests services from the server device in a three-dimensional (3D) application. The client device builds the model object in a first form according to first modeling data. The processing circuitry of the server device is configured to determine second modeling data of the model object according to the instruction, and build the model object in a second form according to the second modeling data. The model object of the second form is then used in the 3D application at the server device.
    Type: Application
    Filed: May 21, 2018
    Publication date: September 20, 2018
    Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xichang MO, Bailin AN
  • Publication number: 20180268607
    Abstract: A system and method for data manipulation based on real world object manipulation is described. A computing device captures an image of a physical object. The image is communicated via a network to a remote server. The remote server includes virtual object data associated with the image and a communication notification for a user of the computing device. The device receives the virtual object data and displays a virtual image in a virtual landscape using the virtual object data. In response to relative movement between the computing device and the physical object caused by the user, the virtual image is modified.
    Type: Application
    Filed: March 15, 2017
    Publication date: September 20, 2018
    Inventor: Brian Mullins
  • Publication number: 20180268608
    Abstract: In one embodiment, a method for designing an augmented-reality effect may include associating, by a computing device, a first visual object with a first rendering order specified by a user. A second visual object may be associated with a second rendering order specified by the user. The first and second visual objects may be defined in a three-dimensional space. Information associated with the first visual object, the first rendering order, the second visual object, and the second rendering order may be stored in one or more files. The one or more files may be configured to cause the first visual object and the second visual object to be rendered sequentially in an order determined based on the first rendering order and the second rendering order. The first visual object and the second visual object may be rendered to generate a scene in the three-dimensional space.
    Type: Application
    Filed: March 15, 2017
    Publication date: September 20, 2018
    Inventors: Guilherme Schneider, Stef Marc Smet, Siarhei Hanchar
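The stored rendering orders described above drive a simple sequential render. The sketch below is a hypothetical reading of that flow: the `(object_name, rendering_order)` record layout and the injected `render` callable are assumptions, not the patent's file format.

```python
# Sketch: objects read from the stored files are rendered sequentially in
# the order determined by their user-specified rendering orders.

def load_scene(records):
    """records: list of (object_name, rendering_order) as read from the files."""
    return sorted(records, key=lambda r: r[1])

def render_scene(records, render):
    """Render each object in turn, lowest rendering order first."""
    return [render(name) for name, _ in load_scene(records)]
```

Because the sort key is the stored rendering order rather than scene-graph position, the designer controls compositing (e.g. a face mesh drawn before the glasses layered on it) without touching the objects' 3D definitions.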
  • Publication number: 20180268609
    Abstract: In one embodiment, a method for designing an augmented-reality effect may include displaying, by a computing device, a video within a user interface. The video may comprise an object, such as a person's face. The object may be associated with a tracker in response to a first instruction from a user. The tracker may be displayed in the video and may be configured to move according to movements of the object. An augmented-reality object may be associated with the tracker in response to a second instruction from the user. The augmented-reality object may be displayed in the video and may be configured to move according to movements of the tracker. Then, one or more defined relationships between the tracker and the augmented-reality object may be stored in one or more files.
    Type: Application
    Filed: March 15, 2017
    Publication date: September 20, 2018
    Inventors: Guilherme Schneider, Stef Marc Smet
  • Publication number: 20180268610
    Abstract: A computer implemented method for warping virtual content from two sources includes a first source generating first virtual content based on a first pose. The method also includes a second source generating second virtual content based on a second pose. The method further includes a compositor processing the first and second virtual content in a single pass. Processing the first and second virtual content includes generating warped first virtual content by warping the first virtual content based on a third pose, generating warped second virtual content by warping the second virtual content based on the third pose, and generating output content by compositing the warped first and second virtual content.
    Type: Application
    Filed: March 16, 2018
    Publication date: September 20, 2018
    Applicant: MAGIC LEAP, INC.
    Inventors: Reza NOURAI, Robert Blake TAYLOR
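The single-pass warp-and-composite above can be illustrated with a deliberately simplified model: here "poses" are 1D offsets and "content" maps pixel positions to colors, which are assumptions for brevity; a real compositor warps with full 3D transforms.

```python
# Simplified sketch: warp two virtual content layers, each rendered at its
# own source pose, into a shared third pose, then composite in one pass
# with the second layer drawn over the first.

def warp(content, source_pose, target_pose):
    """Re-project content rendered at source_pose into target_pose's frame."""
    shift = target_pose - source_pose
    return {x + shift: color for x, color in content.items()}

def composite_single_pass(first, pose1, second, pose2, pose3):
    # Both warps and the composite happen in one traversal of the data:
    # warp each layer to pose3, second layer overwrites where it overlaps.
    out = dict(warp(first, pose1, pose3))
    out.update(warp(second, pose2, pose3))
    return out
```

Warping both layers to the same third pose before compositing is what lets the compositor handle content from two differently-posed sources in a single pass.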
  • Publication number: 20180268611
    Abstract: Disclosed is an improved approach for generating recordings from augmented reality systems from the perspective of a camera within the system. Instead of re-using rendered virtual content from the perspective of the user's eyes for AR recordings, additional virtual content is rendered from an additional perspective specifically for the AR recording. That additional virtual content is combined with image frames generated by a camera to form the AR recording.
    Type: Application
    Filed: March 16, 2018
    Publication date: September 20, 2018
    Inventors: Reza Nourai, Michael Harold Liebenow, Robert Blake Taylor, Robert Wyatt
  • Publication number: 20180268612
    Abstract: Systems and methods of simulating first-person control of remote-controlled vehicles are described herein. The system may include one or more of a remote-controlled (RC) vehicle, a display interface, an input interface, and/or other components. The RC vehicle may have an image capturing device configured to capture in-flight images. View information representing the captured images may be presented on a display worn by and/or otherwise accessible to the user. The input interface may allow the user to provide control inputs for dictating a path of the RC vehicle. Augmented reality graphics may be overlaid on the view information presented to the user to facilitate gameplay and/or otherwise enhance the user's experience.
    Type: Application
    Filed: May 15, 2018
    Publication date: September 20, 2018
    Inventors: Joseph Logan Olson, Michael P. Goslin, Clifford Wong, Timothy Panec
  • Publication number: 20180268613
    Abstract: A system, apparatus, device, or method to output different iterations of data entities. The method may include establishing a first data entity and establishing a first state for the first data entity. The method may include establishing a second state for the first data entity. The method may include storing the first data entity, the first state, and the second state at a storage device. The method may include retrieving a first iteration of the first data entity exhibiting at least a portion of the first state. The method may include retrieving a second iteration of the first data entity exhibiting at least a portion of the second state. The method may include outputting the first iteration and the second iteration at an output time.
    Type: Application
    Filed: May 16, 2018
    Publication date: September 20, 2018
    Inventors: Sina Fateh, Ron Butterworth, Mohamed Nabil Hajj Chehade, Allen Yang Yang, Sleiman Itani
  • Publication number: 20180268614
    Abstract: A method includes presenting a computer-aided design (CAD) model via a graphical-user-interface (GUI) on a display, in a first orientation view; presenting one or more PMI objects oriented towards a first normal vector associated with the first orientation view, having a first orientation; identifying a second orientation view of the CAD model; calculating a second normal vector associated with the second orientation view; identifying the one or more PMI objects oriented towards the first normal vector; and orienting the identified one or more PMI objects towards the second normal vector, such that the identified one or more PMI objects are aligned in the second orientation view, having a second orientation.
    Type: Application
    Filed: March 16, 2017
    Publication date: September 20, 2018
    Inventors: Jason Anton Byers, Miller Glenn Byrd, Brian Christopher Wheeler
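The re-orientation step described above, pointing PMI annotation objects at the new view's normal vector so they remain readable, can be sketched with plain tuple math. The PMI record structure shown here is an assumption for illustration.

```python
# Hypothetical sketch: when the CAD view changes, recompute the view's
# normal vector and align every PMI annotation object with it.

def normalize(v):
    """Scale a 3D vector to unit length."""
    n = sum(c * c for c in v) ** 0.5
    return tuple(c / n for c in v)

def reorient_pmi(pmi_objects, view_normal):
    """Point every PMI object toward the (normalized) second view normal."""
    target = normalize(view_normal)
    return [{**obj, "orientation": target} for obj in pmi_objects]
```

Only the orientation field changes; the annotation content (dimensions, tolerances) is carried through untouched, which matches the abstract's framing of the PMI objects as the same objects re-aligned in the second orientation view.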
  • Publication number: 20180268615
    Abstract: In one embodiment, a method for designing an augmented-reality effect may include receiving a model definition of a virtual object. The virtual object may be rendered in a 3D space based on the model definition. The system may display the virtual object in the 3D space from a first perspective in a first display area of a user interface. The system may display the virtual object in the 3D space from a second perspective, different from the first, in a second display area of the user interface. The system may receive a user command input by a user through the first display area for adjusting the virtual object. The virtual object may be adjusted according to the user command. The system may display the adjusted virtual object in the 3D space from the first perspective in the first display area and from the second perspective in the second display area.
    Type: Application
    Filed: June 6, 2017
    Publication date: September 20, 2018
    Inventors: Stef Marc Smet, Dolapo Omobola Falola, Michael Slater, Samantha P. Krug, Volodymyr Giginiak, Hannes Luc Herman Verlinde, Sergei Viktorovich Anpilov, Danil Gontovnik, Yu Hang Ng, Siarhei Hanchar, Milen Georgiev Dzhumerov
  • Publication number: 20180268616
    Abstract: A method of generating 3D printing data performed by an apparatus for generating 3D printing data includes generating a 3D model of an object; generating a surface height map from a texture image indicating a surface texture of the object; setting an area in which the surface height map is projected on a surface of the 3D model; slicing the 3D model into a plurality of cross-section segments; and correcting a shape of at least a portion among the cross-section segments in consideration of the area in which the surface height map is projected on the 3D model.
    Type: Application
    Filed: March 13, 2018
    Publication date: September 20, 2018
    Inventors: Yoon Seok CHOI, Seung Woo NAM, Soon Chul JUNG, In Su JANG, Jin Seo KIM
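The final correction step, adjusting sliced cross-sections only where the height map's projected area covers them, can be sketched in a toy form. The data layout (a cross-section as a list of boundary offsets, the projected area as an index range) is an assumption chosen for illustration.

```python
# Toy sketch of the slice-correction step: surface height-map samples are
# added to a cross-section's boundary offsets, but only inside the area
# where the height map is projected onto the model surface.

def correct_segment(offsets, height_map, projected_range):
    """Displace the boundary offsets covered by the projected height map.
    projected_range: (start_index, end_index) of the covered offsets."""
    lo, hi = projected_range
    return [
        off + height_map[i - lo] if lo <= i < hi else off
        for i, off in enumerate(offsets)
    ]

def correct_model(segments, height_map, projected_range):
    # Apply the same correction to every sliced cross-section segment.
    return [correct_segment(s, height_map, projected_range) for s in segments]
```

Restricting the displacement to the projected range is what keeps the texture local: offsets outside the mapped area pass through unchanged, so the rest of the printed surface keeps the plain 3D model's shape.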
  • Publication number: 20180268617
    Abstract: Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: obtaining user information of a vehicle driver user, the vehicle driver user being a user of a computer based system for managing a parking area; processing information of the user information; and outputting a communication to control an indicator system based on the processing, wherein the indicator system is provided as a fixture of the parking area and wherein the indicator system is configured to provide indications viewable by vehicle drivers driving within the parking area.
    Type: Application
    Filed: March 20, 2017
    Publication date: September 20, 2018
    Inventors: Edwin J. BRUCE, Romelia H. FLORES