Hidden Line/Surface Determining Patents (Class 345/421)
  • Patent number: 10297077
    Abstract: Systems and methods for implementing hidden mesh (or stencil mesh) graphics rendering techniques for use in applications such as head mounted displays (“HMDs”) are described. Exemplary systems and algorithms are disclosed for masking or eliminating pixels in an image from the list of pixels to be rendered, based on the observation that a significant number of pixels in the periphery of HMD images cannot be seen, due to the specific details of the optical and display/electronics performance of each particular implementation.
    Type: Grant
    Filed: February 27, 2017
    Date of Patent: May 21, 2019
    Assignee: Valve Corporation
    Inventor: Alex Vlachos
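    The masking step described in the abstract above lends itself to a short illustration. Below is a minimal Python sketch of building a hidden-area mask that flags peripheral pixels so a renderer can skip them; the circular lens footprint, the 95% radius, and all names are assumptions for illustration, not details taken from the patent.

        import numpy as np

        def hidden_area_mask(width, height, visible_radius_frac=0.95):
            """Return a boolean mask; True marks pixels that can never be seen
            through the HMD optics and may be skipped (or stenciled out)."""
            ys, xs = np.mgrid[0:height, 0:width]
            # Normalized distance from the lens center.
            cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
            r = np.hypot((xs - cx) / cx, (ys - cy) / cy)
            return r > visible_radius_frac          # periphery -> hidden

        mask = hidden_area_mask(1080, 1200)
        print("pixels culled:", int(mask.sum()), "of", mask.size)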
  • Patent number: 10290145
    Abstract: Volume rendering is performed by a method comprising: obtaining original volume data; transforming the original volume data, based on a distance from a viewpoint to the original volume data, to generate transformed volume data; generating particles from the transformed volume data; and projecting the particles onto an image plane to obtain a 2D image corresponding to the original volume data.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: May 14, 2019
    Assignee: International Business Machines Corporation
    Inventors: Yasunori Yamada, Kun Zhao
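    A toy Python sketch of the pipeline in the abstract above: the volume is resampled according to its distance from the viewpoint, surviving voxels become particles, and the particles are splatted onto a 2D image plane. The coarsening heuristic, the orthographic projection, and all names are illustrative assumptions, not taken from the patent.

        import numpy as np

        def render_particles(volume, viewpoint, image_size=64):
            """Transform the volume based on viewpoint distance, generate
            particles, and project them onto an image plane (along +z)."""
            center = (np.array(volume.shape) - 1) / 2.0
            distance = np.linalg.norm(center - viewpoint)
            # Transform step: farther volumes get coarser sampling (assumed heuristic).
            step = max(1, int(distance // 32))
            coarse = volume[::step, ::step, ::step]

            # Particle generation: one particle per voxel above a density threshold.
            zs, ys, xs = np.nonzero(coarse > 0.5)

            # Projection: drop z, accumulate particle weights on the image plane.
            image = np.zeros((image_size, image_size))
            u = (xs * step * (image_size - 1)) // max(volume.shape[2] - 1, 1)
            v = (ys * step * (image_size - 1)) // max(volume.shape[1] - 1, 1)
            np.add.at(image, (v, u), coarse[zs, ys, xs])
            return image

        vol = (np.random.rand(64, 64, 64) > 0.8).astype(float)
        img = render_particles(vol, viewpoint=np.array([200.0, 32.0, 32.0]))
        print(img.shape, img.max())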
  • Patent number: 10282892
    Abstract: Volume rendering is performed by a method comprising: obtaining original volume data; transforming the original volume data, based on a distance from a viewpoint to the original volume data, to generate transformed volume data; generating particles from the transformed volume data; and projecting the particles onto an image plane to obtain a 2D image corresponding to the original volume data.
    Type: Grant
    Filed: February 7, 2018
    Date of Patent: May 7, 2019
    Assignee: International Business Machines Corporation
    Inventors: Yasunori Yamada, Kun Zhao
  • Patent number: 10282952
    Abstract: An archival video system uses profile images as a background for an image and delta images to indicate the difference between a current image and a profile image. An image may be segmented into multiple sectors, with each sector compared to a profile sector. The resulting image may be constructed using references to previously stored sectors from different images.
    Type: Grant
    Filed: June 28, 2016
    Date of Patent: May 7, 2019
    Assignee: TROVER GROUP INC.
    Inventors: Charles W. Dozier, Thomas W. Mitchell
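    The profile/delta scheme in the abstract above can be sketched briefly: split each frame into sectors, keep only the sectors that differ from the profile, and reconstruct by applying those deltas over the profile. The grid size, threshold, and names below are assumptions for illustration.

        import numpy as np

        def encode_frame(frame, profile, grid=(4, 4), threshold=8.0):
            """Split the frame into sectors, compare each sector with the matching
            profile sector, and keep only the sectors that changed (as deltas)."""
            h, w = frame.shape
            sh, sw = h // grid[0], w // grid[1]
            deltas = {}
            for r in range(grid[0]):
                for c in range(grid[1]):
                    sl = np.s_[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw]
                    diff = frame[sl].astype(np.int16) - profile[sl].astype(np.int16)
                    if np.abs(diff).mean() > threshold:   # sector changed
                        deltas[(r, c)] = diff             # store delta only
            return deltas                                 # unchanged sectors -> profile reference

        def decode_frame(deltas, profile, grid=(4, 4)):
            h, w = profile.shape
            sh, sw = h // grid[0], w // grid[1]
            out = profile.astype(np.int16)
            for (r, c), diff in deltas.items():
                out[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw] += diff
            return out.astype(np.uint8)

        profile = np.zeros((64, 64), np.uint8)
        frame = profile.copy(); frame[0:16, 0:16] = 200   # motion in one sector
        deltas = encode_frame(frame, profile)
        assert np.array_equal(decode_frame(deltas, profile), frame)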
  • Patent number: 10275942
    Abstract: A computer-implemented method for compressing a three-dimensional modeled object is proposed. The method comprises: providing a mesh of the three-dimensional modeled object; parameterizing (u,v) the mesh in a two-dimensional plane, the parameterization of the mesh resulting in a set of vertices having two-dimensional coordinates; providing a grid on the two-dimensional plane; and modifying the two-dimensional coordinates of each vertex by assigning one vertex to one intersection of the grid. Such a compression method is lossless, completely reversible, and suitable for efficiently reducing the storage size of a CAD file.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: April 30, 2019
    Assignee: Dassault Systemes
    Inventor: Jean Julien Tuffreau
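    The grid-assignment step in the abstract above amounts to quantizing the parameterized (u,v) coordinates onto grid intersections. The sketch below shows only that quantization idea (the patent additionally assigns one vertex per intersection to keep the scheme reversible); the grid size and names are assumptions.

        import numpy as np

        def snap_uv_to_grid(uv, grid_size=1024):
            """Quantize parameterized (u, v) vertex coordinates, assumed to lie in
            [0, 1] x [0, 1], to the nearest grid intersection.  Storing the integer
            grid indices instead of floats is the compression step; decompression
            divides by (grid_size - 1)."""
            return np.rint(uv * (grid_size - 1)).astype(np.uint16)   # 2 bytes per coordinate

        def grid_to_uv(indices, grid_size=1024):
            return indices.astype(np.float64) / (grid_size - 1)

        uv = np.random.rand(1000, 2)                 # parameterized mesh vertices
        idx = snap_uv_to_grid(uv)
        restored = grid_to_uv(idx)
        print("max snapping error:", np.abs(restored - uv).max())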
  • Patent number: 10262460
    Abstract: Three dimensional (3D) operational user interface generation systems and methods are described herein. For example, one or more embodiments include calculating a number of locations for a virtual camera, placing the virtual camera at each of the number of locations, generating a 3D image at each of the number of locations, implementing object information for each pixel of the 3D image, and generating a 3D user interface utilizing the 3D image at each of the number of locations.
    Type: Grant
    Filed: September 10, 2013
    Date of Patent: April 16, 2019
    Assignee: Honeywell International Inc.
    Inventors: Henry Chen, Jian Geng Du, Yan Xia, Liana M. Kiff
  • Patent number: 10262231
    Abstract: Provided is a method of spatially referencing a plurality of images captured from a plurality of different locations within an indoor space by determining the location from which the plurality of images was captured. The method may include obtaining a plurality of distance-referenced panoramas of an indoor space. The distance-referenced panoramas may each include a plurality of distance-referenced images each captured from one position in the indoor space and at a different azimuth from the other distance-referenced images, a plurality of distance measurements, and orientation indicators each indicative of the azimuth of the corresponding one of the distance-referenced images. The method may further include determining the location of each of the distance-referenced panoramas based on the plurality of distance measurements and the orientation indicators and associating in memory the determined locations with the plurality of distance-referenced images captured from the determined location.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: April 16, 2019
    Assignee: Google LLC
    Inventors: Alexander Thomas Starns, Arjun Raman, Gadi Royz
  • Patent number: 10257587
    Abstract: A sparse streaming system provides a first-class means for sparse metadata to be added to streaming media presentations and to be delivered using an integrated data channel that is cacheable using readily available HTTP-based Internet caching infrastructure for increased scalability. The sparse streaming system stores a reference to a sparse track within a continuous track. If a continuous fragment arrives at the client that refers to a sparse fragment that the client has not yet retrieved, then the client requests the sparse fragment. In addition, each sparse fragment may include a backwards reference to the sparse fragment created immediately prior. The references in the continuous fragments make the client aware of new sparse track fragments, and the backwards references in the sparse track fragments ensure that the client has not missed any intervening sparse track fragments.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: April 9, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: John A. Bocharov, Geqiang (Sam) Zhang, Krishna Prakash Duggaraju, Lin Liu, Anirban Roy, Jack E. Freelander, Vishal Sood
  • Patent number: 10242485
    Abstract: An apparatus, computer readable medium, and method are disclosed for performing an intersection query between a query beam and a target bounding volume. The target bounding volume may comprise an axis-aligned bounding box (AABB) associated with a bounding volume hierarchy (BVH) tree. An intersection query comprising beam information associated with the query beam and slab boundary information for a first dimension of a target bounding volume is received. Intersection parameter values are calculated for the first dimension based on the beam information and the slab boundary information and a slab intersection case is determined for the first dimension based on the beam information. A parametric variable range for the first dimension is assigned based on the slab intersection case and the intersection parameter values and it is determined whether the query beam intersects the target bounding volume based on at least the parametric variable range for the first dimension.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: March 26, 2019
    Assignee: NVIDIA CORPORATION
    Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine, John Erik Lindholm
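    The per-dimension slab logic in the abstract above generalizes the classic ray/AABB slab test: compute a parametric range per dimension and keep the intersection of those ranges. The sketch below shows the single-ray baseline of that test, not the patent's beam variant; all names are illustrative.

        def slab_interval(origin, inv_dir, lo, hi):
            """Parametric range [tmin, tmax] over which a ray overlaps one slab
            (one axis of an axis-aligned bounding box)."""
            t0 = (lo - origin) * inv_dir
            t1 = (hi - origin) * inv_dir
            return (min(t0, t1), max(t0, t1))

        def ray_hits_aabb(origin, direction, box_min, box_max):
            """Intersect the per-dimension parametric ranges; the ray hits the box
            only if the combined range stays non-empty."""
            tmin, tmax = 0.0, float("inf")
            for o, d, lo, hi in zip(origin, direction, box_min, box_max):
                if d == 0.0:
                    if not (lo <= o <= hi):
                        return False        # parallel to the slab and outside it
                    continue
                a, b = slab_interval(o, 1.0 / d, lo, hi)
                tmin, tmax = max(tmin, a), min(tmax, b)
                if tmin > tmax:
                    return False
            return True

        print(ray_hits_aabb((0, 0, 0), (1, 1, 1), (2, 2, 2), (4, 4, 4)))   # True
        print(ray_hits_aabb((0, 0, 0), (1, 0, 0), (2, 2, 2), (4, 4, 4)))   # False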
  • Patent number: 10242493
    Abstract: An apparatus and method for performing coarse pixel shading (CPS). For example, one embodiment of a method for coarse pixel shading comprises: pre-processing a graphics mesh by creating a tangent-plane parameterization of desired vertex attributes for each vertex of the mesh; and performing rasterization of the mesh in a rasterization stage of a graphics pipeline using the tangent-plane parameterization.
    Type: Grant
    Filed: June 30, 2014
    Date of Patent: March 26, 2019
    Assignee: Intel Corporation
    Inventors: Gabor Liktor, Marco Salvi, Karthik Vaidyanathan
  • Patent number: 10217276
    Abstract: Systems and methods of creating a 3 Dimensional (3D) model of an object suitable for 3D printing are described. A method comprises defining an initial cuboid of edge lengths Lx, Ly, Lz for creating the 3D model, wherein the initial cuboid implicitly includes inner cuboids such that, starting from the initial cuboid, each cuboid is recursively splittable into eight identical inner cuboids. Further, the method comprises iteratively receiving an input specifying a size of an inner cuboid to be modified and a selection of a point on the screen. Based on the received user input, at least one inner cuboid which is to be modified is identified. Once the inner cuboid to be modified is identified, it may be modified by marking the at least one inner cuboid as filled or empty.
    Type: Grant
    Filed: April 16, 2015
    Date of Patent: February 26, 2019
    Assignee: 3D SLASH
    Inventor: Sylvain Huet
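    The recursive eight-way split described above is essentially an octree over cuboids. A minimal Python sketch of marking an inner cuboid as filled or empty follows; the path-based addressing and all names are assumptions for illustration.

        class Cuboid:
            """Node of the recursively splittable cuboid hierarchy: a cuboid is
            either uniformly 'filled', uniformly 'empty', or split into eight
            identical children (an octree-style structure)."""
            def __init__(self, filled=False):
                self.filled = filled
                self.children = None        # None while the cuboid is a leaf

            def split(self):
                if self.children is None:
                    self.children = [Cuboid(self.filled) for _ in range(8)]

            def mark(self, path, filled):
                """Mark the inner cuboid reached by 'path' (a list of child indices
                0..7, one per recursion level) as filled or empty."""
                if not path:
                    self.filled = filled
                    self.children = None    # the whole subtree becomes uniform
                    return
                self.split()
                self.children[path[0]].mark(path[1:], filled)

        model = Cuboid(filled=True)             # initial cuboid of size Lx, Ly, Lz
        model.mark([0, 3], filled=False)        # carve out a small inner cuboid
        print(model.children[0].children[3].filled)   # False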
  • Patent number: 10217267
    Abstract: Systems and methods for producing an acceleration structure provide for subdividing a 3-D scene into a plurality of volumetric portions, which have different sizes, each being addressable using a multipart address indicating a location and a relative size of each volumetric portion. A stream of primitives is processed by characterizing each according to one or more criteria, selecting a relative size of volumetric portions for use in bounding the primitive, and finding a set of volumetric portions of that relative size which bound the primitive. A primitive ID is stored in each location of a cache associated with each volumetric portion of the set of volumetric portions. A cache location is selected for eviction, responsive to each cache eviction decision made during the processing. An element of an acceleration structure is then generated according to the contents of the evicted cache location.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: February 26, 2019
    Assignee: Imagination Technologies Limited
    Inventors: James A McCombe, Aaron Dwyer, Luke T Peterson, Neils Nesse
  • Patent number: 10204436
    Abstract: A method for authoring and displaying a virtual tour of a three-dimensional space which employs transitional effects simulating motion. An authoring tool is provided for interactively defining a series of locations in the space for which two-dimensional images, e.g., panoramas, photographs, etc., are available. A user identifies one or more view directions for a first-person perspective viewer for each location. For pairs of locations in the series, transitional effects are identified to simulate smooth motion between the pair of locations. The authoring tool stores data corresponding to the locations, view directions and transitional effects for playback on a display. When the stored data is accessed, a virtual tour of the space is created that includes transitional effects simulating motion between locations. The virtual tour created can allow a viewer to experience the three-dimensional space in a realistic manner.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: February 12, 2019
    Assignee: EveryScape, Inc.
    Inventors: Byong Mok Oh, James Schoonmaker, Sidney Chang
  • Patent number: 10204598
    Abstract: Displaying a plurality of encoded media items on a device includes: detecting that a first scrolling action has been completed; determining a predicted next encoded media item to be displayed; obtaining the predicted next encoded media item from a first memory; pre-decoding the predicted next encoded media item to generate a pre-decoded media item; storing the pre-decoded media item in a second memory, the second memory having lower latency than the first memory; receiving an indication that a second scrolling action has begun; and in response to the second scrolling action, displaying the pre-decoded media item via a display interface.
    Type: Grant
    Filed: May 31, 2016
    Date of Patent: February 12, 2019
    Assignee: Facebook, Inc.
    Inventors: Philip McAllister, Shayne Sweeney
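    A small Python sketch of the pre-decoding flow in the abstract above: after a scroll finishes, the predicted next item is decoded into a low-latency cache so the next scroll can display it immediately. The current+1 prediction heuristic and all names are assumptions, not taken from the patent.

        import time

        ENCODED_STORE = {i: f"encoded-bytes-{i}" for i in range(100)}   # slower, first memory

        def decode(item_id):
            time.sleep(0.01)                      # stand-in for the real decoding cost
            return ("decoded", ENCODED_STORE[item_id])

        class Scroller:
            """After a scroll completes, predict the next item to be shown, decode
            it ahead of time, and keep the result in a small lower-latency cache."""
            def __init__(self):
                self.current = 0
                self.predecoded = {}              # second, lower-latency memory

            def on_scroll_finished(self):
                nxt = self.current + 1            # assumed prediction heuristic
                if nxt in ENCODED_STORE and nxt not in self.predecoded:
                    self.predecoded[nxt] = decode(nxt)

            def on_scroll_started(self):
                self.current += 1
                return self.predecoded.pop(self.current, None) or decode(self.current)

        s = Scroller()
        s.on_scroll_finished()                    # pre-decode item 1 after scrolling stops
        print(s.on_scroll_started())              # served from the pre-decoded cache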
  • Patent number: 10192362
    Abstract: A content visualization system generates visual content for a visualization device based on visual content of a live event. The content visualization system collects visual content and source perspective data from visual content sources. The visualization device requests visual content from the content visualization system by providing device perspective data to the content visualization system. The content visualization system generates visual content for the visualization device based on the visual content from the visual content sources, the source perspective data, and the device perspective data. The content visualization system can determine visual content that is relevant to the device perspective by identifying source perspectives that overlap with the device perspective. The content visualization system generates visual content for the visualization device based on the identified visual content.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: January 29, 2019
    Assignee: GoPro, Inc.
    Inventors: Scott Patrick Campbell, Gary Fong
  • Patent number: 10162615
    Abstract: A computer system for compiling a source code into an object code includes means for converting the source code into an intermediate code, means for generating a modified-intermediate code by replacing a first command that satisfies a condition, among commands included in the intermediate code, with a conditional branch command, and means for converting the modified-intermediate code into the object code.
    Type: Grant
    Filed: December 11, 2015
    Date of Patent: December 25, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: In-Ho Lee, Wee-Young Jeong
  • Patent number: 10163263
    Abstract: The technology uses image content to facilitate navigation in panoramic image data. Aspects include providing a first image including a plurality of avatars, in which each avatar corresponds to an object within the first image, and determining an orientation of at least one of the plurality of avatars to a point of interest within the first image. A viewport is determined for a first avatar in accordance with the orientation thereof relative to the point of interest, which is included within the first avatar's viewport. In response to received user input, a second image is selected that includes at least a second avatar and the point of interest from the first image. A viewport of the second avatar in the second image is determined and the second image is oriented to align the second avatar's viewpoint with the point of interest to provide navigation between the first and second images.
    Type: Grant
    Filed: March 16, 2017
    Date of Patent: December 25, 2018
    Assignee: Google LLC
    Inventors: Jiajun Zhu, Daniel Joseph Filip, Luc Vincent
  • Patent number: 10150034
    Abstract: Embodiments disclosed herein provide systems and methods for blending real world choreographed media within a virtual world, wherein the choreographed real world media is inserted into a moving template within the virtual world. Embodiments utilize software and camera hardware configured to capture real world media, wherein the software may insert the real world media within a template to insert images of a user within the virtual world. Embodiments may allow media capturing choreographed movements to be placed within a moving template within the virtual world.
    Type: Grant
    Filed: April 10, 2017
    Date of Patent: December 11, 2018
    Inventor: Charles Chungyohl Lee
  • Patent number: 10140755
    Abstract: A three-dimensional (3D) rendering method and apparatus is disclosed. The 3D rendering apparatus determines a vertex for a first shading from among vertices of a 3D model based on characteristic information of the 3D model, performs the first shading on the determined vertex, determines a pixel area for a second shading based on reference information indicating whether the first shading is applied to at least one vertex comprising the pixel area, performs the second shading on the determined pixel area, and generates a rendered image based on the first shading and the second shading.
    Type: Grant
    Filed: March 22, 2016
    Date of Patent: November 27, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seungin Park, Minsu Ahn, Minjung Son, Hyong Euk Lee, Inwoo Ha
  • Patent number: 10140391
    Abstract: A method of ray tracing by parallel computing on a computer system including a plurality of CPUs, for use in a simulation or calculation process, the method including balancing a plurality of radiation tiles between said plurality of CPUs.
    Type: Grant
    Filed: July 1, 2015
    Date of Patent: November 27, 2018
    Assignee: MAGMA GIESSEREITECHNOLOGIE GMBH
    Inventor: Jakob Fainberg
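    The tile balancing in the abstract above can be illustrated with a dynamic work queue: radiation tiles are handed out to worker processes as they become free, which evens out uneven per-tile ray-tracing costs. The sketch below uses Python's multiprocessing pool; the tile size, dummy workload, and names are assumptions.

        from multiprocessing import Pool
        import math

        def trace_tile(tile):
            """Stand-in for tracing all rays of one radiation tile; returns the
            tile's contribution (here just a dummy sum)."""
            x0, y0, size = tile
            return sum(math.sin(x0 + i) * math.cos(y0 + j)
                       for i in range(size) for j in range(size))

        def balanced_ray_trace(width, height, tile_size=32, workers=4):
            # Split the image plane into radiation tiles and let the pool hand them
            # out dynamically, which balances uneven per-tile workloads across CPUs.
            tiles = [(x, y, tile_size)
                     for y in range(0, height, tile_size)
                     for x in range(0, width, tile_size)]
            with Pool(workers) as pool:
                return sum(pool.imap_unordered(trace_tile, tiles, chunksize=1))

        if __name__ == "__main__":
            print(balanced_ray_trace(256, 256))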
  • Patent number: 10134199
    Abstract: Techniques for animating a non-rigid object in a computer graphics environment. A three-dimensional (3D) curve rigging element representing the non-rigid object is defined, the 3D curve rigging element comprising a plurality of knot primitives. One or more defined values are received for an animation control attribute of a first knot primitive. One or more values are generated, for a second animation control attribute of a second knot primitive, based on the animation control attributes of a neighboring knot primitive. An animation is then rendered using the 3D curve rigging element. More specifically, the one or more defined values for the first attribute of the first knot primitive and the generated value for the second attribute of the second knot primitive are used to generate the animation. The rendered animation is output for display.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: November 20, 2018
    Assignee: Pixar
    Inventors: Mark C. Hessler, Jeremie Talbot, Mark Piretti, Kevin A. Singleton
  • Patent number: 10133711
    Abstract: A display apparatus is disclosed, the display apparatus including: communication circuitry configured to receive a web-based content comprising a plurality of objects; a display configured to display an image; a memory configured to be loaded with data of the image displayed on the display; and at least one processor configured to load data of a first object in the memory and to not load data of a second object in the memory if an area of the first object is displayed to cover areas of one or more second objects of the plurality of objects of the web-based contents.
    Type: Grant
    Filed: April 6, 2016
    Date of Patent: November 20, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Chang Yeon Kim, Jae Young Myo
  • Patent number: 10133929
    Abstract: A positioning method for an unmanned aerial vehicle (UAV) is disclosed. A first photo and a second photo of a predetermined form are first obtained, a first color card image of a color card is recognized from the first photo, and a second color card image of the same color card is recognized from the second photo, wherein the predetermined form includes a number of color cards. A first geometric center point coordinate and a first barycentric point coordinate of the first color card image are calculated. A reference line on the color card is obtained by mapping the first geometric center point coordinate and the first barycentric point coordinate to the color card, and a rotation angle of the UAV based on the reference line is obtained. A positioning method for obtaining the flight speed of the UAV and a positioning device are also disclosed.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: November 20, 2018
    Assignee: ZEROTECH (SHENZHEN) INTELLIGENCE ROBOT CO., LTD.
    Inventors: Qing Pu, Jian-Jun Yang
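    A short Python sketch of the reference-line idea in the abstract above: the line from a color card's geometric center to its intensity-weighted (barycentric) center gives a direction, and comparing that direction between two photos yields a rotation angle. The synthetic gradient card and all names are illustrative assumptions.

        import math
        import numpy as np

        def card_reference_angle(card_image):
            """Reference direction of a color-card image: the line from the card's
            geometric center to its intensity-weighted (barycentric) center."""
            mask = card_image > 0
            ys, xs = np.nonzero(mask)
            geo = np.array([xs.mean(), ys.mean()])                     # geometric center
            w = card_image[mask].astype(float)
            bary = np.array([np.average(xs, weights=w), np.average(ys, weights=w)])
            d = bary - geo
            return math.atan2(d[1], d[0])

        def uav_rotation(card_in_photo1, card_in_photo2):
            """Rotation of the UAV between two photos of the same color card."""
            return card_reference_angle(card_in_photo2) - card_reference_angle(card_in_photo1)

        # A card whose brightness increases to the right gives a 0-radian reference line.
        card = np.tile(np.arange(1, 33, dtype=float), (32, 1))
        rotated = np.rot90(card)            # the same card seen rotated by 90 degrees
        print(round(math.degrees(uav_rotation(card, rotated)), 1))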
  • Patent number: 10134160
    Abstract: Visibility may be analytically resolved rather than using point-sampling, thereby entirely avoiding geometric aliasing and the need to store multiple samples per pixel. By relying on existing techniques for shading, i.e., by shading once per fragment and focusing on visibility, visual results may be equivalent to multi-sampled anti-aliasing (MSAA) using an infinite sampling rate in some embodiments.
    Type: Grant
    Filed: September 26, 2013
    Date of Patent: November 20, 2018
    Assignee: Intel Corporation
    Inventor: Franz P. Clarberg
  • Patent number: 10132633
    Abstract: The technology causes disappearance of a real object in a field of view of a see-through, mixed reality display device system based on user disappearance criteria. Image data is tracked to the real object in the field of view of the see-through display for implementing an alteration technique on the real object causing its disappearance from the display. A real object may satisfy user disappearance criteria by being associated with subject matter that the user does not wish to see or by not satisfying relevance criteria for a current subject matter of interest to the user. In some embodiments, based on a 3D model of a location of the display device system, an alteration technique may be selected for a real object based on a visibility level associated with the position within the location. Image data for alteration may be prefetched based on a location of the display device system.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: November 20, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: James C. Liu, Stephen G. Latta, Benjamin I. Vaught, Christopher M. Novak, Darren Bennett
  • Patent number: 10127434
    Abstract: Described are techniques for indoor mapping and navigation. A reference mobile device includes sensors to capture range, depth, and position data and processes such data. The reference mobile device further includes a processor that is configured to process the captured data to generate a 2D or 3D mapping of localization information of the device that is rendered on a display unit, execute object recognition to identify types of installed devices of interest in a part of the 2D or 3D device mapping, and integrate the 3D device mapping in the built environment to objects in the environment through capturing point cloud data along with 2D image or video frame data of the built environment.
    Type: Grant
    Filed: July 15, 2016
    Date of Patent: November 13, 2018
    Assignee: Tyco Fire & Security GmbH
    Inventors: Manjuprakash Rama Rao, Rambabu Chinta, Surajit Borah
  • Patent number: 10127722
    Abstract: This application generally relates to systems and methods for generating and rendering visualizations of an object or environment using 2D and 3D image data of the object or the environment captured by a mobile device. In one embodiment, a method includes providing, by the system, a representation of a 3D model of an environment from a first perspective of the virtual camera relative to the 3D model, receiving, by the system, input requesting movement of the virtual camera relative to the 3D model, and selecting, by the system, a first 2D image from a plurality of two dimensional images associated with different capture positions and orientations relative to the 3D model based on association of a capture position and orientation of the first 2D image with a second perspective of the virtual camera relative to the 3D model determined based on the movement.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: November 13, 2018
    Assignee: Matterport, Inc.
    Inventors: Babak Robert Shakib, Kevin Allen Bjorke, Matthew Tschudy Bell
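    The image-selection step in the abstract above can be sketched as a nearest-pose search: score each captured 2D image by how close its capture position and orientation are to the virtual camera's new perspective and pick the best one. The scoring weights and names below are assumptions, not the patent's criteria.

        import math

        def select_image(camera_pos, camera_yaw, captures, w_angle=2.0):
            """Pick the captured 2D image whose capture position and orientation best
            match the virtual camera's new perspective.  'captures' is a list of dicts
            with 'pos' (x, y, z) and 'yaw' (radians)."""
            def score(c):
                dist = math.dist(camera_pos, c["pos"])
                yaw_diff = abs(math.atan2(math.sin(camera_yaw - c["yaw"]),
                                          math.cos(camera_yaw - c["yaw"])))
                return dist + w_angle * yaw_diff
            return min(captures, key=score)

        captures = [
            {"name": "a", "pos": (0.0, 0.0, 1.6), "yaw": 0.0},
            {"name": "b", "pos": (3.0, 0.0, 1.6), "yaw": math.pi / 2},
        ]
        print(select_image((2.5, 0.2, 1.6), math.pi / 2, captures)["name"])   # 'b'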
  • Patent number: 10127483
    Abstract: When time required for print processing is estimated, estimation in consideration of overlap between objects is performed in such a manner that the objects are regarded as objects with a predetermined simple shape.
    Type: Grant
    Filed: September 6, 2016
    Date of Patent: November 13, 2018
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Hiroyuki Nakane
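    A toy sketch of the estimation idea above: every object is reduced to a simple shape (here an axis-aligned rectangle) and the print time is taken to be proportional to the area actually covered, so overlapping regions are not counted twice. The coarse-grid coverage, cost constant, and names are assumptions.

        def estimated_print_time(objects, cost_per_unit_area=0.002, page=(2480, 3508)):
            """Treat every print object as its axis-aligned bounding rectangle and
            estimate processing time from the area actually covered, so overlapping
            objects are not counted twice.  Coverage is measured on a coarse grid."""
            cell = 16
            covered = set()
            for (x0, y0, x1, y1) in objects:                     # simple-shape stand-ins
                for gx in range(int(x0) // cell, int(x1) // cell + 1):
                    for gy in range(int(y0) // cell, int(y1) // cell + 1):
                        if 0 <= gx * cell < page[0] and 0 <= gy * cell < page[1]:
                            covered.add((gx, gy))
            return len(covered) * cell * cell * cost_per_unit_area   # toy time units

        rects = [(0, 0, 400, 400), (200, 200, 600, 600)]         # two overlapping objects
        print(round(estimated_print_time(rects), 2))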
  • Patent number: 10109107
    Abstract: A method and system of representing a virtual object in a view of a real environment is provided which includes providing image information of a first image of at least part of a human face captured by a camera, providing at least one human face specific characteristic, determining an image area of the face in the first image as a face region, determining at least one first light falling on the face according to the face region and the at least one human face specific characteristic, and blending in the virtual object on a display device in the view of the real environment according to at least one first light.
    Type: Grant
    Filed: March 25, 2015
    Date of Patent: October 23, 2018
    Assignee: Apple Inc.
    Inventors: Sebastian Knorr, Peter Meier
  • Patent number: 10102331
    Abstract: Product Data Management (PDM) systems and methods. A method includes receiving a target body and a tool body, and evaluating a body type of the target body and a body type of the tool body. The method includes evaluating interactions between the target body and the tool body, and applying comparison criteria to determine spatial relation and relative convexity of an intersection between the target body and the tool body. The method includes identifying tool face regions of the tool body based on the evaluations and the determined spatial relation and relative convexity of the intersection. The method includes adding the tool face regions to the target body to produce a modified target body.
    Type: Grant
    Filed: August 1, 2012
    Date of Patent: October 16, 2018
    Assignee: Siemens Product Lifecycle Management Software Inc.
    Inventors: Eric Mawby, Feng Yu, Hui Qin
  • Patent number: 10097857
    Abstract: A method for encoding a LUT defined as a lattice of vertices is disclosed. At least one value is associated with each vertex of the lattice. The method comprises, for a current vertex: predicting the at least one value of said current vertex from another value which is, for example, obtained from reconstructed values of neighboring vertices; and encoding in a bitstream at least one residue computed between the at least one value of the current vertex and its prediction.
    Type: Grant
    Filed: March 17, 2014
    Date of Patent: October 9, 2018
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Philippe Bordes, Pierre Andrivon, Emmanuel Jolly
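    The predict-then-encode-residue scheme in the abstract above can be sketched on a small 2D lattice (the patent's LUT lattice is typically 3D): each vertex is predicted from already-reconstructed neighbors and only the residue is emitted. The average-of-left-and-upper predictor and the names below are assumptions.

        def encode_lut(lut):
            """Predict each lattice vertex value from already-reconstructed neighbors
            and emit only the residues, in raster order (the residues are what would
            then be entropy-coded into the bitstream)."""
            h, w = len(lut), len(lut[0])
            recon = [[0] * w for _ in range(h)]
            residues = []
            for y in range(h):
                for x in range(w):
                    neighbors = []
                    if x > 0: neighbors.append(recon[y][x - 1])
                    if y > 0: neighbors.append(recon[y - 1][x])
                    pred = sum(neighbors) // len(neighbors) if neighbors else 0
                    res = lut[y][x] - pred
                    residues.append(res)
                    recon[y][x] = pred + res         # decoder-side reconstruction
            return residues

        def decode_lut(residues, h, w):
            recon = [[0] * w for _ in range(h)]
            it = iter(residues)
            for y in range(h):
                for x in range(w):
                    neighbors = []
                    if x > 0: neighbors.append(recon[y][x - 1])
                    if y > 0: neighbors.append(recon[y - 1][x])
                    pred = sum(neighbors) // len(neighbors) if neighbors else 0
                    recon[y][x] = pred + next(it)
            return recon

        lut = [[10, 12, 13], [11, 14, 15], [12, 15, 18]]
        assert decode_lut(encode_lut(lut), 3, 3) == lut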
  • Patent number: 10096150
    Abstract: A tiling unit assigns primitives to tiles in a graphics processing system which has a rendering space subdivided into a plurality of tiles. Each tile can comprise one or more polygonal regions. Mesh logic of the tiling unit can determine that a plurality of primitives form a mesh and can determine whether the mesh entirely covers a region. If the mesh entirely covers the region then a depth threshold for the region can be updated so that subsequent primitives which lie behind the depth threshold are culled (i.e. not included in the display list for a tile). This helps to reduce the number of primitive IDs included in a display list for a tile, which reduces the amount of memory used by the display list and reduces the number of primitives which a hidden surface removal (HSR) module needs to fetch to perform HSR on the tile.
    Type: Grant
    Filed: May 21, 2015
    Date of Patent: October 9, 2018
    Assignee: Imagination Technologies Limited
    Inventor: Xile Yang
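    A minimal sketch of the depth-threshold optimization above: while binning primitives for a region, track how much of the region each mesh has covered; once a mesh covers it entirely, tighten the region's depth threshold and cull later primitives that lie wholly behind it. The area-accumulation test (which ignores overlap) and all field names are simplifying assumptions.

        def bin_primitives(primitives, region_area):
            """Per-region display-list building with a depth-threshold test: once a
            mesh is found to entirely cover the region, any later primitive lying
            entirely behind that depth is culled.  Primitives are dicts with 'id',
            'area_in_region', 'min_depth', 'max_depth', 'mesh' (smaller = closer)."""
            display_list = []
            depth_threshold = float("inf")        # nothing occludes the region yet
            mesh_area = {}
            for prim in primitives:
                if prim["min_depth"] > depth_threshold:
                    continue                      # entirely behind an opaque covering mesh
                display_list.append(prim["id"])
                # Accumulate coverage per mesh; full coverage tightens the threshold.
                m = prim["mesh"]
                mesh_area[m] = mesh_area.get(m, 0) + prim["area_in_region"]
                if mesh_area[m] >= region_area:
                    depth_threshold = min(depth_threshold, prim["max_depth"])
            return display_list

        prims = [
            {"id": 0, "mesh": "floor", "area_in_region": 64, "min_depth": 5, "max_depth": 6},
            {"id": 1, "mesh": "wall",  "area_in_region": 20, "min_depth": 9, "max_depth": 9},
        ]
        print(bin_primitives(prims, region_area=64))    # the hidden wall piece is culled -> [0]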
  • Patent number: 10089562
    Abstract: When time required for print processing is estimated, estimation in consideration of overlap between objects is performed in such a manner that the objects are regarded as objects with a predetermined simple shape.
    Type: Grant
    Filed: September 6, 2016
    Date of Patent: October 2, 2018
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Hiroyuki Nakane
  • Patent number: 10089774
    Abstract: The disclosed techniques include generating an input visibility stream for each tile of a frame, the input visibility stream indicating whether or not an input primitive is visible in each tile when rendered, and generating an output visibility stream for each tile of the frame, the output visibility stream indicating whether or not an output primitive is visible in each tile when rendered, wherein the output primitive is produced by tessellating the input primitive. In this way, based on the input visibility stream, tessellation may be skipped for an entire input primitive that is not visible in the tile. Also, based on the output visibility stream, tessellation may be skipped for certain ones of the output primitives that are not visible in the tile, even if part of the input primitive is visible.
    Type: Grant
    Filed: November 16, 2011
    Date of Patent: October 2, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Kiia Kaappoo Kallio, Jukka-Pekka Arvo
  • Patent number: 10089767
    Abstract: A method and system for processing light sources. A base photographic image of a scene is combined with N additional photographic images of the scene to form a composite image including M discrete light sources (N ≥ 2; M ≥ N). The scene in the base image is exposed to ambient light. The scene of the base image is exposed, in each of the N additional images, to the ambient light and to at least one discrete light source to which the base image is not exposed. The M discrete light sources in the composite image include the discrete light sources to which the scene is exposed in the N additional images. The composite image is displayed on a display device, depicting a region surrounding each discrete light source and having an area that correlates with an intensity of light from the discrete light source surrounded by the region.
    Type: Grant
    Filed: May 25, 2016
    Date of Patent: October 2, 2018
    Assignee: International Business Machines Corporation
    Inventors: John F. Kelley, Douglas E. Lhotka, Kristin S. Moore, Todd P. Seager
  • Patent number: 10083541
    Abstract: Rendering systems that can use combinations of rasterization rendering processes and ray tracing rendering processes are disclosed. In some implementations, these systems perform a rasterization pass to identify visible surfaces of pixels in an image. Some implementations may begin shading processes for visible surfaces, before the geometry is entirely processed, in which rays are emitted. Rays can be culled at various points during processing, based on determining whether the surface from which the ray was emitted is still visible. Rendering systems may implement rendering effects as disclosed.
    Type: Grant
    Filed: March 11, 2015
    Date of Patent: September 25, 2018
    Assignee: Imagination Technologies Limited
    Inventors: Jens Fursund, Luke T Peterson
  • Patent number: 10062208
    Abstract: An interactive virtual world having virtual display devices and avatars. Scenes in the virtual world as seen by the eyes of the avatars are presented on the user devices controlling the avatars. Media contents are played in the virtual display devices presented on the user devices, as if the media contents were virtually played in the virtual world and observed by the avatars. Real time communication channels are provided among the user devices to facilitate voice communications during the sharing of the experiences of viewing the media content in a close proximity setting in the virtual world using user devices that are remote to each other in real world.
    Type: Grant
    Filed: April 7, 2016
    Date of Patent: August 28, 2018
    Assignee: CINEMOI NORTH AMERICA, LLC
    Inventor: Daphna Davis Edwards Ziman
  • Patent number: 10055893
    Abstract: A method and device for rendering a scene including one or more real objects is described. A virtual object is associated with each real object, a virtual object associated with a real object corresponding to a virtual replica of this real object. The virtual replica is used to render a transformation that may be applied to the real object when for example hit by a virtual object, the virtual object then replacing the corresponding real object within the scene. To bring realism to the scene, texture information obtained from image(s) of the real object(s) is used to texture the visible part(s) of the transformed virtual object(s). The texture information is selected in the images by using information on the visibility of the parts of the real object(s) that correspond to the visible parts of the transformed virtual object(s).
    Type: Grant
    Filed: May 8, 2017
    Date of Patent: August 21, 2018
    Assignee: THOMSON LICENSING
    Inventors: Matthieu Fradet, Philippe Robert, Anthony Laurent
  • Patent number: 10055892
    Abstract: Some augmented reality (AR) and virtual reality (VR) applications may require that an “activity region” be defined prior to their use. For example, a user performing a video conferencing application or playing a game may need to identify an appropriate space in which they may walk and gesture while wearing a Head Mounted Display without causing injury. This may be particularly important in VR applications where, e.g., the user's vision is completely obscured by the VR display, and/or the user will not see their actual environment as the user moves around. Various embodiments provide systems and methods for anticipating, defining, and applying the active region. In some embodiments, the system may represent real-world obstacles to the user in the user's field of view, e.g., outlining the contour of the problematic object to call the user's attention to the object's presence in the active region.
    Type: Grant
    Filed: January 13, 2017
    Date of Patent: August 21, 2018
    Assignee: Eonite Perception Inc.
    Inventors: Anna Petrovskaya, Peter Varvak, Anton Geraschenko, Dylan Koenig, Youssri Helmy
  • Patent number: 10049303
    Abstract: Methods and a system for identifying reflective surfaces in a scene are provided herein. The system may include a sensing device configured to capture a scene. The system may further include a storage device configured to store three-dimensional positions of at least some of the objects in the scene. The system may further include a computer processor configured to attempt to obtain a reflective surface representation for one or more candidate surfaces selected from the surfaces in the scene. In a case that the attempted obtaining is successful, the computer processor is further configured to determine that the candidate reflective surface is indeed a reflective surface defined by the obtained surface representation. According to some embodiments of the present invention, in a case that the attempted calculation is unsuccessful, the computer processor determines that the recognized portion of the object is an object that is independent of the stored objects.
    Type: Grant
    Filed: October 1, 2015
    Date of Patent: August 14, 2018
    Assignee: Infinity Augmented Reality Israel Ltd.
    Inventors: Matan Protter, Motti Kushnir, Felix Goldberg
  • Patent number: 10043306
    Abstract: A tile-based graphics processing system generates a render output by dividing it into a plurality of larger patches, each of which encompasses a set of smaller patches. A rasterizer tests primitives against patches of the render output. When a primitive is found to completely cover a larger patch, depth function data for that primitive is stored in an entry of a depth buffer in respect of that larger patch position. When a subsequently-processed primitive is found to cover that same larger patch, the depth function data stored in the buffer is used to calculate depth range values for the smaller patches that the larger patch encompasses. These depth range values, representative of the first primitive, are used to perform depth tests in respect of the second primitive. The depth function data stored in the entry is then marked as invalid in respect of the smaller patches.
    Type: Grant
    Filed: June 14, 2016
    Date of Patent: August 7, 2018
    Assignee: Arm Limited
    Inventors: Marko Johannes Isomäki, Christian Vik Grovdal
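    The stored depth function in the abstract above is, in effect, a plane equation for the covering primitive; evaluating it at the corners of each smaller patch yields the depth ranges used in later depth tests. The sketch below assumes a planar depth function z = a*x + b*y + c and illustrative names, not the patent's exact data layout.

        def depth_range(plane, patch):
            """Evaluate the stored depth function z = a*x + b*y + c of a fully
            covering primitive at the four corners of a smaller patch, giving the
            depth range used to depth-test later primitives."""
            a, b, c = plane
            x0, y0, x1, y1 = patch
            zs = [a * x + b * y + c for x in (x0, x1) for y in (y0, y1)]
            return min(zs), max(zs)

        def early_cull(plane, patch, new_prim_min_depth):
            """Cull the new primitive for this smaller patch if it lies entirely
            behind the covering primitive (larger depth = farther)."""
            _, covering_max = depth_range(plane, patch)
            return new_prim_min_depth > covering_max

        plane = (0.01, 0.0, 0.5)                  # depth function of the covering primitive
        patch = (16, 16, 32, 32)                  # one smaller patch (x0, y0, x1, y1)
        print(depth_range(plane, patch))          # (0.66, 0.82)
        print(early_cull(plane, patch, new_prim_min_depth=0.9))   # True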
  • Patent number: 10026216
    Abstract: A graphics data processing method and apparatus are disclosed. The graphics data processing method includes determining a guard band region having a distance range which is predetermined in a viewing direction from a position of a virtual camera, outside a virtualization region representing regions of objects able to be displayed on a screen among a plurality of objects included in graphics data. The method further includes acquiring position information of each of the plurality of objects, determining a region where at least one object among the plurality of objects is located, based on the acquired position information, and performing at least one of clipping and culling on data of the at least one object, based on the determined region.
    Type: Grant
    Filed: September 3, 2015
    Date of Patent: July 17, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seokyoon Jung, Jeongsoo Park
  • Patent number: 10009127
    Abstract: A system and method for ray launching in electromagnetic wave propagation modeling. A data-processing system receives a dataset that is representative of one or more structures within an environment, including a structure that is defined in the dataset as having at least a first surface. The data-processing system establishes a bounding box that is representative of the first surface and partitions at least a portion of the bounding box into a first set of tiles. The data-processing system then projects a first set of ray tubes from a predetermined point within the environment, to the tiles in the first set of tiles. Each ray tube in the first set of ray tubes is defined by a corresponding tile in the first set of tiles. The data-processing system evaluates the incidence of bounced ray tubes at a predetermined receive point within the environment and presents a propagation result that is based on the evaluated incidence.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: June 26, 2018
    Assignees: Polaris Wireless, Inc., Alma Mater Studiorum—Università di Bologna
    Inventors: Jonathan Shiao-en Lu, Vittorio Degli-Esposti, Enrico Maria Vitucci
  • Patent number: 9998655
    Abstract: Disclosed is a method and apparatus for providing visual guidance to a user capturing images of a three-dimensional object. In one embodiment, the operations implemented may include: generating a virtual registered sphere comprising a plurality of contiguous patches, wherein each of the plurality of patches corresponds to a continuous range of image capture angles; rendering at least a portion of the virtual registered sphere in an image capture camera view; determining whether images of the three-dimensional object have been captured to a predetermined satisfactory degree from a particular range of image capture angles associated with a particular patch; and assigning a color to the particular patch based at least in part on the determination of whether images of the three-dimensional object have been captured to the predetermined satisfactory degree from the particular range of image capture angles associated with the particular patch.
    Type: Grant
    Filed: April 3, 2015
    Date of Patent: June 12, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Johannes Scharl, Irene Reisner-Kollmann, Zsolt Szalavari
  • Patent number: 9984491
    Abstract: Provided is a method of managing commands, which includes receiving a frame buffer object (FBO) change command, comparing an FBO designated by the FBO change command with an FBO currently being processed by a graphics processing unit (GPU) to determine whether the two FBOs are the same as each other, and managing the FBO change command or a flush command based on a result of the comparison.
    Type: Grant
    Filed: November 4, 2014
    Date of Patent: May 29, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sangoak Woo, Jeongae Park, Minkyu Jeong, Minyoung Son, Seokyoon Jung, Jeongwook Kim, Soojung Ryu
  • Patent number: 9973739
    Abstract: Joint coding of depth map video and texture video is provided, where a motion vector for a texture video is predicted from a respective motion vector of a depth map video or vice versa. For scalable video coding, depth map video is coded as a base layer and texture video is coded as an enhancement layer(s). Inter-layer motion prediction predicts motion in texture video from motion in depth map video. With more than one view in a bit stream (for multi-view coding), depth map videos are considered monochromatic camera views and are predicted from each other. If joint multi-view video model coding tools are allowed, inter-view motion skip is used to predict motion vectors of texture images from depth map images. Furthermore, scalable multi-view coding is utilized, where inter-view prediction is applied between views in the same dependency layer, and inter-layer (motion) prediction is applied between layers in the same view.
    Type: Grant
    Filed: October 16, 2009
    Date of Patent: May 15, 2018
    Assignee: Nokia Technologies Oy
    Inventors: Ying Chen, Miska Hannuksela
  • Patent number: 9962141
    Abstract: Disclosed herein is an image processing apparatus. The image processing apparatus collects volume data which relates to an object, generates volume-rendered image data from the collected volume data, acquires a projection image of the object at a position at which virtual illumination is emitted toward the object, based on the volume-rendered image data, and corrects the projection image by using at least one conversion function, thereby obtaining a result image.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: May 8, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Yun Tae Kim
  • Patent number: 9959903
    Abstract: A video playback method and a video playback apparatus are provided. The object path extraction module of the video playback apparatus extracts at least one object path from an original video. The video synthesizing module of the video playback apparatus selectively adjusts said object path so as to synthesize the object path into the synthesis video. The video synthesizing module determines the time length of the synthesis video based on the playback time length set by the user, wherein the time length of the synthesis video is less than the time length of the original video.
    Type: Grant
    Filed: April 16, 2015
    Date of Patent: May 1, 2018
    Assignee: QNAP SYSTEMS, INC.
    Inventor: Chun-Yen Chen
  • Patent number: 9958287
    Abstract: Based on map information acquired from a map DB, a viewpoint for viewing a ground surface on a map of a set region at the time of displaying the map is set. Altitude information that indicates an altitude of a landform present in at least a partial region of the set region is stored. In a case where the altitude information is present in the map DB at a position on the map, which is set in response to a position indicated by inputted position information, a sight direction of the viewpoint is changed, and the viewpoint is thereby set at a position higher than the altitude of the landform indicated by the altitude information of that position on the map. Display data for displaying, on a display device, a map as seen from the set viewpoint is generated.
    Type: Grant
    Filed: July 18, 2014
    Date of Patent: May 1, 2018
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventor: Masafumi Asai
  • Patent number: 9952842
    Abstract: A computer system for compiling a source program into an object program includes a graphics processor having a pre-processing core and a post-processing core, and a processor configured to execute a compiler to convert the source program into an intermediate program including a target variable to be processed by the pre-processing core, generate a modified-intermediate program from the intermediate program by eliminating the target variable among variables included in the intermediate program and convert the modified-intermediate program into the object program including the target variable to be processed by the post-processing core.
    Type: Grant
    Filed: December 8, 2015
    Date of Patent: April 24, 2018
    Assignee: Samsung Electronics Co., Ltd
    Inventors: In-Ho Lee, I Saac Hong