Patents Examined by Edward Martello
  • Patent number: 10565770
    Abstract: The invention relates to a communication system and a method for providing a virtual meeting of a first user (U1, U2, U3, U4) and a second user (U1, U2, U3, U4), comprising a first communication device (12, 14, 16, 18, 24, 26, 28, 32, 34) with a first display device (12a, 14a, 16a, 18a, 24a, 26a, 28a, 32a) associated with the first user (U1, U2, U3, U4), and a second communication device (12, 14, 16, 18, 24, 26, 28, 32, 34) with a second display device (12a, 14a, 16a, 18a, 24a, 26a, 28a, 32a) associated with the second user (U1, U2, U3, U4).
    Type: Grant
    Filed: August 5, 2016
    Date of Patent: February 18, 2020
    Assignee: Apple Inc.
    Inventor: Eberhard Schmidt
  • Patent number: 10559123
    Abstract: Aspects of this disclosure relate to a process for rendering graphics that includes designating a hardware shading unit of a graphics processing unit (GPU) to perform first shading operations associated with a first shader stage of a rendering pipeline. The process also includes switching operational modes of the hardware shading unit upon completion of the first shading operations. The process also includes performing, with the hardware shading unit of the GPU designated to perform the first shading operations, second shading operations associated with a second, different shader stage of the rendering pipeline.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: February 11, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Vineet Goel, Andrew Evan Gruber
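The mode-switching idea in this abstract can be illustrated with a toy model of a hardware shading unit that is designated for one shader stage and then switched to a different stage upon completion. The stage names, log format, and class layout are assumptions for illustration, not from the patent:

```python
from enum import Enum

class ShaderStage(Enum):
    VERTEX = "vertex"       # first shader stage (assumed example)
    GEOMETRY = "geometry"   # second, different shader stage

class HardwareShadingUnit:
    """Toy model of one shading unit reused across pipeline stages."""
    def __init__(self):
        self.mode = ShaderStage.VERTEX
        self.log = []

    def run(self, stage, ops):
        # Switch operational modes only when the requested stage differs
        # from the unit's current mode.
        if stage != self.mode:
            self.log.append(f"switch:{self.mode.value}->{stage.value}")
            self.mode = stage
        for op in ops:
            self.log.append(f"{stage.value}:{op}")

unit = HardwareShadingUnit()
unit.run(ShaderStage.VERTEX, ["transform"])     # first shading operations
unit.run(ShaderStage.GEOMETRY, ["emit_prims"])  # same unit, second stage
```

The point of the technique is that one physical unit serves both stages, with an explicit mode switch between them rather than separate dedicated hardware.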
  • Patent number: 10559126
    Abstract: A method and apparatus for encoding, decoding, and rendering 3D media content are provided. An apparatus for rendering three-dimensional (3D) media content includes a communication interface configured to receive a multimedia stream, and one or more processors operably coupled to the communication interface. The one or more processors are configured to parse the multimedia stream into 2D video bitstreams including geometry frames and texture frames, 2D-to-3D conversion metadata for rendering 3D points from 2D frames, and scene description metadata describing six-degrees-of-freedom (6DoF) relationships among objects in a 6DoF scene; decode the 2D video streams including geometry data and texture data to generate 2D pixel data; convert the 2D pixel data into 3D voxel data using the 2D-to-3D conversion metadata; and generate the 6DoF scene from the 3D voxel data using the scene description metadata.
    Type: Grant
    Filed: October 9, 2018
    Date of Patent: February 11, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Imed Bouazizi, Youngkwon Lim
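The conversion step in this abstract (2D pixel data to 3D voxel data using conversion metadata) can be sketched as follows. The grid layout, the use of a depth value per geometry pixel, and the single `scale` parameter standing in for the conversion metadata are simplifying assumptions:

```python
def pixels_to_voxels(geometry, texture, scale=1.0):
    """Convert 2D geometry/texture pixel data into 3D voxels.

    geometry[y][x] holds a depth value (None if unoccupied);
    texture[y][x] holds the color for the same pixel.
    `scale` stands in for the 2D-to-3D conversion metadata (assumed).
    """
    voxels = []
    for y, row in enumerate(geometry):
        for x, depth in enumerate(row):
            if depth is not None:  # unoccupied pixels carry no 3D point
                voxels.append((x * scale, y * scale, depth * scale,
                               texture[y][x]))
    return voxels
```

A scene-description step would then place the resulting point sets relative to one another in the 6DoF scene.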
  • Patent number: 10546418
    Abstract: Embodiments relate to visualization of positional geospatial uncertainty. Initially, a map image request for geographic features is received from a client computing device, where the map image request includes an uncertainty type, a distribution shape, and a selected visualization technique. An uncertainty buffer pixel size is determined based on a geographic distance covered by the distribution shape.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: January 28, 2020
    Assignee: The Government of the United States of America, as represented by the Secretary of the Navy
    Inventors: Brent Barré, Elias Ioup, John Sample
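The buffer-sizing step in this abstract converts a geographic distance into a pixel count using the map's scale. A minimal sketch, where the argument names and the derivation of meters-per-pixel from the requested image extent are illustrative assumptions:

```python
import math

def uncertainty_buffer_pixels(distance_m, map_width_m, map_width_px):
    """Determine an uncertainty buffer size in pixels from the geographic
    distance (in meters) covered by the distribution shape.

    The map scale in meters per pixel is derived from the requested
    image extent; rounding up ensures the buffer fully covers the shape.
    """
    meters_per_pixel = map_width_m / map_width_px
    return math.ceil(distance_m / meters_per_pixel)
```

For example, a 50 m uncertainty radius on a 1000 m-wide map rendered at 500 px yields a 25-pixel buffer.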
  • Patent number: 10535196
    Abstract: Technologies are described for indicating a geographic origin of a digitally-mediated communication relative to a location of a recipient by presenting the indication in an augmented reality scene. For example, an augmented reality scene can be presented to the recipient. The geographic origin of an incoming digital communication may be determined and a relative location of the origin with respect to the recipient's location may be computed. A format for presenting the relative location may be derived from the digital communication and the geographic origin. The augmented reality scene may be updated with the relative location based on the derived format.
    Type: Grant
    Filed: July 17, 2017
    Date of Patent: January 14, 2020
    Assignee: Empire Technology Development LLC
    Inventors: Mark Malamud, Royce Levien
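Computing the relative location of a message's geographic origin with respect to the recipient, as described above, can be sketched as a coarse compass bearing suitable for labeling in an AR scene. The flat-earth approximation and eight-way quantization are simplifying assumptions:

```python
import math

def relative_direction(recipient, origin):
    """Return a coarse compass direction from the recipient's location
    to the geographic origin of an incoming communication.

    Inputs are (latitude, longitude) pairs in degrees; longitude is
    scaled by cos(latitude) as a flat-earth approximation.
    """
    dlat = origin[0] - recipient[0]
    dlon = (origin[1] - recipient[1]) * math.cos(math.radians(recipient[0]))
    angle = math.degrees(math.atan2(dlon, dlat)) % 360  # 0 degrees = north
    labels = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
    return labels[round(angle / 45) % 8]
```

The AR scene would then render the message indicator in the returned direction relative to the recipient's view.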
  • Patent number: 10534530
    Abstract: A method implemented in a device including a position input sensor is provided for generating ink data including stroke objects, which are vector data configured to reproduce paths formed by operating a pointer. The method generally includes three steps. The first step receives pen event data representative of a user's hand-drawn motion on a sensor surface. The second step generates a stroke object based on the pen event data, generates a metadata object that describes the stroke object based on the pen event data and context information received from an application supporting the pen event data, and generates a drawing style object that defines how to draw the stroke object based on the pen event data and the context information. The third step outputs the stroke object, the metadata object, and the drawing style object in association with each other in a recording format or in a transmission format.
    Type: Grant
    Filed: April 14, 2016
    Date of Patent: January 14, 2020
    Assignee: Wacom Co., Ltd.
    Inventors: Branimir Angelov, Stefan Yotov, Heidi Wang, Plamen Petkov
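The three generated objects (stroke, metadata, drawing style) and their association can be sketched as simple data classes. All field names here are illustrative assumptions; the patent specifies only that the three objects are output in association with each other:

```python
from dataclasses import dataclass

@dataclass
class StrokeObject:          # vector path reproducing the pen's motion
    points: list

@dataclass
class MetadataObject:        # describes the stroke (e.g. app context)
    context: dict

@dataclass
class DrawingStyleObject:    # defines how to draw the stroke
    color: str
    width: float

def generate_ink_data(pen_events, context):
    """Produce the three associated objects from pen event data and
    context information supplied by the application."""
    stroke = StrokeObject(points=[(e["x"], e["y"]) for e in pen_events])
    meta = MetadataObject(context=context)
    style = DrawingStyleObject(
        color=context.get("color", "black"),
        width=max(e.get("pressure", 1.0) for e in pen_events))
    return {"stroke": stroke, "metadata": meta, "style": style}
```

The returned dictionary plays the role of the recording/transmission format that keeps the three objects associated.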
  • Patent number: 10521247
    Abstract: Techniques are disclosed for advantageously relocating graphical digital content on a screen of a client device. The technique can include displaying a dynamic content at a particular location on the screen of a client device (e.g., inline with an article displayed on a webpage). Upon determination that a relocation condition exists (e.g., a viewability of the inline dynamic content drops below a particular threshold), the dynamic content can be relocated to another location on the screen. In some instances, rather than being relocated to a static, previously determined position, the dynamic content can be dynamically relocated, based on the location of the other content.
    Type: Grant
    Filed: November 22, 2016
    Date of Patent: December 31, 2019
    Assignee: XANDR INC.
    Inventors: Jeffrey Weiss, Vikki Pitts, Alexander Krassel, Radhika Shivapurkar, Kyungsuk Song
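The relocation condition in this abstract (viewability dropping below a threshold) can be sketched in one dimension. The 50% threshold and the pin-to-bottom fallback position are assumed example values, not from the patent:

```python
def maybe_relocate(content_top, content_height, viewport_top,
                   viewport_height, threshold=0.5):
    """Return the content's new top position: unchanged if enough of it
    is visible, otherwise relocated so it sits inside the viewport.

    Viewability is the visible fraction of the content's height given
    the current scroll viewport.
    """
    visible = max(0, min(content_top + content_height,
                         viewport_top + viewport_height)
                     - max(content_top, viewport_top))
    if visible / content_height >= threshold:
        return content_top                      # viewable enough: stay inline
    return viewport_top + viewport_height - content_height  # pin into view
```

A dynamic variant, as the abstract notes, would choose the target position based on the layout of the other content rather than a fixed pinned slot.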
  • Patent number: 10509619
    Abstract: A method includes receiving an image of a product, obtaining content relevant to using the product, and displaying the content in an augmented reality view of the product by overlaying the content on the image of the product on a display device.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: December 17, 2019
    Assignee: HAND HELD PRODUCTS, INC.
    Inventors: Erik Todeschini, James Timothy Sauerwein, Jr., Donald Anderson
  • Patent number: 10497159
    Abstract: A computing device obtains information associated with a computer aided design (CAD) model of an object, and also determines how an illustration of the object would be utilized. Based on that information, the computing device automatically generates views of the object for inclusion in technical publications related to the object, or a system that includes the object.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: December 3, 2019
    Assignee: The Boeing Company
    Inventors: John W. Glatfelter, Stuart A. Galt, Raymond C. Sharp, III
  • Patent number: 10497165
    Abstract: Texturing of external and/or internal surfaces, or of internal parts, of 3D models representing real objects is made possible using a plurality of real photographs and/or video of those objects, providing an extremely realistic, vivid, and detailed view on and/or within the 3D model. The 3D models are 3D computer graphics models used to implement user-controlled interactions. The texture on a 3D model textured with real photographs and/or video replicates the texture of the real 3D object. Applying video as a texture makes it possible to display realistic effects on the 3D model surface, such as replicating the blinking of light from a physical light-emitting device of the real object, for example the headlight or rear light of an automotive vehicle.
    Type: Grant
    Filed: March 15, 2014
    Date of Patent: December 3, 2019
    Inventors: Nitin Vats, Gaurav Vats
  • Patent number: 10489959
    Abstract: Certain embodiments involve automatically generating a layered animatable puppet using a content stream. For example, a system identifies various frames of a content stream that includes a character performing various gestures usable for generating a layered puppet. The system separates the various frames of the content stream into various individual layers. The system extracts a face of the character from the various individual layers and creates the layered puppet by combining the individual layers and using the face of the character. The system can output the layered puppet for animation to perform a gesture of the various gestures.
    Type: Grant
    Filed: October 17, 2017
    Date of Patent: November 26, 2019
    Assignee: Adobe Inc.
    Inventors: David Simons, Jakub Fiser
  • Patent number: 10482661
    Abstract: An embodiment of the present invention provides a computer-implemented method for displaying one or more augmented reality (AR) objects on a transparent display device, comprising: associating data of one or more transparent areas corresponding to one or more real objects with a first layer for seeing the one or more real objects on the transparent display device, wherein the data has information on a location and shape of the one or more real objects on the transparent display device; associating one or more AR objects with a second layer for displaying the one or more AR objects on the transparent display device; and overlaying the first layer with the second layer to display the one or more AR objects on the transparent display device, wherein the one or more real objects are seen through the one or more transparent areas of the transparent display device.
    Type: Grant
    Filed: March 1, 2016
    Date of Patent: November 19, 2019
    Assignee: International Business Machines Corporation
    Inventors: Taku Aratsu, Toshiyuki Komoda, Yohei Umehara, Satoshi Yokoyama
  • Patent number: 10445931
    Abstract: A system and method directionally dilate texture onto mesh seams of a laid-out mesh of a three-dimensional image to reduce image artifacts arising from traditional omni-directional dilating. The dilation direction may be determined for a border pixel of a laid-out mesh based at least in part on one or more vertices of the laid-out mesh. Dilation directions determined for mesh border pixels may be encoded onto one or more data channels associated with the corresponding border pixels. The dilation directions at each of the border pixels may be used to incrementally dilate texture onto a predetermined number of pixels of border seams until the entirety of the border seam pixels are dilated.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: October 15, 2019
    Assignee: Electronic Arts, Inc.
    Inventors: Pawel Piotr Wrotek, Darren Douglas Gyles
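The per-pixel dilation direction and its encoding onto a data channel, as described above, can be sketched as follows. Choosing the direction perpendicular to the adjacent seam edge and packing it as a 3-bit (eight-way) code are assumptions for illustration:

```python
import math

def encode_dilation_direction(edge_start, edge_end):
    """Derive a dilation direction for a seam border pixel from the
    adjacent mesh edge (two UV-space vertices), quantized to one of
    eight directions so it can be stored in a spare data channel.

    Returns a code 0..7, where 0 = +x and each step is 45 degrees.
    """
    ex, ey = edge_end[0] - edge_start[0], edge_end[1] - edge_start[1]
    dx, dy = ey, -ex                          # perpendicular to the edge
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return round(angle / (math.pi / 4)) % 8

DIRECTIONS = [(1, 0), (1, 1), (0, 1), (-1, 1),
              (-1, 0), (-1, -1), (0, -1), (1, -1)]

def dilate_step(pixel, code):
    """Advance one pixel along the encoded direction: one increment of
    the incremental dilation described in the abstract."""
    dx, dy = DIRECTIONS[code]
    return (pixel[0] + dx, pixel[1] + dy)
```

Repeating `dilate_step` a predetermined number of times per border pixel fills the seam directionally instead of omni-directionally.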
  • Patent number: 10438412
    Abstract: A method includes specifying a position of a virtual object based on a position of a map point that is defined in a first map and indicates three-dimensional coordinates of a feature point, correcting the position of the virtual object based on positions of a plurality of map points defined in a second map and a capturing direction of a camera when the first map is changed to the second map that is different from the first map, and controlling a display to display the virtual object, based on the corrected position of the virtual object and an image captured by the camera.
    Type: Grant
    Filed: April 20, 2017
    Date of Patent: October 8, 2019
    Assignee: FUJITSU LIMITED
    Inventors: Atsunori Moteki, Nobuyasu Yamaguchi, Toshiyuki Yoshitake
  • Patent number: 10423220
    Abstract: According to an embodiment, a virtual try-on apparatus includes a first acquisition unit, a first display controller, an acceptor, a generator, and a second display controller. The first acquisition unit is configured to acquire characteristic information on a try-on subject. The first display controller is configured to display, on a first display, clothing images corresponding to the acquired characteristic information in first information in which the characteristic information and the clothing images are associated with each other. The acceptor is configured to accept from the try-on subject a selection of an image of clothing to be tried on from among the clothing images displayed on the first display. The generator is configured to generate a composite image of a try-on subject image of the try-on subject and the selected clothing image. The second display controller is configured to display the composite image on a second display.
    Type: Grant
    Filed: March 30, 2015
    Date of Patent: September 24, 2019
    Assignees: KABUSHIKI KAISHA TOSHIBA, TOSHIBA SOLUTIONS CORPORATION
    Inventors: Kunio Osada, Toshimasa Dobashi, Hisao Yoshioka, Shigeru Mikami
  • Patent number: 10388058
    Abstract: Systems, methods, apparatuses, and software for graphics processing systems in computing environments are provided herein. In one example, a method of handling tiled resources in graphics processing environments is presented. The method includes establishing, in a graphics processing unit, a residency map having values determined from memory residency properties of a texture resource, and sampling from the residency map at a specified location to determine a residency map sample for the texture resource at the specified location, where the residency map sample indicates at least an initial level of detail presently resident and a smoothing component to reach a next level of detail.
    Type: Grant
    Filed: May 30, 2017
    Date of Patent: August 20, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: James Andrew Goossen, Matthew William Lee, Mark S. Grossman
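The residency map sample described above, splitting into a resident level of detail plus a smoothing component, can be sketched as follows. The flat-list map layout and the encoding of both quantities in one fractional value are assumptions for illustration:

```python
def sample_residency(residency_map, u, v):
    """Sample a residency map at normalized coordinates (u, v).

    residency_map is (width, height, values) with values stored
    row-major. Each value encodes the integer level of detail that is
    presently resident plus a fractional smoothing component toward
    the next level (e.g. 2.25 -> LOD 2, blend 0.25).
    """
    width, height, values = residency_map
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    value = values[y * width + x]
    lod = int(value)
    smoothing = value - lod
    return lod, smoothing
```

A shader would use the smoothing component to blend between mip levels rather than popping when residency changes.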
  • Patent number: 10380791
    Abstract: Methods for generating ground shadow and lighting effects for three-dimensional models include identifying polygon data for a three-dimensional model, generating a convex polygon around a base of the model, generating hard and soft shadow meshes in and around the base of the model, and rendering the model with the shadow meshes with a display device. Methods for generating wall shadows and lighting effects for the three dimensional models further include identifying an orientation and height of a polygon in the model that extends from a ground surface in a virtual environment, and rendering the model with a lighting texture applied to either the full polygon if the polygon height is less than a threshold height or to only a portion of the polygon below the threshold height if the polygon exceeds the threshold height.
    Type: Grant
    Filed: July 10, 2017
    Date of Patent: August 13, 2019
    Assignee: Robert Bosch GmbH
    Inventors: Lincan Zou, Liu Ren
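The first step in the ground-shadow method above, generating a convex polygon around the model's base, can be sketched with a standard monotone-chain convex hull over the base footprint points. Treating the base as a set of 2D points is an assumption for illustration:

```python
def convex_base_polygon(points):
    """Generate a convex polygon (counter-clockwise hull) around a
    model's base footprint, the starting shape for building hard and
    soft shadow meshes in and around the base.

    Implements Andrew's monotone-chain convex hull on (x, y) tuples.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def half_hull(iterable):
        hull = []
        for p in iterable:
            # Pop while the last two points and p make a non-left turn.
            while len(hull) >= 2 and (
                (hull[-1][0] - hull[-2][0]) * (p[1] - hull[-2][1])
                - (hull[-1][1] - hull[-2][1]) * (p[0] - hull[-2][0])) <= 0:
                hull.pop()
            hull.append(p)
        return hull[:-1]

    return half_hull(pts) + half_hull(reversed(pts))
```

The hard shadow mesh would fill this polygon, with a soft shadow mesh extruded outward from its edges.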
  • Patent number: 10352693
    Abstract: A method and system determine texture data at least by performing fiducial point registration with a triggerable texture source and generating virtual objects and texture data based at least in part upon a fiducial point. Fiducial point registration textures a desired point with one or more projectors in response to the identification of a desired point in space, generates data with one or more projected patterns, determines a final location or depth of a point using the generated data, and registers the point as a fiducial point by grooming a data structure based on a registration requirement pertaining to a distribution of fiduciary points and a requirement pertaining to machine recognition of physical objects. Virtual objects are generated relative to a real-world scene, and the texture data is generated based at least in part upon the final location or depth of a fiducial point.
    Type: Grant
    Filed: May 8, 2015
    Date of Patent: July 16, 2019
    Assignee: Magic Leap, Inc.
    Inventors: Rony Abovitz, Brian T. Schowengerdt, Mathew D. Watson
  • Patent number: 10339717
    Abstract: A multiuser, collaborative augmented reality (AR) system employs individual AR devices for viewing real-world anchors, that is, physical models that are recognizable to the camera and image processing module of the AR device. To mitigate ambiguous configurations when used in the collaborative mode, each anchor is registered with a server to ensure that only uniquely recognizable anchors are simultaneously active at a particular location. The system permits collaborative AR to span multiple sites, by associating a portal with an anchor at each site. Using the location of their corresponding AR device as a proxy for their position, AR renditions of the other participating users are provided. This AR system is particularly well suited for games.
    Type: Grant
    Filed: October 1, 2017
    Date of Patent: July 2, 2019
    Inventors: Jordan Kent Weisman, William Gibbens Redmann
  • Patent number: 10325411
    Abstract: A navigation system provides pose, i.e., location and orientation, solutions using best available location information from two or more location systems. One of the location systems is a fiducial-based location system, which is accurate when a sufficient number of fiducials is recognized. However, when an insufficient number of fiducials is recognized, an odometry-based location system is used. Although the odometry-based location system is subject to drift, when a sufficient number of fiducials is recognized, the fiducial-based location system is used to correct the odometry-based location system. The navigation system provides robust, accurate and timely pose solutions, such as for augmented reality (AR) or virtual reality (VR) systems, without the time-consuming requirement to establish and localize many fiducials or the computational and memory requirements of pure fiducial-based location systems, and without the inherent drift of pure odometry-based location systems.
    Type: Grant
    Filed: February 1, 2018
    Date of Patent: June 18, 2019
    Assignee: The Charles Stark Draper Laboratory, Inc.
    Inventors: James Laney, Richard W. Madison, Robert Truax, Theodore J. Steiner, III, Eric Jones
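The source-selection logic in this abstract, preferring fiducial-based fixes when enough fiducials are recognized and otherwise falling back to drift-corrected odometry, can be sketched as follows. The minimum fiducial count, the additive drift correction, and the tuple pose representation are simplifying assumptions:

```python
MIN_FIDUCIALS = 3  # assumed minimum for a reliable fiducial-based fix

def fuse_pose(fiducial_pose, odometry_pose, n_fiducials, drift_correction):
    """Return (pose, updated_drift_correction).

    When enough fiducials are recognized, use the fiducial-based pose
    and record the odometry's current error as the new correction.
    Otherwise fall back to odometry plus the last known correction.
    """
    if n_fiducials >= MIN_FIDUCIALS:
        correction = tuple(f - o for f, o in zip(fiducial_pose,
                                                 odometry_pose))
        return fiducial_pose, correction
    corrected = tuple(o + c for o, c in zip(odometry_pose,
                                            drift_correction))
    return corrected, drift_correction
```

The stored correction is what keeps the odometry-based solution from drifting unboundedly between fiducial fixes.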