Patents Examined by Barry Drennan
  • Patent number: 9978176
    Abstract: A mesh simplification system receives a three-dimensional (3D) polygonal mesh of a 3D object. The mesh simplification system identifies a component of the 3D polygonal mesh having a first surface area that is less than a second surface area of the 3D polygonal mesh, wherein the component comprises a set of topologically interconnected surfaces that are modeled as a separate structure from the 3D polygonal mesh. The mesh simplification system then automatically generates a simplified version of the component by removing a back surface from the component, wherein the simplified version of the component comprises fewer polygons than the component.
    Type: Grant
    Filed: February 10, 2016
    Date of Patent: May 22, 2018
    Assignee: ELECTRONIC ARTS INC.
    Inventor: Talan Le Geyt
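    Illustrative sketch: the abstract above selects a low-surface-area component of a mesh as a candidate for simplification. The patent publishes no code; the Python below is a minimal, hypothetical sketch of only the selection step, with an assumed 5% area threshold and a vertex-sharing definition of connectivity.

```python
import numpy as np

def triangle_areas(vertices, faces):
    """Surface area of each triangle (vertices: (V, 3) floats, faces: (F, 3) indices)."""
    a, b, c = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)

def connected_components(faces):
    """Group faces that share at least one vertex, via union-find over vertex indices."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)
    for f in faces:
        union(f[0], f[1]); union(f[1], f[2])
    groups = {}
    for i, f in enumerate(faces):
        groups.setdefault(find(f[0]), []).append(i)
    return list(groups.values())

def small_components(vertices, faces, area_fraction=0.05):
    """Return face-index lists for components whose surface area is below a fraction
    of the whole mesh's area -- candidates for aggressive simplification."""
    areas = triangle_areas(vertices, faces)
    total = areas.sum()
    return [comp for comp in connected_components(faces)
            if areas[comp].sum() < area_fraction * total]

# Example: two disconnected triangles; the tiny one is flagged as a small component.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                  [5, 0, 0], [5.1, 0, 0], [5, 0.1, 0]], float)
faces = np.array([[0, 1, 2], [3, 4, 5]])
print(small_components(verts, faces))
```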
  • Patent number: 9978165
    Abstract: Methods, computer systems, and computer storage media are provided for automatically populating a central monitor perspective with waveform tracings having a predetermined aspect ratio. A selection of a unit location is received, and monitoring devices connected to patients at the unit location are detected. Waveform tracings associated with the active monitoring devices are presented in a predetermined aspect ratio in the central monitor perspective. As new monitoring devices are connected to patients, or as monitoring devices are disconnected from patients, the central monitor perspective is automatically refreshed to reflect currently active waveform tracings having the predetermined aspect ratio.
    Type: Grant
    Filed: July 31, 2012
    Date of Patent: May 22, 2018
    Assignee: Cerner Innovation, Inc.
    Inventors: Samir Muranjan, James Alexander Moseley, Gregory James Kuttenkuler, Mark R. Inman, Jill Ann Meier
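    Illustrative sketch: the abstract above tiles waveform tracings at a predetermined aspect ratio and refreshes the layout as monitoring devices connect or disconnect. The sketch below is a hypothetical layout calculation under assumed values (a 4:1 tracing aspect ratio and a 1920x1080 central monitor), not the patented system.

```python
from math import ceil

ASPECT = 4.0  # assumed predetermined width:height ratio for each tracing

def layout_tracings(n_active, screen_w, screen_h, aspect=ASPECT):
    """Pick a column count so that n_active tiles of the given aspect ratio fill
    the screen as fully as possible; returns (cols, rows, tile_w, tile_h)."""
    best = None
    for cols in range(1, n_active + 1):
        rows = ceil(n_active / cols)
        tile_w = min(screen_w / cols, (screen_h / rows) * aspect)
        tile_h = tile_w / aspect
        covered = tile_w * tile_h * n_active
        if best is None or covered > best[0]:
            best = (covered, cols, rows, tile_w, tile_h)
    return best[1:]

def refresh_central_monitor(active_devices, screen_w=1920, screen_h=1080):
    """Recompute the layout whenever a monitoring device connects or disconnects,
    so only currently active tracings are shown, each at the predetermined ratio."""
    if not active_devices:
        return {}
    cols, rows, w, h = layout_tracings(len(active_devices), screen_w, screen_h)
    return {dev: ((i % cols) * w, (i // cols) * h, w, h)
            for i, dev in enumerate(active_devices)}

# Example: a device disconnects and the perspective is automatically refreshed.
print(refresh_central_monitor(["bed-101", "bed-102", "bed-103"]))
print(refresh_central_monitor(["bed-101", "bed-103"]))
```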
  • Patent number: 9977782
    Abstract: A system, method, and device for creating an environment and sharing an experience using a plurality of mobile devices having a conventional camera and a depth camera employed near a point of interest. In one form, random crowdsourced images, depth information, and associated metadata are captured near said point of interest. Preferably, the images include depth camera information. A wireless network communicates with the mobile devices to accept the images, depth information, and metadata to build and store a 3D model of the point of interest. Users connect to this experience platform to view the 3D model from a user selected location and orientation and to participate in experiences with, for example, a social network.
    Type: Grant
    Filed: June 17, 2015
    Date of Patent: May 22, 2018
    Inventors: Charles D. Huston, Chris Coleman
  • Patent number: 9978161
    Abstract: A system including at least one apparatus creates a representation of a road geometry from a plurality of sets of data, each set of data including at least an indication of a position and an indication of a heading of a mobile object. The system detects an intersection in the representation of the road geometry. The system defines at least one Bézier curve representing a trajectory between an entry to and an exit from the intersection. The system replaces at least a part of the created representation of the road geometry for the intersection by the at least one defined Bézier curve.
    Type: Grant
    Filed: April 11, 2016
    Date of Patent: May 22, 2018
    Assignee: HERE Global B.V.
    Inventor: Ole Henry Dorum
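    Illustrative sketch: one common way to realize the abstract's Bézier trajectory through an intersection is a cubic curve whose inner control points follow the entry and exit headings. The tangent-length heuristic below is an assumption for illustration, not taken from the patent.

```python
import numpy as np

def intersection_bezier(entry_pos, entry_heading, exit_pos, exit_heading, tangent_scale=1/3):
    """Build cubic Bezier control points for a trajectory that enters the intersection
    at entry_pos with entry_heading (radians) and leaves at exit_pos with exit_heading.
    tangent_scale (fraction of the entry-exit distance) is a heuristic assumption."""
    p0 = np.asarray(entry_pos, float)
    p3 = np.asarray(exit_pos, float)
    d = np.linalg.norm(p3 - p0)
    t_in = np.array([np.cos(entry_heading), np.sin(entry_heading)])
    t_out = np.array([np.cos(exit_heading), np.sin(exit_heading)])
    p1 = p0 + tangent_scale * d * t_in    # leave the entry along its heading
    p2 = p3 - tangent_scale * d * t_out   # arrive at the exit along its heading
    return p0, p1, p2, p3

def evaluate_bezier(ctrl, t):
    """Evaluate the cubic Bezier at parameter t in [0, 1]."""
    p0, p1, p2, p3 = ctrl
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Example: a left turn from an eastbound approach to a northbound exit.
ctrl = intersection_bezier((0, 0), 0.0, (20, 20), np.pi / 2)
samples = [evaluate_bezier(ctrl, t) for t in np.linspace(0, 1, 11)]
```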
  • Patent number: 9977981
    Abstract: Provided are methods and apparatuses for calibrating a three-dimensional (3D) image in a tiled display including a display panel and a plurality of lens arrays. The method includes capturing a plurality of structured light images displayed on the display panel, calibrating a geometric model of the tiled display based on the plurality of structured light images, generating a ray model based on the calibrated geometric model of the tiled display, and rendering an image based on the ray model.
    Type: Grant
    Filed: February 20, 2015
    Date of Patent: May 22, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Weiming Li, Mingcai Zhou, Shandong Wang, Shaohui Jiao, Tao Hong, Haitao Wang, Ji Yeun Kim
  • Patent number: 9972128
    Abstract: A method for generating a polycube representation of an input object comprises: receiving an input volumetric representation of the input object; deforming the input volumetric representation to provide a deformed object representation; and extracting, by the processor, a polycube representation of the object from the deformed object representation. Deforming the input volumetric representation to provide the deformed object representation comprises effecting a tradeoff between competing objectives of: deforming the input volumetric representation in a manner which provides surfaces having normal vectors closely aligned with one of the six directions aligned with the set of global Cartesian axes; and deforming the input volumetric representation in a manner which provides low-distortion deformations. Deforming the input volumetric representation to provide the deformed object representation may be performed iteratively.
    Type: Grant
    Filed: July 22, 2013
    Date of Patent: May 15, 2018
    Assignees: The University of British Columbia, Oregon State University
    Inventors: James Gregson, Alla Sheffer, Eugene Zhang
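    Illustrative sketch: the abstract describes a tradeoff between axis-aligned face normals and low-distortion deformation. The energy below is a simplified stand-in for that tradeoff (the distortion term is just a positional difference), meant only to make the two competing objectives concrete; an actual method would minimize such an energy iteratively, as the abstract's last sentence indicates.

```python
import numpy as np

AXES = np.array([[ 1, 0, 0], [-1, 0, 0],
                 [ 0, 1, 0], [ 0, -1, 0],
                 [ 0, 0, 1], [ 0, 0, -1]], dtype=float)

def face_normals(verts, faces):
    """Unit normals of triangular faces (verts: (V, 3), faces: (F, 3) indices)."""
    a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def tradeoff_energy(deformed, original, faces, lam=1.0):
    """Competing objectives from the abstract: (1) every face normal should align
    with one of the six global axis directions, and (2) the deformation should
    stay close to the input (low distortion; here a simple positional term)."""
    normals = face_normals(deformed, faces)
    # alignment cost: 1 - cosine to the closest of the six axis directions
    align = np.mean(1.0 - np.max(normals @ AXES.T, axis=1))
    distort = np.mean(np.linalg.norm(deformed - original, axis=1))
    return align + lam * distort
```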
  • Patent number: 9973723
    Abstract: A method and system for adaptively mixing video components with graphics/UI components, where the video components and graphics/UI components may be of different types, e.g., different dynamic ranges (such as HDR, SDR) and/or color gamuts (such as WCG). The mixing may result in a frame optimized for a display device's color space, ambient conditions, viewing distance and angle, etc., while accounting for characteristics of the received data. The methods include receiving video and graphics/UI elements, converting the video to HDR and/or WCG, performing statistical analysis of received data and any additional applicable rendering information, and assembling a video frame with the received components based on the statistical analysis. The assembled video frame may be matched to a color space and displayed. The video data and graphics/UI data may have or be adjusted to have the same white point and/or primaries.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: May 15, 2018
    Assignee: Apple Inc.
    Inventors: Haitao Guo, Kenneth I. Greenebaum, Guy Cote, David W. Singer, Alexandros Tourapis
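    Illustrative sketch: one hypothetical way to mix SDR graphics/UI over HDR video as the abstract describes: decode the UI to linear light against an assumed 100-nit SDR reference white, pick a UI gain from a crude statistic of the video frame, and alpha-blend. The gamma value, reference white, and gain rule are assumptions, not the patent's method.

```python
import numpy as np

SDR_REFERENCE_NITS = 100.0   # assumed SDR reference white
GAMMA = 2.2                  # simple display-gamma approximation

def sdr_to_linear_nits(rgb):
    """Decode 0..1 gamma-encoded SDR pixels to absolute linear light in nits."""
    return (np.clip(rgb, 0, 1) ** GAMMA) * SDR_REFERENCE_NITS

def composite_ui_over_hdr(hdr_nits, ui_rgb, ui_alpha, ui_gain=1.0):
    """Alpha-blend an SDR UI layer over HDR video, both expressed in linear nits.
    ui_gain lets the UI be dimmed or brightened based on statistics of the video,
    echoing the statistical-analysis step in the abstract."""
    ui_nits = sdr_to_linear_nits(ui_rgb) * ui_gain
    a = ui_alpha[..., None]
    return a * ui_nits + (1 - a) * hdr_nits

# Example: dim the UI when the underlying video is mostly dark.
video = np.random.rand(4, 4, 3) * 600.0        # fake HDR frame, values in nits
ui = np.ones((4, 4, 3)) * 0.9                  # bright SDR UI element
alpha = np.ones((4, 4)) * 0.5
gain = 0.6 if video.mean() < 200 else 1.0      # crude statistical decision
frame = composite_ui_over_hdr(video, ui, alpha, gain)
```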
  • Patent number: 9965064
    Abstract: An apparatus, a method, and a non-transitory computer readable medium are provided. The apparatus includes: at least one processor; and at least one memory storing computer program instructions configured, working with the at least one processor, to cause the apparatus to perform at least the following: detecting bending of a flexible auto-stereoscopic display comprising a parallax barrier arrangement; and compensating for movement of the parallax barrier arrangement, caused by the bending of the flexible auto-stereoscopic display, by adjusting one or more characteristics of the flexible auto-stereoscopic display in dependence upon the bending of the flexible auto-stereoscopic display.
    Type: Grant
    Filed: October 5, 2012
    Date of Patent: May 8, 2018
    Assignee: Nokia Technologies Oy
    Inventors: Tero Rissa, Aki Happonen
  • Patent number: 9965884
    Abstract: Methods and devices for determining scoring models of a three-dimensional animation scene frame are provided.
    Type: Grant
    Filed: January 12, 2016
    Date of Patent: May 8, 2018
    Assignee: BEIJING UNIVERSITY OF POSTS AND TELECOMMUNICATIONS
    Inventors: Liang Liu, Huadong Ma, Zeyu Wang, Dawei Lu
  • Patent number: 9965471
    Abstract: A system and method for capturing a location based experience at an event including a plurality of mobile devices having a camera employed near a point of interest to capture random, crowdsourced images and associated metadata near said point of interest. In a preferred form, the images include depth camera information from prepositioned devices around the point of interest during the event. A network communicates images, depth information, and metadata to build a 3D model of the region, preferably with the location of contributors known. Users connect to this experience platform to view the 3D model from a user selected location and orientation and to participate in experiences with, for example, a social network.
    Type: Grant
    Filed: June 17, 2015
    Date of Patent: May 8, 2018
    Inventors: Charles D. Huston, Chris Coleman
  • Patent number: 9965876
    Abstract: A graphics processing pipeline determines whether respective graphics processing operations, such as respective blends, respective depth tests, etc., to be performed at a stage of the graphics processing pipeline would produce the same result for each sampling point of a set of plural sampling points represented by a fragment being processed by the graphics processing pipeline. If it is determined that respective graphics processing operations would produce the same result for each of the sampling points, then only a single instance of the graphics processing operation is performed and the result of that graphics processing operation is associated with each of the sampling points. The number of instances of the graphics processing operations needed to process the set of plural sampling points which the fragment represents is reduced in comparison to conventional multisampling graphics processing techniques which perform graphics processing operations for fragments on a “per sample” basis.
    Type: Grant
    Filed: March 18, 2013
    Date of Patent: May 8, 2018
    Assignee: Arm Limited
    Inventors: Andreas Engh Halstvedt, Sean Tristram Ellis, Jorn Nystad, Sandeep Kakarlapudi
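    Illustrative sketch: the core idea of the abstract, in hypothetical Python: if every covered sample of a fragment currently holds the same destination value, perform the blend once and broadcast the result to all samples; otherwise fall back to per-sample blending.

```python
def blend(src, dst):
    """Example blend: standard source-over alpha blend on RGBA tuples."""
    sa = src[3]
    return tuple(s * sa + d * (1 - sa) for s, d in zip(src, dst))

def shade_fragment(frag_color, covered_samples, framebuffer):
    """covered_samples: sample indices covered by this fragment.
    framebuffer: dict mapping sample index -> current RGBA value.
    When all covered samples hold the same value, the blend would produce the
    same result for each of them, so it is evaluated only once."""
    dst_values = [framebuffer[s] for s in covered_samples]
    if all(d == dst_values[0] for d in dst_values):
        result = blend(frag_color, dst_values[0])        # a single blend...
        for s in covered_samples:
            framebuffer[s] = result                      # ...shared by every sample
    else:
        for s, d in zip(covered_samples, dst_values):    # conventional per-sample path
            framebuffer[s] = blend(frag_color, d)

# Example: four samples with identical contents trigger the single-blend path.
fb = {0: (0, 0, 0, 1), 1: (0, 0, 0, 1), 2: (0, 0, 0, 1), 3: (0, 0, 0, 1)}
shade_fragment((1.0, 0.5, 0.0, 0.5), covered_samples=[0, 1, 2, 3], framebuffer=fb)
```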
  • Patent number: 9959655
    Abstract: A method, system, and computer program product provide the ability to render an animated creature in real-time. A creature diagram for a creature, having sections and chains, is drawn. An effector is created for each section and each chain, and defines a target position and an orientation to be reached. A chain solving type is selected for the chains, and is used to simulate a desired biomechanical behavior of the creature. The creature diagram, including the sections, chains, and chain solving types, is mapped to a three-dimensional (3D) model. The creature is animated/rendered in a real-time 3D application based on the mapping.
    Type: Grant
    Filed: April 15, 2016
    Date of Patent: May 1, 2018
    Assignee: Autodesk, Inc.
    Inventor: Alain Baur
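    Illustrative sketch: the abstract leaves the chain-solving types unspecified; FABRIK, shown below, is one common chain solver that moves a joint chain so its end reaches an effector's target position (the orientation part of the effector is ignored here). It is given only as an example of what a "chain solving type" might be, not as the patented method.

```python
import numpy as np

def fabrik(joints, target, iterations=10, tol=1e-3):
    """FABRIK chain solver: reposition a chain of joints (3D points with fixed
    segment lengths) so the last joint reaches the effector target."""
    joints = [np.asarray(j, float) for j in joints]
    lengths = [np.linalg.norm(joints[i + 1] - joints[i]) for i in range(len(joints) - 1)]
    root = joints[0].copy()
    target = np.asarray(target, float)
    for _ in range(iterations):
        if np.linalg.norm(joints[-1] - target) < tol:
            break
        # backward pass: pin the end effector to the target
        joints[-1] = target.copy()
        for i in range(len(joints) - 2, -1, -1):
            d = joints[i] - joints[i + 1]
            joints[i] = joints[i + 1] + d / np.linalg.norm(d) * lengths[i]
        # forward pass: pin the root back to its original position
        joints[0] = root.copy()
        for i in range(len(joints) - 1):
            d = joints[i + 1] - joints[i]
            joints[i + 1] = joints[i] + d / np.linalg.norm(d) * lengths[i]
    return joints

# Example: a three-segment limb reaching toward an effector target.
chain = fabrik([(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)], target=(2.0, 1.5, 0.5))
```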
  • Patent number: 9947098
    Abstract: A solution for generating a 3D representation of an object in a scene is provided. A depth map representation of the object is combined with a reflectivity map representation of the object to generate the 3D representation of the object. The 3D representation of the object provides more complete and accurate information of the object. An image of the object is illuminated by structured light and is captured. Pattern features rendered in the captured image of the object are analyzed to derive a depth map representation and a reflectivity map representation of the illuminated object. The depth map representation provides depth information while the reflectivity map representation provides surface information (e.g., reflectivity) of the illuminated object. The 3D representation of the object can be enhanced with additional illumination projected onto the object and additional images of the object.
    Type: Grant
    Filed: May 13, 2016
    Date of Patent: April 17, 2018
    Assignee: Facebook, Inc.
    Inventors: Nitay Romano, Nadav Grossinger
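    Illustrative sketch: a hypothetical way to combine a depth map and a reflectivity map into a single 3D representation, by back-projecting each pixel through an assumed pinhole camera model and attaching its reflectivity value. The intrinsics and the validity test are assumptions, not details from the patent.

```python
import numpy as np

def depth_reflectivity_to_points(depth, reflectivity, fx, fy, cx, cy):
    """Combine a depth map and a reflectivity map into one 3D representation:
    an (N, 4) array of [X, Y, Z, reflectivity] points, via pinhole back-projection
    (intrinsics fx, fy, cx, cy are assumed known)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z, reflectivity], axis=-1).reshape(-1, 4)
    return pts[pts[:, 2] > 0]            # drop pixels with no depth estimate

# Example with a tiny synthetic frame.
d = np.full((4, 4), 2.0); d[0, 0] = 0.0  # one pixel without a depth estimate
r = np.random.rand(4, 4)                 # per-pixel reflectivity
cloud = depth_reflectivity_to_points(d, r, fx=500, fy=500, cx=2, cy=2)
```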
  • Patent number: 9947129
    Abstract: A method for rendering volume radiographic image content of a subject forms a volume image. The method extracts a first image slice from the volume image, then modifies the extracted first image slice by defining two or more spatial frequency bands from the image slice data and applying one or more viewer adjustments to the image slice data, wherein the one or more viewer adjustments condition the image data to enhance image content in at least one of the defined spatial frequency bands. A set of display rendering parameters is generated according to the two or more frequency bands and according to viewer adjustments made for the first image slice. A second image slice is extracted from the volume image. The generated set of display rendering parameters is applied to the second image slice to render an adjusted image slice and the adjusted image slice is displayed.
    Type: Grant
    Filed: March 26, 2014
    Date of Patent: April 17, 2018
    Assignee: Carestream Health, Inc.
    Inventors: Lori L. Barski, Mary E. Couwenhoven
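    Illustrative sketch: a hypothetical two-band version of the abstract's workflow, assuming SciPy is available: split a slice into low- and high-frequency bands with a Gaussian blur, apply per-band gains derived from viewer adjustments on the first slice, and reuse the same parameter set on a second slice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_bands(slice_img, sigma=4.0):
    """Two spatial-frequency bands: a blurred low-pass band and the residual detail."""
    low = gaussian_filter(slice_img.astype(float), sigma)
    return low, slice_img - low

def render_with_params(slice_img, params):
    """Apply a stored set of display rendering parameters (per-band gains plus an
    offset) so adjustments made on one slice carry over to other slices."""
    low, high = split_bands(slice_img, params["sigma"])
    return params["low_gain"] * low + params["high_gain"] * high + params["offset"]

# The viewer tunes the first slice; the resulting parameters are reused on the second.
first_slice = np.random.rand(128, 128) * 4000.0
params = {"sigma": 4.0, "low_gain": 0.9, "high_gain": 1.5, "offset": -100.0}
adjusted_first = render_with_params(first_slice, params)
second_slice = np.random.rand(128, 128) * 4000.0
adjusted_second = render_with_params(second_slice, params)   # same parameter set
```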
  • Patent number: 9937022
    Abstract: A three-dimensional model of an object is employed to aid in navigation among a number of images of the object taken from various viewpoints. In general, an image of an object such as a digital photograph is displayed in a user interface or the like. When a user selects a point within the display that corresponds to a location on the surface of the object, another image may be identified that provides a better view of the object. In order to maintain user orientation to the subject matter while navigating to this destination viewpoint, the display may switch to a model view and a fly-over to the destination viewpoint may be animated using the model. When the destination viewpoint is reached, the display may return to an image view for further inspection, marking, or other manipulation by the user.
    Type: Grant
    Filed: January 4, 2009
    Date of Patent: April 10, 2018
    Assignee: 3M INNOVATIVE PROPERTIES COMPANY
    Inventors: Ilya A. Kriveshko, Dmitriy A. Dedkov
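    Illustrative sketch: a hypothetical fly-over between a source and a destination viewpoint, linearly interpolating the camera position and look-at target and lifting the camera mid-way so the transition arcs over the model; the lift heuristic is an assumption for illustration only.

```python
import numpy as np

def flyover_path(src_pos, src_target, dst_pos, dst_target, steps=30, lift=2.0):
    """Camera poses (position, look-at target) for an animated fly-over between two
    viewpoints of a model. The parabolic lift peaks at the middle of the transition."""
    src_pos, dst_pos = np.asarray(src_pos, float), np.asarray(dst_pos, float)
    src_target, dst_target = np.asarray(src_target, float), np.asarray(dst_target, float)
    frames = []
    for t in np.linspace(0.0, 1.0, steps):
        pos = (1 - t) * src_pos + t * dst_pos
        pos[2] += lift * 4 * t * (1 - t)      # parabolic bump, maximum at t = 0.5
        tgt = (1 - t) * src_target + t * dst_target
        frames.append((pos, tgt))
    return frames

# Example: animate from one photograph's viewpoint to another around the same object.
path = flyover_path((5, 0, 1), (0, 0, 0), (0, 5, 1), (0, 0, 0))
```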
  • Patent number: 9934611
    Abstract: Techniques are presented for constructing a digital representation of a physical environment. In some embodiments, a method includes obtaining image data indicative of the physical environment; receiving gesture input data from a user corresponding to at least one location in the physical environment, based on the obtained image data; detecting at least one discontinuity in the physical environment near the at least one location corresponding to the received gesture input data; and generating a digital surface corresponding to a surface in the physical environment, based on the received gesture input data and the at least one discontinuity.
    Type: Grant
    Filed: April 11, 2014
    Date of Patent: April 3, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Dieter Schmalstieg, Gerhard Reitmayr, Thanh Quoc Nguyen, Raphael David Andre Grasset, Tobias Martin Langlotz, Hartmut Seichter
  • Patent number: 9934615
    Abstract: An image processing system is designed to generate a canvas view that transitions between binocular views and monocular views. Initially, the image processing system receives top/bottom images and side images of a scene and calculates offsets to generate synthetic side images for left and right view of a user. To transition between binocular views and monocular views, the image processing system first warps top/bottom images onto corresponding synthetic side images to generate warped top/bottom images, which realizes the transition in terms of shape. The image processing system then morphs the warped top/bottom images onto the corresponding synthetic side images to generate blended images for left and right eye views with the blended images. The image processing system creates the canvas view which transitions between binocular views and monocular views in terms of image shape and color based on the blended images.
    Type: Grant
    Filed: April 11, 2016
    Date of Patent: April 3, 2018
    Assignee: Facebook, Inc.
    Inventors: Brian Keith Cabral, Forrest Samuel Briggs
  • Patent number: 9928649
    Abstract: A flight path of a physical aircraft vehicle is planned. A virtual three-dimensional model of a physical environment is provided. A placement indicator is tracked within the virtual three-dimensional model of the physical environment. Tracking the placement indicator includes tracking a location and an orientation of the placement indicator within the virtual three-dimensional model. A viewfinder display window that displays a simulated image captured from a simulated camera of a simulated vehicle located at the location of the placement indicator and oriented at a direction of the orientation of the placement indicator is provided. For the physical aircraft vehicle, at least a flight path and a camera image capture are planned using the placement indicator and the viewfinder display window within the virtual three-dimensional model.
    Type: Grant
    Filed: April 15, 2016
    Date of Patent: March 27, 2018
    Assignee: Amber Garage, Inc.
    Inventors: Botao Hu, Jiajie Zhang
  • Patent number: 9928626
    Abstract: An apparatus including an image processor configured to receive a video including an object, determine a positional relationship between the apparatus and the object, and change a positional relationship between an image superimposed on the video and the object when the positional relationship between the apparatus and the object changes.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: March 27, 2018
    Assignee: SONY CORPORATION
    Inventor: Shunichi Kasahara
  • Patent number: 9922399
    Abstract: A direction and distance of movement of a display device as well as of a user of the display device are determined. Based on these determined directions and distances of movement, compensation to apply to content displayed on the display device to compensate for movement of the user with respect to the device is determined and applied to the content. A portion of the display device at which the user is looking can also be detected. The compensation is applied to the content only if applying the compensation would not result in the portion being positioned beyond the display device. If applying the compensation would result in the portion being positioned beyond the display device then appropriate corrective action is taken, such as not applying the compensation to the content.
    Type: Grant
    Filed: July 25, 2016
    Date of Patent: March 20, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Robin Abraham, Andrew V. Fawcett
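    Illustrative sketch: a hypothetical version of the abstract's compensation-and-clamp logic: compute an offset that counters device motion relative to the user, and apply it only if the region the user is looking at stays on the display; otherwise apply no compensation as the corrective action. All names and the example values are assumptions.

```python
def compensation_offset(device_delta, user_delta, gain=1.0):
    """Offset (pixels) to apply to displayed content so it appears steady relative
    to the user: counter the device's movement, follow the user's movement."""
    dx = gain * (user_delta[0] - device_delta[0])
    dy = gain * (user_delta[1] - device_delta[1])
    return dx, dy

def apply_if_visible(gaze_rect, screen_rect, offset):
    """gaze_rect: (x, y, w, h) region the user is looking at; screen_rect likewise.
    Apply the compensation only if it keeps the gazed-at region on the display;
    otherwise take corrective action (here: apply no compensation at all)."""
    x, y, w, h = gaze_rect
    sx, sy, sw, sh = screen_rect
    nx, ny = x + offset[0], y + offset[1]
    if sx <= nx and sy <= ny and nx + w <= sx + sw and ny + h <= sy + sh:
        return offset            # safe to compensate
    return (0.0, 0.0)            # would push the gazed-at region off the display

# Example: the device shook 12 px to the right while the user stayed still.
off = compensation_offset(device_delta=(12, 0), user_delta=(0, 0))
off = apply_if_visible((1800, 500, 100, 100), (0, 0, 1920, 1080), off)
```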