Patents Examined by Daniel Hajnik
  • Patent number: 9483865
    Abstract: Aspects comprise a ray shooting method based on the data structure of a uniform grid of cells, and on local stencils in cells. The high traversal and construction costs of acceleration structures are cut down. The visibility of objects from the viewpoint and from light sources, as well as the primary workload and its distribution among cells, are obtained in the preprocessing stage and cached in stencils for runtime use. At runtime, the use of stencils allows complete locality at each cell, for load-balanced parallel processing.
    Type: Grant
    Filed: September 7, 2014
    Date of Patent: November 1, 2016
    Assignee: ADSHIR LTD.
    Inventor: Reuven Bakalash
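
A rough illustration of the uniform-grid idea in patent 9483865 (not the patented stencil method itself): the sketch below walks a ray through a uniform grid of cells with a standard 3D DDA traversal, which is the kind of per-cell locality the abstract relies on. All identifiers and constants (`grid_traverse`, `cell_size`, the example ray) are hypothetical.

```python
import math

def grid_traverse(origin, direction, grid_min, cell_size, grid_dims):
    """Yield the (ix, iy, iz) indices of uniform-grid cells pierced by a ray.

    A standard 3D DDA walk (Amanatides & Woo style); illustrative only.
    """
    # Cell index containing the ray origin.
    idx = [int((origin[a] - grid_min[a]) // cell_size) for a in range(3)]
    step, t_max, t_delta = [], [], []
    for a in range(3):
        if direction[a] > 0:
            step.append(1)
            next_bound = grid_min[a] + (idx[a] + 1) * cell_size
            t_max.append((next_bound - origin[a]) / direction[a])
            t_delta.append(cell_size / direction[a])
        elif direction[a] < 0:
            step.append(-1)
            next_bound = grid_min[a] + idx[a] * cell_size
            t_max.append((next_bound - origin[a]) / direction[a])
            t_delta.append(cell_size / -direction[a])
        else:
            step.append(0)
            t_max.append(math.inf)
            t_delta.append(math.inf)
    while all(0 <= idx[a] < grid_dims[a] for a in range(3)):
        yield tuple(idx)
        a = min(range(3), key=lambda k: t_max[k])  # axis of the nearest cell boundary
        idx[a] += step[a]
        t_max[a] += t_delta[a]

# Example: walk a ray through a 4x4x4 grid of unit cells.
for cell in grid_traverse((0.5, 0.5, 0.5), (1.0, 0.7, 0.2), (0.0, 0.0, 0.0), 1.0, (4, 4, 4)):
    print(cell)
```
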
  • Patent number: 9485346
    Abstract: A method and apparatus for controlling a playback speed of an animation message in a mobile terminal is provided. The method includes recognizing at least one object to be displayed that is included in the received animation message; determining the playback speed of the received animation message with respect to each object to be displayed according to a recognized feature of each object; and displaying the animation message according to the determined playback speed.
    Type: Grant
    Filed: October 23, 2012
    Date of Patent: November 1, 2016
    Assignee: Samsung Electronics Co., Ltd
    Inventors: Do-Hyeon Kim, Won-Suk Chang, Dong-Hyuk Lee, Seong-Taek Hwang
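
A minimal sketch of per-object playback-speed control as described in patent 9485346, under the assumption that the "recognized feature" is simply an object's kind and stroke count; the class names, speed table, and values are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class DrawingObject:
    kind: str          # e.g. "text", "sketch", "emoticon"
    stroke_count: int  # number of recorded strokes for this object

# Hypothetical per-kind base speeds (strokes per second); not taken from the patent.
BASE_SPEED = {"text": 8.0, "sketch": 4.0, "emoticon": 12.0}

def playback_schedule(objects):
    """Return (object, duration_seconds) pairs, one per recognized object.

    Each object's playback speed is chosen from a recognized feature of the
    object (here simply its kind and stroke count).
    """
    schedule = []
    for obj in objects:
        speed = BASE_SPEED.get(obj.kind, 6.0)   # strokes per second
        duration = obj.stroke_count / speed     # seconds to replay this object
        schedule.append((obj, duration))
    return schedule

message = [DrawingObject("text", 24), DrawingObject("sketch", 40), DrawingObject("emoticon", 6)]
for obj, secs in playback_schedule(message):
    print(f"{obj.kind}: play back over {secs:.2f} s")
```
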
  • Patent number: 9472015
    Abstract: A business management system for visualizing transactional data objects in real time is provided. An example system accesses a stream of transactional data objects and generates a presentation of the transactional data objects in a three-dimensional graphical paradigm. A viewer may manipulate the presentation of the transactional data objects by engaging gestures and visual controls that may be provided on a display screen.
    Type: Grant
    Filed: December 3, 2012
    Date of Patent: October 18, 2016
    Assignee: SAP SE
    Inventors: Vishal Sikka, Samuel J. Yen, Sanjay Rajagopalan, Jeong H. Kim
  • Patent number: 9460537
    Abstract: An apparatus designs an animation expressing a transition of display forms of an object. A reception unit receives an operation for setting key frames in the animation at one or more time points on a time axis displayed on a screen, the time axis corresponding to the animation, and for instructing a display form of the object at each of the set key frames. The reception unit receives, in a state where one or more key frames are set on the time axis and a display form of the object is instructed at each of the set key frames, an instruction for reflecting a display form of the object instructed at a first key frame of the already set one or more key frames as a display form of the object at a second key frame set on the time axis. A display control unit displays indicators representing the key frames.
    Type: Grant
    Filed: December 6, 2010
    Date of Patent: October 4, 2016
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Kenichiro Nakagawa, Makoto Hirota, Rei Ishikawa, Masayuki Ishizawa
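
A toy model of the key-frame workflow in patent 9460537: display forms are set at time points on a timeline, and the form instructed at one key frame can be reflected at another. The `AnimationTimeline` class and its method names are hypothetical.

```python
class AnimationTimeline:
    """A minimal key-frame timeline: time point -> display form of the object."""

    def __init__(self):
        self.key_frames = {}   # time (seconds) -> display form, e.g. {"x": 0, "opacity": 1.0}

    def set_key_frame(self, time, display_form):
        # Set (or overwrite) the display form instructed at this key frame.
        self.key_frames[time] = dict(display_form)

    def reflect_key_frame(self, source_time, target_time):
        # Reflect the display form of an already-set key frame at another time point.
        self.key_frames[target_time] = dict(self.key_frames[source_time])

timeline = AnimationTimeline()
timeline.set_key_frame(0.0, {"x": 0, "y": 0, "opacity": 1.0})
timeline.set_key_frame(2.0, {"x": 100, "y": 40, "opacity": 0.5})
timeline.reflect_key_frame(0.0, 4.0)   # the new key frame reuses the form of the first
print(sorted(timeline.key_frames.items()))
```
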
  • Patent number: 9437038
    Abstract: Approaches enable image content (e.g., still or video content) to be displayed in such a way that the image content will appear, to a viewer, to include portions with different locations in physical space, with the relative positioning of those portions being determined at least in part upon a current relative position and/or orientation of the viewer with respect to the device, as well as changes in that relative position and/or orientation. For example, relationship pairs for image content capable of being displayed on a display screen can be determined. Based on the relationship pairs, a node hierarchy that includes position information for planes of content that include the image content can be determined.
    Type: Grant
    Filed: September 26, 2013
    Date of Patent: September 6, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Kevin Robert Costello, Christopher Wayne Lockhart
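
One way to picture the position-dependent layering in patent 9437038 is a simple parallax model in which each plane of content shifts by an amount that depends on its notional depth and the viewer's offset. This is an illustrative simplification, not the patented node-hierarchy approach; the function name and `strength` constant are assumptions.

```python
def plane_offsets(viewer_offset_x, viewer_offset_y, plane_depths, strength=0.05):
    """Return an on-screen (dx, dy) shift for each content plane.

    Planes with smaller depth (closer to the viewer) shift more than deeper
    planes, so the layers appear to occupy different positions in space as the
    viewer moves relative to the device.
    """
    offsets = []
    for depth in plane_depths:
        scale = strength / (1.0 + depth)    # nearer planes move more
        offsets.append((-viewer_offset_x * scale, -viewer_offset_y * scale))
    return offsets

# Viewer's head has shifted 30 px worth of offset to the right, 10 px up.
for depth, (dx, dy) in zip([0.0, 1.0, 4.0], plane_offsets(30, -10, [0.0, 1.0, 4.0])):
    print(f"plane at depth {depth}: shift ({dx:+.2f}, {dy:+.2f})")
```
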
  • Patent number: 9437027
    Abstract: The subject disclosure is directed towards layered image understanding, by which a layered scene representation is generated for an image. Providing such a scene representation explains the scene presented in the image by defining that scene's semantic structure. To generate the layered scene representation, the subject disclosure recognizes objects within the image by combining objects sampled from annotated image data and determining whether that combination is semantically well-formed and matches the visual appearance of the image. The objects are transformed and then can be used to modify the query image. The subject disclosure models the objects into semantic segments that form a portion of the scene representation.
    Type: Grant
    Filed: June 3, 2013
    Date of Patent: September 6, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Phillip John Isola, Ce Liu
  • Patent number: 9430867
    Abstract: An image processing apparatus may include a storage unit to store a lookup table (LUT) including information on corresponding relations between an occlusion vector related to at least one point of a 3-dimensional (3D) object and a spherical harmonics (SH) coefficient; and a rendering unit to determine a first SH coefficient corresponding to a first occlusion vector related to a first point of the 3D object using the LUT and to determine a pixel value of the first point using the first SH coefficient.
    Type: Grant
    Filed: November 20, 2013
    Date of Patent: August 30, 2016
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: In Woo Ha
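
The lookup step described in patent 9430867 can be sketched as: quantize an occlusion vector to the nearest key in a precomputed table, fetch the stored spherical harmonics (SH) coefficients, and combine them with the light's SH coefficients to get a pixel value. The table contents, key quantization, and coefficient counts below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical precomputed LUT: quantized occlusion vector -> SH coefficient vector
# (here 4 coefficients). In practice this would be filled offline.
LUT = {
    (0, 0, 1): np.array([0.9, 0.0, 0.0, 0.3]),
    (0, 1, 0): np.array([0.8, 0.0, 0.4, 0.0]),
    (1, 0, 0): np.array([0.8, 0.4, 0.0, 0.0]),
}

def quantize(occlusion_vector):
    """Snap an occlusion vector to the nearest key stored in the LUT."""
    v = np.asarray(occlusion_vector, dtype=float)
    keys = list(LUT)
    dists = [np.linalg.norm(v - np.array(k)) for k in keys]
    return keys[int(np.argmin(dists))]

def shade_point(occlusion_vector, light_sh):
    """Look up the SH coefficients for a point and dot them with the light's SH."""
    sh = LUT[quantize(occlusion_vector)]
    return float(np.dot(sh, light_sh))

light_sh = np.array([1.0, 0.2, 0.5, 0.1])   # hypothetical environment-light SH coefficients
print(shade_point((0.1, 0.2, 0.95), light_sh))
```
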
  • Patent number: 9430878
    Abstract: A head mounted display and a control method thereof are disclosed. The control method comprises the following steps. An application processor controls a pico projector unit to project a virtual image having a virtual object located at a virtual image coordinate in a virtual image coordinate system. An eye image sensing unit captures eye image data. A sensing apparatus senses a touch object to output sensing data. An ASIC obtains a real image coordinate of the touch object in a real image coordinate system according to the sensing data. The ASIC obtains a pupil position according to the eye image data, and controls an adjustment unit to adjust an imaging position of the virtual image according to the pupil position. The ASIC determines whether the touch object touches the virtual object according to the pupil position, the real image coordinate, and the virtual image coordinate.
    Type: Grant
    Filed: April 24, 2014
    Date of Patent: August 30, 2016
    Assignee: QUANTA COMPUTER INC.
    Inventors: Chung-Te Li, Wen-Chu Yang
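
A simplified rendition of the final determination step in patent 9430878: shift the virtual object's imaging position according to the measured pupil offset, then compare the touch object's sensed real coordinate against the adjusted virtual coordinate. The gain, threshold, and coordinate conventions are assumptions, not values from the patent.

```python
import math

def adjust_virtual_position(virtual_pos, pupil_offset, gain=0.5):
    """Shift the virtual object's imaging position to follow the pupil offset."""
    return (virtual_pos[0] + gain * pupil_offset[0],
            virtual_pos[1] + gain * pupil_offset[1],
            virtual_pos[2])

def is_touching(real_pos, virtual_pos, pupil_offset, threshold=0.02):
    """Decide whether the touch object touches the virtual object.

    Compares the sensed real coordinate with the pupil-adjusted virtual
    coordinate; distances are in meters and the threshold is arbitrary.
    """
    adjusted = adjust_virtual_position(virtual_pos, pupil_offset)
    return math.dist(real_pos, adjusted) <= threshold

print(is_touching((0.31, 0.10, 0.40), (0.30, 0.10, 0.40), pupil_offset=(0.01, 0.0)))
```
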
  • Patent number: 9424239
    Abstract: A shared renderer maintains shared state information to which two or more augmented reality applications contribute. The shared renderer then provides a single output presentation based on the shared state information. Among other aspects, the shared renderer includes a permission mechanism by which applications can share information regarding object properties. The shared renderer may also include: a physics engine for simulating movement of at least one object that is represented by the shared state information; an annotation engine for managing a presentation of annotations produced by plural applications; and/or an occlusion engine for managing the behavior of the output presentation when two or more objects, produced by two or more applications, overlap within the output presentation.
    Type: Grant
    Filed: September 6, 2013
    Date of Patent: August 23, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alan M. Dunn, Tadayoshi Kohno, David A. Molnar, Alexander N. Moshchuk, Franziska Roesner, Jiahe Helen Wang
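
A toy version of the shared-state idea in patent 9424239: several applications contribute objects to one shared renderer, a permission table gates cross-application reads of object properties, and a single pass produces the combined output. All class, method, and application names are hypothetical.

```python
class SharedRenderer:
    """Toy shared state: several AR applications contribute objects, one output pass."""

    def __init__(self):
        self.objects = {}       # object id -> {"owner": app, "props": {...}}
        self.permissions = {}   # (owner, requester) -> set of readable property names

    def contribute(self, app, obj_id, props):
        # An application adds or updates an object in the shared state.
        self.objects[obj_id] = {"owner": app, "props": dict(props)}

    def grant(self, owner, requester, prop_names):
        # Owner lets another application read some of its objects' properties.
        self.permissions.setdefault((owner, requester), set()).update(prop_names)

    def read_property(self, requester, obj_id, prop):
        obj = self.objects[obj_id]
        allowed = self.permissions.get((obj["owner"], requester), set())
        if requester != obj["owner"] and prop not in allowed:
            raise PermissionError(f"{requester} may not read {prop!r} of {obj_id}")
        return obj["props"][prop]

    def render(self):
        # Single output presentation: draw every contributed object back-to-front.
        ordered = sorted(self.objects.items(),
                         key=lambda kv: kv[1]["props"].get("depth", 0.0), reverse=True)
        for obj_id, obj in ordered:
            print(f"draw {obj_id} (from {obj['owner']}) at depth {obj['props'].get('depth', 0.0)}")

r = SharedRenderer()
r.contribute("nav_app", "arrow", {"depth": 2.0, "color": "green"})
r.contribute("ads_app", "banner", {"depth": 5.0, "color": "blue"})
r.grant("nav_app", "ads_app", {"depth"})
print(r.read_property("ads_app", "arrow", "depth"))
r.render()
```
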
  • Patent number: 9424678
    Abstract: A method for implementing teleconferences when at least one participant receives 3-D data. A data rendering device presents data in a 3-D format or in a pseudo 3-D format. A 3-D image is formed on a user computer system. The 3-D presentation is calculated by a local computer system. A block of user avatars is formed on the local computer system for all teleconference participants (including the local participant). The participant avatar includes a kinetic model (joints, muscles, body parts, etc.). The avatar includes a participant's behavior model (gestures, finger movements, facial expressions, etc.). The avatar also includes an avatar skin. The avatar skin includes a stable (unchangeable) part of the participant, containing the face and visible body parts, as well as modifiable parts (e.g., clothes, shoes, accessories, etc.).
    Type: Grant
    Filed: August 16, 2013
    Date of Patent: August 23, 2016
    Assignee: Acronis International GmbH
    Inventors: Anton Enakiev, Alexander G. Tormasov, Serguei M. Beloussov, Juri V. Tsibrovski, Stanislav S. Protassov
  • Patent number: 9418467
    Abstract: A pedestrian pose classification model is trained. A three-dimensional (3D) model of a pedestrian is received. A set of image parameters indicating how to generate an image of a pedestrian is received. A two-dimensional (2D) synthetic image is generated based on the received 3D model and the received set of image parameters. The generated synthetic image is annotated with the set of image parameters. A plurality of pedestrian pose classifiers is trained using the annotated synthetic image.
    Type: Grant
    Filed: November 20, 2013
    Date of Patent: August 16, 2016
    Assignee: Honda Motor Co., Ltd.
    Inventor: Bernd Heisele
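
The training pipeline in patent 9418467 has the shape sketched below: draw image parameters, generate a synthetic 2D sample from them, keep the parameters as annotations, and fit classifiers on the result. The renderer here is a stand-in that emits a fake feature vector rather than pixels, and scikit-learn is assumed to be available; none of the names or constants come from the patent.

```python
import random
from sklearn.linear_model import LogisticRegression

POSES = ["standing", "walking", "crossing"]

def render_synthetic(pose, view_angle_deg):
    """Stand-in for rendering a 2D image from a 3D pedestrian model.

    Returns a tiny fake feature vector instead of pixels; a real pipeline would
    rasterize the 3D model under the given image parameters.
    """
    base = {"standing": [0.1, 0.9], "walking": [0.5, 0.5], "crossing": [0.9, 0.2]}[pose]
    noise = [random.gauss(0, 0.05) for _ in base]
    return [b + n for b, n in zip(base, noise)] + [view_angle_deg / 180.0]

random.seed(0)
features, labels = [], []
for _ in range(300):
    pose = random.choice(POSES)          # image parameter: pose class
    angle = random.uniform(-90, 90)      # image parameter: viewing angle
    features.append(render_synthetic(pose, angle))
    labels.append(pose)                  # the parameters double as annotations

clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict([render_synthetic("walking", 10.0)]))
```
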
  • Patent number: 9406138
    Abstract: In one embodiment, a technique is provided for semi-automatically extracting a polyline from a linear feature in a point cloud. The user may provide initial parameters, including a point about the linear feature and a starting direction. A linear feature extraction process may automatically follow the linear feature beginning in the starting direction from about the selected point. The linear feature extraction process may attempt to follow a linear segment of the linear feature. If points can be followed that constitute a linear segment, a line segment modeling that linear segment is created. The linear feature extraction process then determines whether the end of the linear feature has been reached. If the end has not been reached, the linear feature extraction process may repeat. If the end has been reached, the linear feature extraction process may return the line segments and create a polyline from them.
    Type: Grant
    Filed: September 17, 2013
    Date of Patent: August 2, 2016
    Assignee: Bentley Systems, Incorporated
    Inventor: Mathieu St-Pierre
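
A much-simplified 2D analogue of the follow-the-feature loop in patent 9406138: starting from a seed point and direction, repeatedly step forward, snap to the nearest nearby point, and stop when no point is close enough (the end of the feature). The step size, search radius, and function name are hypothetical.

```python
import math

def follow_linear_feature(points, seed, direction, step=1.0, radius=0.75, max_segments=50):
    """Greedily follow a roughly linear run of 2D points and return polyline vertices.

    Starting near `seed`, repeatedly step along `direction`, snap to the nearest
    point within `radius`, and stop when no point is close enough (end reached).
    """
    dx, dy = direction
    norm = math.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm
    vertices = [seed]
    current = seed
    for _ in range(max_segments):
        target = (current[0] + step * dx, current[1] + step * dy)
        nearby = [p for p in points if math.dist(p, target) <= radius]
        if not nearby:
            break                                   # end of the linear feature
        current = min(nearby, key=lambda p: math.dist(p, target))
        vertices.append(current)
        # Re-estimate the local direction from the last segment.
        seg = (current[0] - vertices[-2][0], current[1] - vertices[-2][1])
        length = math.hypot(*seg) or 1.0
        dx, dy = seg[0] / length, seg[1] / length
    return vertices

# Points scattered along a gentle curve.
cloud = [(x * 0.5, 0.1 * x + 0.02 * x * x) for x in range(30)]
print(follow_linear_feature(cloud, seed=(0.0, 0.0), direction=(1.0, 0.0)))
```
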
  • Patent number: 9390541
    Abstract: In accordance with some embodiments, a tile shader executes on a group of pixels prior to a pixel shader. The tile of pixels may be rectangular in some embodiments. The tile shader may be executed hierarchically, refining each tile into smaller subtiles until the pixel or sample level is reached. The tile shader program can be written to discard groups of pixels, thereby quickly removing areas of the bounding triangles that lie outside the shape being rasterized or quickly discarding groups of pixel shader executions that will not contribute to the final image.
    Type: Grant
    Filed: April 9, 2013
    Date of Patent: July 12, 2016
    Assignee: Intel Corporation
    Inventors: Jon N. Hasselgren, Tomas G. Akenine-Moller, Carl J. Munkberg, Jim K. Nilsson, Robert M. Toth, Franz P. Clarberg
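
The hierarchical refinement described in patent 9390541 can be pictured as the recursion below: a coarse per-tile test either discards a whole tile or splits it into subtiles until the pixel level is reached. The discard test here is a crude corner/center sample rather than a conservative tile test, and all names and the example shape are assumptions.

```python
def shade_hierarchically(x0, y0, size, inside, min_size=1):
    """Recursively refine a square tile, discarding tiles that a coarse test rejects.

    `inside(x, y)` is the per-pixel predicate; a tile is discarded early when its
    four corners and its center (a crude stand-in for a conservative tile test)
    all fall outside. Returns the list of pixel coordinates that reach "pixel shading".
    """
    corners = [(x0, y0), (x0 + size, y0), (x0, y0 + size), (x0 + size, y0 + size)]
    if not any(inside(x, y) for x, y in corners) and not inside(x0 + size / 2, y0 + size / 2):
        return []                                    # tile-shader discard: skip all pixels below
    if size <= min_size:
        cx, cy = x0 + 0.5, y0 + 0.5                  # pixel (or sample) level reached
        return [(x0, y0)] if inside(cx, cy) else []
    half = size // 2
    shaded = []
    for dx in (0, half):
        for dy in (0, half):
            shaded += shade_hierarchically(x0 + dx, y0 + dy, half, inside, min_size)
    return shaded

# Rasterize a circle of radius 6 centered at (8, 8) inside a 16x16 tile.
circle = lambda x, y: (x - 8) ** 2 + (y - 8) ** 2 <= 36
pixels = shade_hierarchically(0, 0, 16, circle)
print(len(pixels), "pixels shaded")
```
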
  • Patent number: 9378579
    Abstract: In various embodiments, a cloth weave structure is built from curves over the surface of a subdivision mesh at render time. A coherent woven or knitted surface is generated from interwoven curve geometry and a subdivision (or polygon) mesh. In one aspect, this is done at render time. Accordingly, in one embodiment, a geometry generation process takes an ST map as input to control the direction of flow of curves (yarns) over the surface. Since each face is calculated independently, general global coordinates in ST space are predefined (at the beginning of the render) to make sure that each face transitions smoothly to the next.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: June 28, 2016
    Assignee: Pixar
    Inventor: Philip Child
  • Patent number: 9355436
    Abstract: A first depth map is generated in response to a stereoscopic image from a camera. The first depth map includes first pixels having valid depths and second pixels having invalid depths. In response to the first depth map, a second depth map is generated for replacing at least some of the second pixels with respective third pixels having valid depths. For generating the second depth map, a particular one of the third pixels is generated for replacing a particular one of the second pixels. For generating the particular third pixel, respective weight(s) is/are assigned to a selected one or more of the first pixels in response to value similarity and spatial proximity between the selected first pixel(s) and the particular second pixel. The particular third pixel is computed in response to the selected first pixel(s) and the weight(s).
    Type: Grant
    Filed: October 24, 2012
    Date of Patent: May 31, 2016
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Buyue Zhang, Aziz Umit Batur
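
The weighting in patent 9355436 (value similarity plus spatial proximity) resembles a joint bilateral filter. The sketch below fills one invalid depth pixel from valid neighbors, weighting each neighbor by its distance and by how close its guide-image intensity is to that of the pixel being filled. The window size, sigmas, and the use of a single-channel guide image are assumptions.

```python
import math

def fill_invalid_depth(depth, guide, x, y, radius=2, sigma_s=1.5, sigma_v=10.0):
    """Replace one invalid depth value using nearby valid depths.

    Each valid neighbor is weighted by spatial proximity and by how similar its
    guide-image intensity is to the intensity at (x, y); the filled depth is the
    weighted average. `depth` holds None for invalid pixels.
    """
    h, w = len(depth), len(depth[0])
    num = den = 0.0
    for ny in range(max(0, y - radius), min(h, y + radius + 1)):
        for nx in range(max(0, x - radius), min(w, x + radius + 1)):
            if depth[ny][nx] is None:
                continue                              # only valid first pixels contribute
            ds = math.hypot(nx - x, ny - y)           # spatial proximity
            dv = abs(guide[ny][nx] - guide[y][x])     # value similarity in the guide image
            weight = math.exp(-(ds * ds) / (2 * sigma_s ** 2) - (dv * dv) / (2 * sigma_v ** 2))
            num += weight * depth[ny][nx]
            den += weight
    return num / den if den > 0 else None

depth = [[5.0, 5.1, None],
         [5.0, None, 9.8],
         [4.9, 9.9, 9.7]]
guide = [[20, 21, 80],
         [20, 22, 81],
         [19, 82, 80]]
print(round(fill_invalid_depth(depth, guide, 1, 1), 2))   # pulled toward the similar (dark) pixels
```
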
  • Patent number: 9354789
    Abstract: Organizing information around a specific spatial domain facilitates managing objects presented in visualization layers of the spatial domain. A first portion of a first program for organizing and mapping information around a specific spatial domain is executed by a first virtual system that is created in a program execution environment operable on a network server. In response to the first virtual system invoking a continuation, a second virtual system is created to execute a second portion of the first program. Invoking the continuation in the program execution environment facilitates each of the first and second virtual systems providing only the capabilities necessary to execute its respective portion of the first program. Optionally, executing the first program includes interpreting the first program with a second program.
    Type: Grant
    Filed: July 24, 2013
    Date of Patent: May 31, 2016
    Inventor: Thomas M. Stambaugh
  • Patent number: 9326675
    Abstract: The virtual vision correction technique described herein determines a user's vision characteristics and/or adjusts a display to the user's vision characteristics. The technique provides vision correction for people with vision problems by making the displayed video adapt to the person's vision correction needs. In one embodiment of the technique, the user can state their vision prescription needs, and the video displayed is processed to appear “20/20” to that person. Alternatively, the video display, for example, video display glasses, can “auto-tune” a representative image (such as an eye chart) to determine the appropriate video processing for that person.
    Type: Grant
    Filed: December 24, 2009
    Date of Patent: May 3, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Douglas Christopher Burger, David Alexander Molnar
  • Patent number: 9324173
    Abstract: A method of rendering an electronic graphical representation of a user of a computerized system includes providing a plurality of states for the electronic graphical representation including first and second differing states, monitoring a measurable quantity to provide a monitored quantity, and changing a state of the graphical representation from the first state to the second state based upon the monitored quantity. The graphical representation is an avatar, and the method includes defining a receptor point associated with the avatar and associating an object with the receptor point. The receptor point is located on the avatar. The plurality of states includes a non-hybrid state. The plurality of states includes a hybrid state. The hybrid state may be a static image hybrid state or a video hybrid state. The video hybrid state may be a live video hybrid state or a pre-recorded video hybrid state.
    Type: Grant
    Filed: July 17, 2008
    Date of Patent: April 26, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Vittorio Castelli, Rick A. Hamilton, II, Clifford A. Pickover, John J. Ritsko
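
A minimal sketch of the state-changing avatar in patent 9324173: a monitored quantity drives transitions among a non-hybrid state and hybrid states, and an object can be associated with a named receptor point on the avatar. The thresholds, state names, and monitored metric are hypothetical.

```python
class Avatar:
    """Toy avatar whose state changes based on a monitored quantity."""

    STATES = ("non_hybrid", "static_image_hybrid", "live_video_hybrid")

    def __init__(self):
        self.state = "non_hybrid"
        self.receptor_points = {}            # named point on the avatar -> attached object

    def attach(self, receptor_point, obj):
        # Associate an object (e.g. a badge or tool) with a receptor point on the avatar.
        self.receptor_points[receptor_point] = obj

    def update(self, monitored_quantity):
        """Change state from the first state to the second based on the monitored quantity."""
        if monitored_quantity > 100:
            self.state = "live_video_hybrid"
        elif monitored_quantity > 50:
            self.state = "static_image_hybrid"
        else:
            self.state = "non_hybrid"

avatar = Avatar()
avatar.attach("left_shoulder", "team_badge")
for reading in (10, 60, 150):                # e.g. an activity or attention metric
    avatar.update(reading)
    print(reading, "->", avatar.state, avatar.receptor_points)
```
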
  • Patent number: 9324182
    Abstract: Techniques for single pass radiosity from depth peels are described. In one or more embodiments, radiosity for frames of a graphics presentation is computed using depth peel techniques. This may occur by rendering geometry for a frame and then computing two depth peels per frame based on the geometry, which can be used to determine occlusion of secondary bounce lights as well as color and intensity of third bounce lights for radiosity. The two depth peels may be generated in a single rendering pass by reusing rejected geometry of a front depth peel as geometry for a back depth peel. The use of depth peels in this manner enables accelerated radiosity computations for photorealistic illumination of three dimensional graphics that may be performed dynamically at frame rates typical for real-time game play and other graphics presentations.
    Type: Grant
    Filed: August 1, 2012
    Date of Patent: April 26, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Corrinne Yu
  • Patent number: 9310885
    Abstract: A method of augmenting a first stereoscopic image, comprising a pair of images, is provided. The method includes generating a disparity map from the pair of images of the first stereoscopic image. The disparity map is indicative of distances in the first stereoscopic image. The method further includes generating a virtual three-dimensional model responsive to the distances indicated by the disparity map, modeling an interaction of a virtual object with that three-dimensional model, and outputting, for display, an image corresponding to the first stereoscopic image that comprises a visible effect of the interaction of the virtual object with the three-dimensional model.
    Type: Grant
    Filed: November 6, 2013
    Date of Patent: April 12, 2016
    Assignee: Sony Computer Entertainment Europe Limited
    Inventors: Sharwin Winesh Raghoebardayal, Simon Mark Benson, Ian Henry Bickerstaff
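
The link between a disparity map and distances in patent 9310885 is the standard stereo relation depth = f * B / d (focal length times baseline over disparity). The sketch below converts a small disparity map to depths and uses them for one simple interaction test (whether the real scene occludes a virtual object at a given depth); the focal length, baseline, and map values are made up.

```python
def disparity_to_depth(disparity, focal_px=700.0, baseline_m=0.065):
    """Convert a disparity map (pixels) to depths (meters): depth = f * B / d."""
    return [[(focal_px * baseline_m / d) if d > 0 else float("inf") for d in row]
            for row in disparity]

def virtual_object_occluded(depth_map, px, py, object_depth):
    """A virtual object drawn at pixel (px, py) is hidden if the real scene is closer."""
    return depth_map[py][px] < object_depth

disparity = [[30.0, 31.0, 5.0],
             [29.0, 30.0, 5.0]]
depth_map = disparity_to_depth(disparity)
print([[round(z, 2) for z in row] for row in depth_map])
print(virtual_object_occluded(depth_map, 0, 0, object_depth=2.5))   # scene at ~1.5 m hides a 2.5 m object
```
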