Patents Examined by Terrell Robinson
  • Patent number: 9646564
    Abstract: An information processing apparatus that, when displaying a large number of contents divided into a plurality of pages, displays the contents such that continuity between them is maintained, and enables a user to easily recognize the contents located in the vicinity of each page boundary. The information processing apparatus includes a CPU which selects and arranges the contents such that contents selected as objects to be displayed are redundant between adjacent display sections at a predetermined ratio, and subjects the contents to screen display in a display area on a display section-by-display section basis.
    Type: Grant
    Filed: January 15, 2013
    Date of Patent: May 9, 2017
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Akihiro Hamana, Satoshi Watanabe, Koichi Tanabe, Satoshi Igeta
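    Illustrative sketch: a minimal Python sketch of the page-overlap idea in the abstract above; the paginate_with_overlap function, its overlap_ratio parameter, and the 20% default are hypothetical choices, not taken from the patent.
```python
def paginate_with_overlap(items, page_size, overlap_ratio=0.2):
    """Split items into pages where adjacent pages share a fraction of
    their items, so content near each page boundary appears on both
    pages (illustrative sketch, not the patented method)."""
    overlap = max(1, int(page_size * overlap_ratio))
    step = page_size - overlap          # how far the window advances per page
    pages, start = [], 0
    while start < len(items):
        pages.append(items[start:start + page_size])
        if start + page_size >= len(items):
            break
        start += step
    return pages


if __name__ == "__main__":
    contents = [f"item{i}" for i in range(10)]
    for n, page in enumerate(paginate_with_overlap(contents, page_size=4)):
        print(n, page)   # adjacent pages repeat one boundary item
```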
  • Patent number: 9646362
    Abstract: Example embodiments reduce the processing required to zoom on graphical data visualizations by transforming only graphic elements visible in the zooming viewport. In one example embodiment, a grid overlays the component image. Prior to zooming, grid elements covered by the zooming viewport are determined and only graphic objects bounded by those grid elements are transformed during zooming.
    Type: Grant
    Filed: September 18, 2013
    Date of Patent: May 9, 2017
    Assignee: Oracle International Corporation
    Inventors: Yi Dai, Hugh Zhang, Jairam Ramanathan, Prashant Singh
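    Illustrative sketch: a minimal Python sketch of viewport-limited zooming, assuming a fixed-size grid and axis-aligned bounding boxes; all names and parameters are hypothetical, not taken from the patent.
```python
from dataclasses import dataclass


@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float


def cells_under_rect(rect, cell_size, grid_cols, grid_rows):
    """Return the set of (col, row) grid cells that a rectangle covers."""
    c0 = max(0, int(rect.x // cell_size))
    r0 = max(0, int(rect.y // cell_size))
    c1 = min(grid_cols - 1, int((rect.x + rect.w) // cell_size))
    r1 = min(grid_rows - 1, int((rect.y + rect.h) // cell_size))
    return {(c, r) for c in range(c0, c1 + 1) for r in range(r0, r1 + 1)}


def zoom_visible_objects(objects, viewport, cell_size, grid_cols, grid_rows, scale):
    """Transform (here: uniformly scale) only the graphic objects whose
    bounding box falls in a grid cell covered by the zooming viewport;
    everything outside those cells is left untouched."""
    covered = cells_under_rect(viewport, cell_size, grid_cols, grid_rows)
    transformed = []
    for obj in objects:
        if cells_under_rect(obj, cell_size, grid_cols, grid_rows) & covered:
            transformed.append(Rect(obj.x * scale, obj.y * scale,
                                    obj.w * scale, obj.h * scale))
    return transformed


# Only the first object lies under the 100x100 viewport and gets scaled.
objects = [Rect(10, 10, 30, 30), Rect(500, 500, 30, 30)]
print(zoom_visible_objects(objects, Rect(0, 0, 100, 100),
                           cell_size=50, grid_cols=12, grid_rows=12, scale=2.0))
```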
  • Patent number: 9630631
    Abstract: A computer-implemented method and system for in-vehicle dynamic virtual reality includes determining a spatial environment around a vehicle and one or more maneuver paths for the vehicle in the spatial environment. The method includes updating a virtual view based on the spatial environment and the maneuver paths. Updating the virtual view includes augmenting one or more components of a virtual world model to indicate the spatial environment and the maneuver paths. The virtual view is rendered to an output device. The method includes generating a vehicle maneuver request for the vehicle. The vehicle maneuver request includes at least a desired vehicle maneuver and the vehicle maneuver request is based at least in part on the spatial environment. The method includes controlling one or more vehicle systems of the vehicle based on the vehicle maneuver request.
    Type: Grant
    Filed: November 14, 2014
    Date of Patent: April 25, 2017
    Assignee: Honda Motor Co., Ltd.
    Inventors: Arthur Alaniz, Joseph Whinnery, Robert Wesley Murrish, Michael Eamonn Gleeson-May
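    Illustrative sketch: a heavily simplified Python sketch of updating a virtual world model from a spatial environment and maneuver paths and picking a maneuver request; the classes, the clearance table, and the selection rule are all hypothetical stand-ins, not the patented system.
```python
from dataclasses import dataclass, field


@dataclass
class VirtualWorldModel:
    """Hypothetical stand-in for the virtual world model: a dictionary of
    named components from which the virtual view is built."""
    components: dict = field(default_factory=dict)


def update_virtual_view(model, spatial_environment, maneuver_paths):
    """Augment components of the model to reflect the sensed surroundings
    and the candidate maneuver paths, then return a renderable view."""
    model.components["environment"] = spatial_environment
    model.components["paths"] = maneuver_paths
    return dict(model.components)          # the "virtual view" to render


def choose_maneuver_request(spatial_environment, maneuver_paths):
    """Pick a desired maneuver based on the spatial environment
    (here: simply the candidate path with the largest clearance)."""
    return max(maneuver_paths, key=lambda p: spatial_environment.get(p, 0.0))


if __name__ == "__main__":
    model = VirtualWorldModel()
    env = {"left_lane": 1.5, "right_lane": 4.0}   # metres of clearance (made up)
    view = update_virtual_view(model, env, ["left_lane", "right_lane"])
    print("render:", view)
    print("maneuver request:", choose_maneuver_request(env, ["left_lane", "right_lane"]))
```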
  • Patent number: 9628770
    Abstract: A method and system for rendering scenes in stereoscopic 3-D comprises identifying, or detecting, that a rate of change of one or more elements of a scene to be rendered in stereoscopic 3-D satisfies a criterion. The perceived depth of elements of the scene is then dynamically modified, and the scene is rendered. The method can reduce eye strain of a viewer of the scene, since the perceived difference in depth of objects or elements of the scene is reduced while those elements are changing position or visibility dramatically.
    Type: Grant
    Filed: June 14, 2012
    Date of Patent: April 18, 2017
    Assignee: BlackBerry Limited
    Inventors: Marcus Eriksson, Dan Zacharias Gardenfors
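    Illustrative sketch: a minimal Python sketch of compressing stereo disparity when an element changes quickly; the threshold, the compression curve, and the parameter names are illustrative assumptions, not taken from the patent.
```python
def adjusted_disparity(base_disparity, rate_of_change, threshold, min_scale=0.3):
    """Reduce perceived depth while an element is changing quickly.

    base_disparity : stereo separation (pixels) encoding the element's depth
    rate_of_change : e.g. screen-space speed or visibility change per frame
    threshold      : rate above which depth starts being compressed
    Returns a disparity scaled toward zero as the rate grows (illustrative only).
    """
    if rate_of_change <= threshold:
        return base_disparity
    # Compress smoothly: the faster the change, the flatter the element appears.
    excess = rate_of_change - threshold
    scale = max(min_scale, 1.0 / (1.0 + excess))
    return base_disparity * scale


# An element moving 3x faster than the threshold is rendered with noticeably
# less left/right separation, easing eye strain while it moves.
print(adjusted_disparity(base_disparity=12.0, rate_of_change=3.0, threshold=1.0))
```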
  • Patent number: 9599819
    Abstract: A method for in-vehicle dynamic virtual reality includes receiving vehicle data from one or more vehicle systems of a vehicle, wherein the vehicle data includes vehicle dynamics data and receiving user data from a virtual reality device. The method includes generating a virtual view based on the vehicle data, the user data and a virtual world model, the virtual world model including one or more components that define the virtual view, wherein generating the virtual view includes augmenting one or more components of the virtual world model according to at least one of the vehicle data and the user data and rendering the virtual view to an output device by controlling the output device to update display of the virtual view according to the vehicle dynamics data.
    Type: Grant
    Filed: May 30, 2014
    Date of Patent: March 21, 2017
    Assignee: Honda Motor Co., Ltd.
    Inventors: Arthur Alaniz, Joseph Whinnery, Robert Wesley Murrish, Michael Eamonn Gleeson-May
  • Patent number: 9558531
    Abstract: A graphics processing method for three-dimensional images, applied to a first buffer for storing right-view contents and a second buffer for storing left-view contents, includes the following steps: when a current Vsync status indicates that a display engine is not operating within a right Vsync period of a right-view frame, a drawing engine drawing the right-view contents stored in the first buffer; when the current Vsync status indicates that the display engine is not operating within a left Vsync period of a left-view frame, the drawing engine drawing the left-view contents stored in the second buffer; during the right Vsync period of the right-view frame, the display engine displaying the right-view contents stored in the first buffer; and during the left Vsync period of the left-view frame, the display engine displaying the left-view contents stored in the second buffer.
    Type: Grant
    Filed: November 25, 2015
    Date of Patent: January 31, 2017
    Assignee: MEDIATEK INC.
    Inventors: Te-Chi Hsiao, Chin-Jung Yang
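    Illustrative sketch: a toy Python simulation of the buffer scheduling described above, assuming the display engine simply alternates right and left Vsync periods; the scheduling rule shown is an illustrative reading, not the claimed method.
```python
def run_stereo_pipeline(num_periods):
    """Toy simulation of the double-buffered scheme: the display engine
    alternates right and left Vsync periods, and the drawing engine only
    writes into a buffer while the display engine is not scanning that
    same buffer out."""
    buffers = {"right": "right view (initial)", "left": "left view (initial)"}
    for period in range(num_periods):
        displaying = "right" if period % 2 == 0 else "left"
        drawing = "left" if displaying == "right" else "right"
        # Drawing engine: update the buffer that is not being displayed.
        buffers[drawing] = f"{drawing} view drawn during period {period}"
        # Display engine: scan out the buffer for the current Vsync period.
        print(f"Vsync period {period}: display {buffers[displaying]}")


run_stereo_pipeline(4)
```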
  • Patent number: 9547173
    Abstract: A method for in-vehicle dynamic virtual reality includes receiving vehicle data from one or more vehicle systems of a vehicle, wherein the vehicle data includes vehicle dynamics data and receiving user data from a virtual reality device. The method includes generating a virtual view based on the vehicle data, the user data and a virtual world model, the virtual world model including one or more components that define the virtual view, wherein generating the virtual view includes augmenting one or more components of the virtual world model according to at least one of the vehicle data and the user data and rendering the virtual view to an output device by controlling the output device to update display of the virtual view according to the vehicle dynamics data.
    Type: Grant
    Filed: February 11, 2014
    Date of Patent: January 17, 2017
    Assignee: Honda Motor Co., Ltd.
    Inventors: Arthur Alaniz, Joseph Whinnery, Robert Murrish, Michael Eamonn Gleeson-May
  • Patent number: 9536353
    Abstract: A method for in-vehicle dynamic virtual reality includes receiving vehicle data from a portable device, the portable device operably connected for computer communication to an output device, the vehicle data including vehicle dynamics data, and receiving user data from at least one of the portable device or the output device. The method includes generating a virtual view based on the vehicle data, the user data and a virtual world model, the virtual world model including one or more components that define the virtual view, wherein generating the virtual view includes augmenting one or more components of the virtual world model according to at least one of the vehicle data or the user data. The method includes rendering the virtual view to the output device by controlling the output device to update display of the virtual view according to at least one of the vehicle data or the user data.
    Type: Grant
    Filed: July 10, 2014
    Date of Patent: January 3, 2017
    Assignee: Honda Motor Co., Ltd.
    Inventors: Arthur Alaniz, Joseph Whinnery, Robert Wesley Murrish, Michael Eamonn Gleeson-May
  • Patent number: 9538164
    Abstract: The example techniques of this disclosure are directed to generating a stereoscopic view from an application designed to generate a mono view. For example, the techniques may modify instructions for a vertex shader based on a viewing angle. When the modified vertex shader is executed, the modified vertex shader may generate coordinates for vertices for a stereoscopic view based on the viewing angle.
    Type: Grant
    Filed: January 10, 2013
    Date of Patent: January 3, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Ning Bi, Xuerui Zhang
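    Illustrative sketch: a minimal Python sketch of deriving left- and right-eye vertex positions from one mono clip-space vertex; the viewing-angle-dependent offset formula and the eye_separation default are hypothetical, not the patented shader modification.
```python
import math


def stereo_vertex_positions(x, y, z, w, viewing_angle_deg, eye_separation=0.06):
    """Given one clip-space vertex produced by a mono-view vertex shader,
    derive left-eye and right-eye positions by shifting along the
    horizontal axis by an amount that depends on the viewing angle.
    The formula is an illustrative stand-in, not the patented shader edit."""
    angle = math.radians(viewing_angle_deg)
    # Project the eye separation onto the screen's horizontal axis.
    offset = 0.5 * eye_separation * math.cos(angle)
    left = (x - offset, y, z, w)
    right = (x + offset, y, z, w)
    return left, right


print(stereo_vertex_positions(0.2, 0.1, 0.5, 1.0, viewing_angle_deg=15))
```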
  • Patent number: 9520102
    Abstract: Systems and methods for extracting text from images rendered on a display screen, the method comprising capturing a color image rendered on a display screen; and transforming the color image to a binary image, preserving text-like graphic components and filtering out non-text-like graphic components. The transforming comprises scanning one or more areas of the color image; and detecting continuous bi-tonal regions in the scanned one or more areas, wherein the continuous bi-tonal regions have large variances.
    Type: Grant
    Filed: April 29, 2013
    Date of Patent: December 13, 2016
    Assignee: International Business Machines Corporation
    Inventors: Amir Geva, Mattias Marder
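    Illustrative sketch: a minimal Python/numpy sketch of variance-based binarization that keeps text-like, bi-tonal blocks and discards smooth ones; the block size, threshold, and per-block mean thresholding are illustrative choices, not taken from the patent.
```python
import numpy as np


def binarize_text_regions(gray, block=16, var_threshold=1500.0):
    """Convert a grayscale image (2-D array, values 0-255) to a binary image,
    keeping only blocks whose intensity variance is large, on the assumption
    that text over a background is strongly bi-tonal while smooth graphics
    are not. Block size and threshold are illustrative, not from the patent."""
    h, w = gray.shape
    out = np.zeros_like(gray, dtype=np.uint8)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = gray[y:y + block, x:x + block]
            if tile.var() > var_threshold:          # bi-tonal, text-like block
                thr = tile.mean()                   # simple local threshold
                out[y:y + block, x:x + block] = (tile > thr).astype(np.uint8) * 255
    return out


if __name__ == "__main__":
    demo = np.zeros((32, 32), dtype=float)
    demo[4:12, 4:28:2] = 255                 # crude high-contrast "text" pattern
    print(binarize_text_regions(demo).max())  # 255: the pattern survives
```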
  • Patent number: 9514547
    Abstract: When a driving support apparatus is performing automatic driving of a vehicle, or an automatic driving button is pressed during manual driving, vicinity image data is acquired from an in-vehicle camera. When a predetermined target object is recognized in the vicinity image data, a visibility reduction process is applied to image data of the recognized target object. The visibility reduction process applies at least one of defocusing, decreasing color information, and decreasing edge intensity to the image data of the recognized target object. In contrast, no visibility reduction process is applied to any image data other than the image data of the recognized target object. An image display apparatus displays the vicinity image as a combination of the image data of the recognized target object, of which the visibility is reduced, and the other image data, of which the visibility is not reduced.
    Type: Grant
    Filed: July 9, 2014
    Date of Patent: December 6, 2016
    Assignee: DENSO CORPORATION
    Inventors: Yasutsugu Nagatomi, Tadashi Kamada, Akira Takahashi, Yukimasa Tamatsu, Ryusuke Hotta, Shohei Morikawa
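    Illustrative sketch: a minimal Python/numpy sketch of reducing the visibility of only the recognized object by fading its color information; the mask-based blending and the strength knob are assumptions, and defocusing and edge reduction are not shown.
```python
import numpy as np


def reduce_object_visibility(image, object_mask, strength=0.6):
    """Lower the visibility of only the recognized target object.

    image       : H x W x 3 float array in [0, 1]
    object_mask : H x W boolean array, True where the recognized object is
    strength    : how strongly to fade colour/contrast (illustrative knob)

    This sketch approximates "decreasing colour information" by blending the
    masked pixels toward their grey average; the defocusing and edge-intensity
    reduction mentioned in the abstract would be applied to the same mask.
    """
    out = image.copy()
    grey = image.mean(axis=2, keepdims=True)          # drop colour information
    faded = (1 - strength) * image + strength * grey  # washed-out version
    out[object_mask] = faded[object_mask]             # only the recognized object
    return out
```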
  • Patent number: 9460553
    Abstract: Locations are shaded for use in rendering a computer-generated scene having one or more objects represented by a point cloud. A hierarchy for the point cloud is obtained. The point cloud includes a plurality of points. The hierarchy has a plurality of clusters of points of the point cloud. A location to shade is selected. A first cluster from the plurality of clusters is selected. The first cluster represents a first set of points in the point cloud. An importance weight for the first cluster is determined. A render-quality criterion for the first cluster is determined based on the importance weight. Whether the first cluster meets the render-quality criterion is determined based on a render-quality parameter for the first cluster. In response to the first cluster meeting the render-quality criterion, the location is shaded based on an indication of light emitted from the first cluster.
    Type: Grant
    Filed: June 18, 2012
    Date of Patent: October 4, 2016
    Assignee: DreamWorks Animation LLC
    Inventor: Eric Tabellion
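    Illustrative sketch: a minimal Python sketch of shading a location from a cluster hierarchy, assuming the importance weight is a solid-angle-like radius/distance ratio and the render-quality criterion is a simple threshold on it; both choices are hypothetical, not the patented definitions.
```python
from dataclasses import dataclass, field
import math


@dataclass
class Cluster:
    position: tuple          # representative position of the cluster
    radius: float            # bounding radius of the points it represents
    emitted: float           # aggregate light emitted by the cluster
    children: list = field(default_factory=list)


def shade_location(location, cluster, quality=0.1):
    """Accumulate light at `location` by traversing the cluster hierarchy.

    The importance weight here is the cluster's approximate solid angle as
    seen from the shading location; the cluster is used directly when that
    weight is small enough, otherwise its children are visited."""
    dx = [c - l for c, l in zip(cluster.position, location)]
    dist = math.sqrt(sum(d * d for d in dx)) or 1e-6
    importance = cluster.radius / dist               # crude solid-angle proxy
    if importance <= quality or not cluster.children:
        return cluster.emitted / (dist * dist)       # use the cluster as one emitter
    return sum(shade_location(location, child, quality) for child in cluster.children)


root = Cluster((0.0, 5.0, 0.0), radius=2.0, emitted=10.0,
               children=[Cluster((0.0, 4.0, 0.0), 1.0, 6.0),
                         Cluster((0.0, 6.0, 0.0), 1.0, 4.0)])
print(shade_location((0.0, 0.0, 0.0), root))
```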
  • Patent number: 9460489
    Abstract: An image processing apparatus is provided which performs alignment of pixels on a reference frame and on a standard frame, executes an image reconfiguration based on the alignment result, and generates a high-resolution image of a frame from low-resolution images of the reference and standard frames. The apparatus includes a memory configured to store an input video and to output an image-processed video, and a plurality of arithmetic processors configured to perform, in parallel at the time of the alignment, a first alignment processing in a first direction from the reference frame to the standard frame by a first arithmetic processor, and a second alignment processing in a second direction, opposite to the first direction, from the standard frame to the reference frame by a second arithmetic processor, with the processing results of the first and second alignment processings being shared among the processors.
    Type: Grant
    Filed: March 3, 2015
    Date of Patent: October 4, 2016
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Kenji Kimiyama, Toshio Sato, Yoshihiko Suzuki
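    Illustrative sketch: a minimal Python sketch of running the forward (reference-to-standard) and backward (standard-to-reference) alignment on two worker threads and sharing the results; the brute-force 1-D shift search stands in for the real per-pixel alignment.
```python
from concurrent.futures import ThreadPoolExecutor


def estimate_shift(src, dst, max_shift=4):
    """Brute-force 1-D alignment: find the shift of src that best matches dst
    (minimum mean squared difference). A stand-in for per-pixel alignment."""
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(src[i], dst[i + s]) for i in range(len(src))
                 if 0 <= i + s < len(dst)]
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift


reference = [0, 0, 1, 4, 9, 4, 1, 0, 0]
standard  = [0, 1, 4, 9, 4, 1, 0, 0, 0]

# Two "arithmetic processors": one aligns reference -> standard, the other
# aligns standard -> reference, in parallel; the two results are then shared.
with ThreadPoolExecutor(max_workers=2) as pool:
    forward = pool.submit(estimate_shift, reference, standard)
    backward = pool.submit(estimate_shift, standard, reference)
    print("forward shift:", forward.result(), "backward shift:", backward.result())
```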
  • Patent number: 9454836
    Abstract: In an object display device, a pattern extraction unit extracts, from an image in real space, a region where an object is easily visually recognized when the object is overlaid and displayed in that image, based on information about the size and color of the object and information about the color of the image in real space acquired by an image analysis unit, and a display position correction unit corrects the display position of the object to this region. This facilitates visual recognition of the object and enhances various effects, such as informativeness, brought about by displaying the object in the image in real space.
    Type: Grant
    Filed: October 13, 2011
    Date of Patent: September 27, 2016
    Assignee: NTT DOCOMO, INC.
    Inventors: Yasuo Morinaga, Manabu Ota
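    Illustrative sketch: a minimal Python/numpy sketch that scores candidate regions of a real-space image by color contrast against the object and returns the most visible placement; the contrast measure and scan stride are hypothetical, not the patent's analysis.
```python
import numpy as np


def best_display_position(image, obj_size, obj_color):
    """Scan candidate regions of a real-space image (H x W x 3, values 0-255)
    and return the top-left corner where an overlay of size obj_size
    (height, width) and average colour obj_color would contrast most with
    the background, i.e. where it is most easily recognized."""
    h, w, _ = image.shape
    oh, ow = obj_size
    best_pos, best_score = (0, 0), -1.0
    for y in range(0, h - oh + 1, max(1, oh // 2)):
        for x in range(0, w - ow + 1, max(1, ow // 2)):
            region = image[y:y + oh, x:x + ow].reshape(-1, 3).mean(axis=0)
            contrast = float(np.linalg.norm(region - np.asarray(obj_color)))
            if contrast > best_score:
                best_pos, best_score = (y, x), contrast
    return best_pos


frame = np.zeros((120, 160, 3)) + 30           # dark background
frame[:, 80:] = 220                            # bright right half
print(best_display_position(frame, (40, 40), obj_color=(250, 250, 250)))
```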
  • Patent number: 9451181
    Abstract: A method includes reading a composite video descriptor data structure and a plurality of window descriptor data structures. The composite video descriptor data structure defines a width and height of a composite video frame and each window descriptor data structure defines the starting X and Y coordinate, width and height of each constituent video window to be rendered in the composite video frame. The method further includes determining top and bottom Y coordinates for each constituent video window, as well as determining left and right X coordinates for each constituent video window. The method also includes dividing each constituent video window using the top and bottom Y coordinates to obtain Y-divided sub-windows, dividing each Y-divided sub-window using left and right X coordinates to obtain X and Y divided sub-windows, and storing X, Y coordinates of opposing corners of each X and Y divided sub-window in the storage device.
    Type: Grant
    Filed: September 23, 2015
    Date of Patent: September 20, 2016
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Sujith Shivalingappa, Sivaraj Rajamonickam
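    Illustrative sketch: a minimal Python sketch of the window division described above, splitting each constituent window along every window's top and bottom Y coordinates and then along every left and right X coordinate, and returning opposing corners of each sub-window; an illustrative reading, not the exact claimed procedure.
```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Window:
    x: int   # starting X coordinate
    y: int   # starting Y coordinate
    w: int   # width
    h: int   # height


def divide_windows(windows):
    """Split each constituent window into Y-divided sub-windows using every
    window's top/bottom Y coordinate, then into X and Y divided sub-windows
    using every left/right X coordinate, returning opposing corners."""
    ys = sorted({w.y for w in windows} | {w.y + w.h for w in windows})
    xs = sorted({w.x for w in windows} | {w.x + w.w for w in windows})
    sub_windows = []
    for win in windows:
        y_cuts = [y for y in ys if win.y <= y <= win.y + win.h]
        for y0, y1 in zip(y_cuts, y_cuts[1:]):            # Y-divided sub-windows
            x_cuts = [x for x in xs if win.x <= x <= win.x + win.w]
            for x0, x1 in zip(x_cuts, x_cuts[1:]):        # X and Y divided
                sub_windows.append(((x0, y0), (x1, y1)))  # opposing corners
    return sub_windows


print(divide_windows([Window(0, 0, 100, 100), Window(50, 50, 100, 100)]))
```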
  • Patent number: 9412206
    Abstract: Systems and methods for the manipulation of captured light fields and captured light field image data in accordance with embodiments of the invention are disclosed. In one embodiment of the invention, a system for manipulating captured light field image data includes a processor, a display, a user input device, and a memory, wherein a depth map includes depth information for one or more pixels in the image data, and wherein an image manipulation application configures the processor to display a first synthesized image, receive user input data identifying a region within the first synthesized image, determine boundary data for the identified region using the depth map, receive user input data identifying at least one action, and perform the received action using the boundary data and the captured light field image data.
    Type: Grant
    Filed: February 21, 2013
    Date of Patent: August 9, 2016
    Assignee: Pelican Imaging Corporation
    Inventors: Andrew Kenneth John McMahon, Kartik Venkataraman, Robert Mullis
  • Patent number: 9384522
    Abstract: In general, techniques are described for analyzing a command stream that configures a graphics processing unit (GPU) to render one or more render targets. A device comprising a processor may perform the techniques. The processor may be configured to analyze the command stream to determine a representation of the one or more render targets defined by the command stream. The processor may also be configured to, based on the representation of the render targets, identify one or more rendering inefficiencies that will occur upon execution of the command stream by the GPU. The processor may also be configured to re-order one or more commands in the command stream so as to reduce the identified rendering inefficiencies that will occur upon execution of the command stream by the GPU.
    Type: Grant
    Filed: February 25, 2013
    Date of Patent: July 5, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Christopher Paul Frascati, Avinash Seetharamaiah
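    Illustrative sketch: a minimal Python sketch of one kind of reordering the abstract describes, assuming the only inefficiency of interest is redundant render-target switches; real command streams have ordering dependencies that this toy ignores.
```python
def reorder_by_render_target(commands):
    """Group draw commands that touch the same render target so the GPU
    binds each target once instead of ping-ponging between targets.
    Commands are (render_target, draw_call) pairs; a stable sort preserves
    the original submission order within each target."""
    reordered = sorted(commands, key=lambda c: c[0])   # stable sort by target
    switches = sum(1 for i in range(1, len(reordered))
                   if reordered[i][0] != reordered[i - 1][0])
    return reordered, switches


stream = [("fbo0", "drawA"), ("fbo1", "drawB"), ("fbo0", "drawC"), ("fbo1", "drawD")]
print(reorder_by_render_target(stream))   # three target switches become one
```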
  • Patent number: 9355454
    Abstract: A hierarchical multi-object active appearance model (AAM) framework is disclosed for processing image data, such as localizer or scout image data. In accordance with this approach, a hierarchical arrangement of models (e.g., a model pyramid) may be employed, where a global or parent model that encodes relationships across multiple co-located structures is used to obtain an initial, coarse fit. Subsequent processing by child sub-models adds more detail and flexibility to the overall fit.
    Type: Grant
    Filed: March 28, 2013
    Date of Patent: May 31, 2016
    Assignee: GENERAL ELECTRIC COMPANY
    Inventors: Qi Song, Srikrishnan V, Roshni Rustom Bhagalia, Bipul Das
  • Patent number: 9355478
    Abstract: Methods, computer-readable media, and systems for reflecting changes to graph-structured data are provided. One method for reflecting changes to graph-structured data includes receiving a plot of a number of nodes viewed on a first graph; constructing a second graph in response to a change in the graph-structured data associated with the first graph by determining a number of viewed edges between a first viewed node of the number of viewed nodes and the remainder of the viewed nodes, determining a number of node connects for the first viewed node, and providing an indication of completeness of the first viewed node based on whether the number of node connects is greater than the number of viewed edges between the first viewed node and the viewed nodes; and merging the first graph and the second graph to create a merged graph.
    Type: Grant
    Filed: July 15, 2011
    Date of Patent: May 31, 2016
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Jan Simon, Jan Trcka, Pavel Chmelicek
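    Illustrative sketch: a minimal Python sketch of the completeness check and graph merge, assuming adjacency-set graphs and reading "node connects" as the total connection count of a node; both readings are assumptions, not the patent's definitions.
```python
def completeness(graph, viewed_nodes, node):
    """Return True when every connection of `node` is already visible,
    i.e. the number of node connects does not exceed the number of viewed
    edges from `node` to other viewed nodes (illustrative reading)."""
    node_connects = len(graph[node])
    viewed_edges = sum(1 for nbr in graph[node] if nbr in viewed_nodes)
    return node_connects <= viewed_edges


def merge_graphs(first, second):
    """Union of two adjacency-set graphs, used after the underlying
    graph-structured data changes."""
    merged = {n: set(edges) for n, edges in first.items()}
    for n, edges in second.items():
        merged.setdefault(n, set()).update(edges)
    return merged


first = {"a": {"b", "c"}, "b": {"a"}, "c": {"a"}}
second = {"a": {"b", "c", "d"}, "d": {"a"}}
viewed = {"a", "b"}
print(completeness(first, viewed, "a"))   # False: the edge to "c" is not viewed
print(merge_graphs(first, second)["a"])   # {'b', 'c', 'd'}
```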
  • Patent number: 9330487
    Abstract: An apparatus for processing a three-dimensional (3D) image is provided. The apparatus includes a motion estimation module and a motion interpolation module. The motion estimation module estimates a motion vector between a first object in a first-eye image and a second object in a second-eye image. The first object is the same as or similar to the second object. The motion interpolation module multiplies the motion vector by a first shift ratio to generate a first motion vector. The motion interpolation module generates a shifted first object by interpolation according to the first motion vector and the first object.
    Type: Grant
    Filed: January 10, 2013
    Date of Patent: May 3, 2016
    Assignee: MStar Semiconductor, Inc.
    Inventors: Chung-Yi Chen, Chien-Chuan Yao
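    Illustrative sketch: a minimal Python sketch of scaling an estimated inter-view motion vector by a shift ratio to interpolate a shifted object position; the object matching/estimation step itself is not shown and the example values are made up.
```python
def shifted_object_position(first_pos, second_pos, shift_ratio):
    """Take the motion vector between the matching objects in the first-eye
    and second-eye images, multiply it by a shift ratio, and interpolate the
    shifted object's position (positions in pixels)."""
    motion_vector = (second_pos[0] - first_pos[0], second_pos[1] - first_pos[1])
    scaled = (motion_vector[0] * shift_ratio, motion_vector[1] * shift_ratio)
    return (first_pos[0] + scaled[0], first_pos[1] + scaled[1])


# A shift ratio of 0.5 places the interpolated object halfway between the
# two eye views; other ratios synthesize intermediate viewpoints.
print(shifted_object_position((100, 40), (112, 40), shift_ratio=0.5))
```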