Abstract: A method for displaying at least one image from a set of images while a computerized device such as a Smartphone is charging. The method, often implemented in software such as a downloadable Smartphone “app”, generally comprises using the Smartphone's graphical user interface to select a set of images to display while the Smartphone is charging, and to determine a set of parameters controlling how these images are displayed. The invention's software additionally determines the charging status of the Smartphone. In operation, while the Smartphone is charging, the software displays these images, often according to the previously entered display parameters.
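The loop this abstract describes — read the user's image set and display parameters, check charging status, then advance through the images — can be sketched as a small scheduling function. This is a hypothetical illustration; `next_slide`, its arguments, and the `order` parameter are invented here and not taken from the patent.

```python
def next_slide(images, index, is_charging, params=None):
    """Return the index of the image to show next, or None when not charging.

    `images`, `params`, and `is_charging` stand in for the user-selected
    image set, the display parameters, and the detected charging status.
    """
    if not is_charging or not images:
        return None
    params = params or {}
    # One illustrative parameter: traverse the set forward or in reverse.
    if params.get("order") == "reverse":
        return (index - 1) % len(images)
    return (index + 1) % len(images)
```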
Abstract: The invention relates to generating a composite medical image combining at least first and second image data. Particularly, the invention relates to a medical imaging system for generating a composite medical view or image combining at least first and second image data as well as a method for generating a composite medical image.
Abstract: Provided are an electronic device for displaying a three-dimensional image and a method of using the same that can provide a user interface for controlling positions of a three-dimensional icon and a virtual layer including the icon according to a user gesture. The electronic device for displaying a three-dimensional image includes a camera for photographing a gesture action in three-dimensional space; a display unit for displaying a virtual layer including at least one object with a first depth at three-dimensional virtual space; and a controller for selectively performing one of a first action of changing a depth in which the virtual layer is displayed to a second depth and a second action of changing a position of the object, according to the gesture action based on a gesture input mode.
Type:
Grant
Filed:
August 1, 2011
Date of Patent:
May 12, 2015
Assignee:
LG Electronics Inc.
Inventors:
Soungmin Im, Sunjin Yu, Sangki Kim, Kyungyoung Lim, Yongwon Cho, Taehyeong Kim
Abstract: A user interface for a map display application used on a computing device includes a strip of photographic images corresponding to objects located within a geographic location represented by a map displayed on a screen. More specifically, the strip of photographic images is responsive to the viewing experience of the user.
Type:
Application
Filed:
September 14, 2012
Publication date:
May 7, 2015
Applicant:
GOOGLE INC.
Inventors:
Andrew Ofstad, Willem Van Lancker, Mathew R. Simpson, Bernhard Seefeld
Abstract: Example embodiments include systems and methods for generating and transforming data presentations. The method may include receiving, using a processor, a request for a web page, and submitting, by the processor, the request to a computer server system. The request can include a user identification and a user password. The method may further include receiving, from the computer server system, data corresponding to the requested web page. Further, the method includes storing, in a memory, the received data, and causing the received data to be shown on a display associated with the user device.
Abstract: An image display apparatus that displays an image on the basis of input image signals corresponding to sub-pixels forming one pixel includes a shift-amount storing unit that stores shift amounts of display positions of the sub-pixels relative to given reference positions in a display image, an image-signal correcting unit that corrects the input image signals according to the shift amounts, and an image display unit that displays an image on the basis of the image signals corrected by the image-signal correcting unit.
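The correction step in the abstract above — adjusting each sub-pixel's input signal by its stored display-position shift — amounts to resampling the signal at shifted positions. A minimal one-dimensional sketch using linear interpolation (the patent does not specify the interpolation; the function name and edge clamping are assumptions made here):

```python
def correct_subpixel_row(row, shift):
    """Resample one sub-pixel channel's row of signal values, shifted by a
    fractional amount relative to its reference positions, via linear
    interpolation with clamped edges."""
    n = len(row)
    out = []
    for x in range(n):
        src = x - shift                 # sample position before the shift
        i = int(src // 1)               # integer part (floor)
        t = src - i                     # fractional part for interpolation
        a = row[min(max(i, 0), n - 1)]
        b = row[min(max(i + 1, 0), n - 1)]
        out.append(a * (1 - t) + b * t)
    return out
```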
Abstract: A method of grouping multiple waveforms for a single channel of acquired data on a display area uses a graphic icon with the display area. The graphic icon has a first portion with a symbol indicating the single channel and with an indicator defining a baseline for the display area. The graphic icon also has a second portion with symbols indicating which of the multiple waveforms currently are being displayed. The symbol for the single channel and the symbol for a selected one of the multiple waveforms currently being displayed are highlighted. The highlighting may be via color, where the highlight color corresponds to the color of the waveforms currently being displayed.
Type:
Grant
Filed:
March 30, 2012
Date of Patent:
May 5, 2015
Assignee:
Tektronix, Inc.
Inventors:
Ian S. Dees, Ngoc Giao Tran, Amy M. Bergsieker, Gary J. Waldo, Steven C. Herring, Tony Lee Tarr
Abstract: Information technology tools can be provided to manage access by a plurality of attendees through a network to a presentation. Each of the attendees is registered with an associated content access status, and presentation data for the presentation is provided to a registered attendee based on the particular content access status of the registered attendee.
Abstract: The system provides different ways for users to select an object and an action to be applied to the object in computer applications such as image processing or digital post-production. The user can select an object first and then an action, or vice versa. The user can also use gestural input to designate both an object and action virtually simultaneously. Multiple views, or windows, of an image can be independently sized, zoomed, panned, etc. Any effects performed on the image appear in all of the windows since each window shows (potentially) different portions of the same image content. A navigation window helps a user move within a large image or diagram that does not fit entirely on a single display screen. The navigation window includes an inner box that shows, in miniature, the principal objects in the screen display.
Type:
Grant
Filed:
November 17, 2011
Date of Patent:
May 5, 2015
Assignees:
Sony Corporation, Sony Electronics Inc.
Abstract: A system, method, and program product is provided that automatically allocates a display screen into two areas when the display screen is pivoted from a landscape orientation to a portrait orientation. A hypervisor receives a pivot request (e.g., from a user) to pivot the display screen from a landscape orientation to a portrait orientation. When the display screen is oriented in the landscape orientation, a primary operating system displays its data on the display screen. Upon reception of the pivot request, the hypervisor allocates the display screen into a primary display area and a secondary display area. The hypervisor then displays data originating from the primary operating system in the primary display area and displays data originating from a second operating system in the secondary display area.
Type:
Grant
Filed:
June 25, 2008
Date of Patent:
May 5, 2015
Assignee:
Lenovo (Singapore) Pte. Ltd.
Inventors:
Justin Tyler Dubs, Harriss Christopher Neil Ganey, Aaron Michael Stewart, Jennifer Greenwood Zawacki
Abstract: A computer-readable medium, computer-implemented method, and apparatus are provided. In one embodiment, financial data is extracted from a financial manager, and the financial data is mapped to compliance data from a compliance manager. One or more controls of one or more accounts from the compliance data are selected to be in scope, and an assessment plan is created for the selected one or more controls within scope.
Type:
Grant
Filed:
April 16, 2010
Date of Patent:
May 5, 2015
Assignee:
Oracle International Corporation
Inventors:
Appla Jagadesh Padala, Dane Roberts, Bhaskar Ghosh, Hernan Capdevila
Abstract: A method and device for performing and processing user-defined clipping in object space to reduce the number of computations needed for the clipping operation. The method and device also combine the modelview transformation of the vertex coordinates with projection transform. The user-defined clipping in object space provides a higher performance and less power consumption by avoiding generation of eye coordinates if there is no lighting. The device includes a driver for the user-defined clipping in the object space to perform dual mode user-defined clipping in object space when a lighting function is disabled and in eye space when the lighting function is enabled.
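The key algebraic trick behind object-space clipping is that a plane test can be moved across the modelview transform: dot(p, M·v) equals dot(Mᵀ·p, v), so the clip plane is transformed once and every vertex is tested in object space without ever generating eye coordinates. A minimal sketch with row-major 4×4 matrices (the names are illustrative, not from the patent):

```python
def transpose(m):
    """Transpose of a 4x4 row-major matrix."""
    return [[m[r][c] for r in range(4)] for c in range(4)]

def mat_vec(m, v):
    """4x4 matrix times 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def clip_plane_to_object_space(plane_eye, modelview):
    """Fold the modelview transform into the clip plane once:
    dot(plane_eye, M . v) == dot(Mt . plane_eye, v), so per-vertex tests
    can run on object-space coordinates directly."""
    return mat_vec(transpose(modelview), plane_eye)

def inside(plane_obj, vertex_obj):
    """True when the object-space vertex lies on the kept side of the plane."""
    return sum(p * v for p, v in zip(plane_obj, vertex_obj)) >= 0.0
```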
Abstract: A user interface (UI) rendering and operating method is implemented by a processor with a deployment file, and comprises configuring the processor to implement a parent procedure for: parsing the deployment file so as to obtain UI rendering data that defines a visual design of the UI, feature provider data that defines a procedure providing a feature linked to a generated element of the UI, feature identification data that defines an identification corresponding to the feature, and element rendering data that defines a visual design of the element; binding the feature provider data and the feature identification data to the element; rendering the UI and the element with reference to the UI rendering data and the element rendering data.
Abstract: In accordance with example embodiments, hand gestures can be used to provide user input to a wearable computing device, and in particular to identify, signify, or otherwise indicate what may be considered or classified as important or worthy of attention or notice. A wearable computing device, which could include a head-mounted display (HMD) and a video camera, may recognize known hand gestures and carry out particular actions in response. Particular hand gestures could be used for selecting portions of a field of view of the HMD, and generating images from the selected portions. The HMD could then transmit the generated images to one or more applications in a network server communicatively connected with the HMD, including a server or server system hosting a social networking service.
Type:
Grant
Filed:
September 4, 2013
Date of Patent:
May 5, 2015
Assignee:
Google Inc.
Inventors:
Luis Ricardo Prada Gomez, Aaron Wheeler
Abstract: The disclosure provides a method and device for processing wallpaper. The method comprises: acquiring the position information of a mobile terminal; judging, according to the acquired position information and preset position information, whether to replace the wallpaper displayed by the mobile terminal with content corresponding to the preset position information; and displaying the content corresponding to the preset position information as the wallpaper when the judgment result is positive. The disclosure solves the problem of poor user experience when a user of the mobile terminal uses wallpaper or a theme in the prior art, and further achieves the effects of automatically replacing the wallpaper and meeting diversified and personalized requirements of the user.
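The judgment step above is essentially a proximity test between the acquired position and a preset position. A hypothetical sketch (the preset mapping and the distance threshold are assumptions introduced for illustration):

```python
def wallpaper_for_position(position, presets, threshold, default):
    """Return the wallpaper whose preset position lies within `threshold`
    of the terminal's current (x, y) position, else keep the default.

    `presets` maps preset (x, y) positions to wallpaper names; this is a
    simplification of the preset position information in the abstract."""
    for (px, py), wallpaper in presets.items():
        dist = ((position[0] - px) ** 2 + (position[1] - py) ** 2) ** 0.5
        if dist <= threshold:
            return wallpaper
    return default
```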
Abstract: According to an embodiment, an image display apparatus includes a detection unit, a generation unit, and a display control unit. The detection unit detects areas to be focused in an input image. The generation unit generates a sub image by performing correction for improving visibility on an image of the detected area. The correction includes at least one of size correction for adjusting size of the image of the area, color correction for adjusting color of the image of the area, and distortion correction for transforming the image of the area so as to be an image acquired by viewing the image of the area from the front side. The display control unit displays the generated sub image on a display device together with the input image in a form that accompanies a screen representation representing a correspondence relation with the area.
Abstract: An electronic device includes an external cover connector connected to a display unit of an external cover; and a controller configured to change first information of data selected to be displayed on the display unit of the external cover to second information for the display unit of the external cover; and output the data with the second information to the display unit of the external cover while the external cover attached to the electronic device is closed over the electronic device.
Abstract: A medical image display apparatus 1 having a display component 9, an image data storage part 31, an information storage component 32, an operation component 10, and a control part 2. The display component 9 displays a medical image display screen P on which image display regions P1 to P4 are defined. The image data storage part 31 stores image data of medical images. The information storage component 32 stores process associating information 32a for associating combinations of the image display regions P1 to P4 with the processing content of image data of medical images.
Type:
Grant
Filed:
April 9, 2007
Date of Patent:
April 28, 2015
Assignees:
Kabushiki Kaisha Toshiba, Toshiba Medical Systems Corporation
Abstract: A framework for performing graphics animation and compositing operations has a layer tree for interfacing with the application and a render tree for interfacing with a render engine. Layers in the layer tree can be content, windows, views, video, images, text, media, or any other type of object for a user interface of an application. The application commits change to the state of the layers of the layer tree. The application does not need to include explicit code for animating the changes to the layers. Instead, an animation is determined for animating the change in state. In determining the animation, the framework can define a set of predetermined animations based on motion, visibility, and transition. The determined animation is explicitly applied to the affected layers in the render tree. A render engine renders from the render tree into a frame buffer for display on the computer system.
Type:
Grant
Filed:
August 4, 2006
Date of Patent:
April 28, 2015
Assignee:
Apple Inc.
Inventors:
Ralph Brunner, John Harper, Peter Graffagnino
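The division of labor described in the graphics-animation abstract above — the application commits a state change to the layer tree, and the framework derives the animation applied in the render tree implicitly — can be sketched as per-frame interpolation between the committed states. This is a toy model, not Apple's actual layer-tree/render-tree API:

```python
def implicit_animation(old_state, new_state, frames):
    """Given a committed change to a layer's properties, derive the
    per-frame interpolated states a render tree would display, without the
    application supplying any explicit animation code."""
    out = []
    for f in range(1, frames + 1):
        t = f / frames  # animation progress in (0, 1]
        out.append({k: old_state[k] + (new_state[k] - old_state[k]) * t
                    for k in old_state})
    return out
```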
Abstract: A method, computer program and apparatus are disclosed for generating a display image for a navigation device, wherein the display image includes a map view for display on the navigation device and the map view includes a two dimensional plan view. At least one embodiment of the method includes identifying a plurality of map objects from a digital map for display in a display image on a navigation device; determining whether any of the identified map objects include one or more non-visible features that would not be visible to a user of the navigation device at ground level; generating an adapted set of map objects that does not include any of the non-visible features; and generating, from the adapted set of map objects, a simplified display image for display on the navigation device, wherein the simplified display image does not include any of the non-visible features.
Abstract: In one example, a method for rendering graphical objects on a display includes rendering each of a plurality of graphical objects within respective layers. The plurality of graphical objects includes first, second and third graphical objects. The second graphical object is directly linked to the first graphical object and the third graphical object is directly linked to the second graphical object. The method additionally includes receiving user input manipulating one of the graphical objects in a manner that affects a directly linked graphical object. The method additionally includes re-rendering the manipulated graphical object and the directly linked graphical object without re-rendering graphical objects that are not directly linked to the manipulated graphical object.
Abstract: A real-time stereo video signal of a captured scene with a physical foreground object and a physical background is received. In real-time, a foreground/background separation algorithm is used on the real-time stereo video signal to identify pixels from the stereo video signal that represent the physical foreground object. A video sequence may be produced by rendering a 3D virtual reality based on the identified pixels of the physical foreground object.
Type:
Grant
Filed:
November 28, 2011
Date of Patent:
April 28, 2015
Assignee:
Microsoft Technology Licensing, LLC
Inventors:
Thore KH Graepel, Andrew Blake, Ralf Herbrich
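A much-simplified stand-in for the foreground/background separation step in the stereo-video abstract above: in a stereo pair, nearby objects have large disparity, so thresholding a disparity map yields a foreground mask. The patented algorithm is more sophisticated; this only illustrates the principle:

```python
def separate_foreground(disparities, threshold):
    """Label pixels as foreground when stereo disparity exceeds a
    threshold (large disparity means the surface is near the cameras).
    `disparities` is a 2-D grid of per-pixel disparity values."""
    return [[d > threshold for d in row] for row in disparities]
```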
Abstract: Provided are a method of generating a resulting image as if drawn by an artist and an apparatus for executing the method. The apparatus includes a first generation unit configured to generate a vector field expressing a shape of an image using feature pixels of the image captured by an image device and direction information of the feature pixels, a second generation unit configured to generate a structure grid indicating a structure for rendering the shape of the image using the vector field, and a rendering unit configured to render primitives expressing predetermined tones on the generated structure grid. Accordingly, it is possible to automatically and rapidly generate a resulting image from one image. Anyone can easily generate a hedcut from one photo without the limitation that a limited number of artists need to invest a great deal of time to complete one hedcut.
Type:
Grant
Filed:
May 26, 2011
Date of Patent:
April 28, 2015
Assignee:
Postech Academy—Industry Foundation
Inventors:
Seung Yong Lee, Yun Jin Lee, Min Jung Son, Henry Kang
Abstract: Methods and apparatus for specifying complex continuous gradients. A field blur tool may provide a user interface through which users may apply instances of a field blur pattern. The field blur tool allows the user to place one, two, or more pins over the image and to specify the blur amount (blur radius) at each field blur pin. A blur algorithm distributes the blur values for the one or more instances of the field blur pattern over the entire image, applying the blur according to the locations of the pin(s) and blur parameters at the pin(s). If the input indicates the location and the value for the blur radius of each of two or more instances of the field blur pattern, the two or more instances of the field blur pattern are combined in a blur mask by multiplying normalized radius fields of each of the instances.
Type:
Grant
Filed:
July 27, 2012
Date of Patent:
April 28, 2015
Assignee:
Adobe Systems Incorporated
Inventors:
Chintan Intwala, Gregg D. Wilensky, Baljit S. Vijan, Mausoom Sarkar
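One plausible reading of combining field-blur pins "by multiplying normalized radius fields", from the abstract above: each pin contributes a field that is 0 at the pin and grows toward 1 with distance, and the per-pixel product of those fields forms the mask. The normalization by image diagonal and the final scaling by the largest pin radius are assumptions made for this sketch:

```python
def blur_mask(width, height, pins):
    """Combine field-blur pins into a per-pixel blur mask.

    Each pin is (x, y, blur_radius). Each pin's normalized field rises
    from 0 at the pin toward 1 far away; the fields are multiplied, then
    scaled into a blur amount. `pins` must be non-empty."""
    diag = (width ** 2 + height ** 2) ** 0.5
    mask = []
    for y in range(height):
        row = []
        for x in range(width):
            m = 1.0
            for px, py, _ in pins:
                d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
                m *= min(d / diag, 1.0)  # normalized radius field for this pin
            row.append(m * max(r for _, _, r in pins))
        mask.append(row)
    return mask
```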
Abstract: Systems, methods and products for animating non-humanoid characters with human motion are described. One aspect includes selecting key poses included in initial motion data at a computing system; obtaining non-humanoid character key poses which provide a one to one correspondence to selected key poses in said initial motion data; and statically mapping poses of said initial motion data to non-humanoid character poses using a model built based on said one to one correspondence from said key poses of said initial motion data to said non-humanoid character key poses. Other embodiments are described.
Type:
Grant
Filed:
December 2, 2013
Date of Patent:
April 28, 2015
Assignee:
Disney Enterprises, Inc.
Inventors:
Jessica Kate Hodgins, Katsu Yamane, Yuka Ariki
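The static mapping from humanoid poses to non-humanoid poses via key-pose correspondence, from the animation abstract above, can be illustrated with scalar "poses": blend the character's key poses with weights derived from the input pose's distance to each human key pose. The blending model here is invented for illustration; the patent builds its own model from the one-to-one correspondence:

```python
def map_pose(pose, human_keys, character_keys):
    """Map a humanoid pose (a scalar here, for brevity) to a non-humanoid
    pose using paired key poses: inverse-distance-weighted blending of the
    character key poses, with exact key poses mapped to their match."""
    dists = [abs(pose - h) for h in human_keys]
    if 0.0 in dists:                      # exact key pose: use its match
        return character_keys[dists.index(0.0)]
    weights = [1.0 / d for d in dists]
    total = sum(weights)
    return sum(w * c for w, c in zip(weights, character_keys)) / total
```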
Abstract: Techniques for rendering a modeled scene are disclosed. In some embodiments, a database comprising locally available generic object definitions is maintained at a destination device at which a scene is desired to be rendered. A scene is rendered by configuring one or more locally available generic object definitions obtained from the database according to a received specification of a modeled scene, wherein rendering is constrained to rendering locally available objects whose definitions are included in the database.
Abstract: A computer-implemented method for monitoring functions in an instrumentation center, the method comprising: accessing, by a monitoring system that monitors functional operation of the instrumentation center, layout information indicative of a layout of the instrumentation center, the layout information specifies a number of levels in the instrumentation center and types of components in each of the levels; detecting, by the monitoring system, a location of a portable display device that is connected over a network to the monitoring system; and generating, from location data that specifies the location of the portable display device, information for a graphical user interface that when rendered on the portable display device renders a visualization of that portion of the instrumentation center in which the portable display device is currently located.
Abstract: Aspects of the disclosure relate generally to providing a user with an image navigation experience. In order to do so, a reference image may be identified. A set of potential target images for the reference image may also be identified. An area of the reference image is identified. For each particular image of the set of potential target images an associated cost for the identified area is determined based at least in part on a cost function for transitioning between the reference image and the particular target image. A target image is selected for association with the identified area based on the determined associated cost functions.
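The selection step above reduces to a minimization over candidate target images under the transition cost function. A sketch, with the cost function abstracted as any callable taking the identified area and a candidate:

```python
def select_target(reference_area, candidates, cost):
    """Pick the target image with the lowest transition cost for the
    identified area of the reference image. `cost(area, candidate)` is
    the abstract's cost function, left opaque here."""
    return min(candidates, key=lambda c: cost(reference_area, c))
```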
Abstract: A method includes defining a surface within a first captured image of an environment. The defined surface is identified in a second captured image of the environment. A graphic is overlaid on the surface identified in the second captured image. The second captured image is caused to be displayed to preview the graphic in the environment.
Type:
Grant
Filed:
March 4, 2011
Date of Patent:
April 21, 2015
Assignee:
Hewlett-Packard Development Company, L.P.
Inventors:
Jean-Frederic Plante, Eric G Wiesner, David Edmondson
Abstract: An image control apparatus wherein an acquisition unit acquires position information from attribute information of the image, a setting unit sets a display scale of a map when displaying the image on the map, and a generation unit generates display data for displaying the map and the image on a display device using the acquired position information and the set display scale, when the set display scale is lower than a predetermined display scale, the generation unit generates display data in which the image is laid out at a position corresponding to the position information on the map, and when the display scale is higher than the predetermined display scale, the generation unit generates display data in which the image and the map are laid out without laying out the image on the map.
Abstract: Systems, apparatus, computer software code products and methods for enabling computer graphics systems to accurately render transparency comprise configuring a shader element of the computer graphics system to first extract all layers of an image representation and obtain depth information therefrom, and then rendering all layers back-to-front with shading enabled, such that information obtained from a previously processed layer is available for processing of a current layer.
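Back-to-front rendering with shading enabled, as in the transparency abstract above, follows the classic "over" compositing order: sort layers by depth, farthest first, so each layer blends against the accumulated result of the layers behind it. A scalar-color sketch:

```python
def composite_back_to_front(layers):
    """Blend transparency layers after sorting by depth, farthest first,
    so each layer sees the result of previously processed layers.
    Each layer is (depth, color, alpha), with a scalar color for brevity."""
    color = 0.0
    for _, c, a in sorted(layers, key=lambda l: -l[0]):  # far to near
        color = c * a + color * (1 - a)                  # "over" operator
    return color
```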
Abstract: One preferred embodiment of the present invention includes a method for transitioning a user interface between viewing modes. The method of the preferred embodiment can include detecting an orientation of a mobile terminal including a user interface disposed on a first side of the mobile terminal, wherein the orientation of the mobile terminal includes an imaginary vector originating at a second side of the mobile terminal and projecting in a direction substantially opposite the first side of the mobile terminal. The method of the preferred embodiment can also include transitioning between at least two viewing modes in response to the imaginary vector intersecting an imaginary sphere disposed about the mobile terminal at a first latitudinal point having a predetermined relationship to a critical latitude of the sphere.
Type:
Application
Filed:
November 4, 2014
Publication date:
April 9, 2015
Inventors:
Terrence Edward McArdle, Benjamin Zeis Newhouse
Abstract: According to one embodiment, a given tile, made up of pixels or samples, may be of any shape, including a square shape. These pixels may contain colors, depths, stencil values, and other values. Each tile may be further augmented with a single bit, referred to herein as a render bit. In one embodiment, if the render bit is one, then everything is rendered as usual within the tile. However, if the render bit is zero, then nothing is rasterized to this tile and, correspondingly, depth tests, pixel shading, frame buffer accesses, and multi-sampled anti-aliasing (MSAA) resolves are not done for this tile. In other embodiments, some operations may be done nevertheless, but at least one operation is avoided based on the render bit. Of course, the render bits may be switched such that the bit zero indicates that everything should be rendered and the bit one indicates more limited rendering.
Type:
Application
Filed:
October 7, 2013
Publication date:
April 9, 2015
Inventors:
Tomas G. Akenine-Moller, Carl J. Munkberg, Franz P. Clarberg
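The render-bit dispatch in the tile abstract above is a single branch per tile: bit 0 skips rasterization, depth testing, shading, frame-buffer access, and MSAA resolve for that tile. A sketch in which `shade` stands in for all of that per-pixel work:

```python
def rasterize_tiles(tiles, shade):
    """Process tiles of (render_bit, pixels); a 0 render bit leaves the
    tile untouched and skips all per-pixel work, a 1 bit shades normally.
    Returns the processed tiles and how many pixels were actually shaded."""
    work_done = 0
    out = []
    for bit, pixels in tiles:
        if bit == 0:
            out.append(pixels)            # nothing rasterized to this tile
            continue
        out.append([shade(p) for p in pixels])
        work_done += len(pixels)
    return out, work_done
```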
Abstract: A method, computer program product and system for automatically determining an object display mode to provide a display for objects. Information about the objects to be displayed and information about a display area is received. An object display mode is selected according to the received information about the display area and according to the received information about the objects to be displayed. A display for the objects is then provided with the selected object display mode. Switching can be made between a single-page display mode and a paging display mode; in either mode, the user can conveniently browse and select the display objects, improving browsing efficiency and the user experience of object display.
Type:
Grant
Filed:
May 11, 2011
Date of Patent:
April 7, 2015
Assignee:
International Business Machines Corporation
Abstract: A system for managing a Remote User Interface (RUI) includes a Remote User Interface Server (RUIS) for sending an update message indicating that an RUI list including at least one RUI has been updated, and transmitting updated RUI list information upon receiving a request for updated RUI list information; and a Remote User Interface Client (RUIC) for sending a request for updated RUI list information to the RUIS upon recognizing from the update message received from the RUIS that an RUI list has been updated, receiving the updated RUI list information from the RUIS in response to the request, and updating existing RUI list information based on the updated RUI list information.
Abstract: Many cameras have the ability to capture an image and generate metadata associated with the image. Such image metadata may include focus point metadata information that may be indicative of the potential focus points available to the camera as well as which one or more of those potential focus points were utilized to capture the image. As the location of a focus point used during image capture is generally intended to coincide with the location of the photographer's main area of interest within the image, such focus point metadata can be accessed during image editing and used to zoom in to the captured image at that focus point location. Performing a “smart-zoom” based on an image's focus point metadata may save time and reduce frustration during the image editing process.
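The "smart-zoom" described above reduces to computing a crop rectangle centered on the focus point recorded in the image metadata, clamped so the rectangle stays inside the image. A sketch (function and argument names are assumptions, not from the patent):

```python
def smart_zoom(image_w, image_h, focus_x, focus_y, zoom):
    """Return an (x, y, w, h) crop rectangle centered on the focus-point
    location, at the given zoom factor, clamped to the image bounds."""
    w, h = image_w / zoom, image_h / zoom
    x = min(max(focus_x - w / 2, 0), image_w - w)
    y = min(max(focus_y - h / 2, 0), image_h - h)
    return (x, y, w, h)
```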
Abstract: A method for representing virtual information in a view of a real environment comprises providing a virtual object having a global position and orientation with respect to a geographic global coordinate system, with first pose data on the global position and orientation of the virtual object, in a database of a server, taking an image of a real environment by a mobile device and providing second pose data as to at which position and with which orientation with respect to the geographic global coordinate system the image was taken. The method further includes displaying the image on a display of the mobile device, accessing the virtual object in the database and positioning the virtual object in the image on the basis of the first and second pose data, manipulating the virtual object or adding a further virtual object, and providing the manipulated virtual object with modified first pose data or the further virtual object with third pose data in the database.
Type:
Grant
Filed:
October 11, 2010
Date of Patent:
April 7, 2015
Assignee:
Metaio GmbH
Inventors:
Peter Meier, Michael Kuhn, Frank Angermann
Abstract: An information processing apparatus includes: a transmitting unit transmitting target data of a first format stored in the storing unit, the first format being capable of constituting a multi-page file; a converted-data acquiring unit acquiring, in unit of page, converted data of a second format, which is generated by an external apparatus based on the target data; a display unit displaying an image corresponding to each page based on the converted data; a converted-data storing unit storing the converted data; and a manipulation receiving unit configured to receive an input. If the manipulation receiving unit receives a designation of a page, and the converted data which is generated based on the designated page is not stored in the storing unit, the converted-data acquiring unit acquires the converted data generated based on the designated page of the target data in preference to the other converted data.
Abstract: A system and method uses an image manipulating application to define in an object image a plurality of discrete cells. Predefined image information is substituted for image information in selected ones of the plurality of discrete cells to form a translated version of the object image. The translated version of the object image may then be provided to an image recognition capable search engine to obtain search results.
Abstract: A method of displaying text on a path includes creating a mapping between distances along the path and points on a line based on changes in direction of the path, composing glyphs on the line, having a total line length defined in accordance with the mapping, to form a composed line, associating the glyphs with the path in accordance with the mapping and the composed line, and outputting the association of the glyphs with the path for display of the glyphs along the path.
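The core of the text-on-a-path layout above is the arc-length mapping: each glyph's distance along the composed line is mapped to an interpolated point on the path. A polyline sketch of that mapping (the patent also accounts for direction changes and glyph orientation, omitted here):

```python
def glyph_positions(path_points, advances):
    """Place glyphs along a polyline path by arc length: each glyph's
    distance along the composed line maps to an interpolated path point.
    `advances` holds each glyph's advance width, in order."""
    # cumulative arc lengths of the path segments
    lengths = [0.0]
    for (x0, y0), (x1, y1) in zip(path_points, path_points[1:]):
        lengths.append(lengths[-1] + ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5)
    positions, d = [], 0.0
    for adv in advances:
        # find the segment containing distance d and interpolate within it
        for i in range(len(lengths) - 1):
            if d <= lengths[i + 1] or i == len(lengths) - 2:
                seg = lengths[i + 1] - lengths[i]
                t = (d - lengths[i]) / seg if seg else 0.0
                (x0, y0), (x1, y1) = path_points[i], path_points[i + 1]
                positions.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t))
                break
        d += adv
    return positions
```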
Abstract: A mobile terminal device includes a display section having a display surface for displaying a screen including information, an accepting section which accepts a moving operation for moving the screen, and a display control section which controls the display section based on the moving operation. When the moving operation for moving an end of the screen inside the end of the display surface is performed, the display control section controls the display section so that the screen is deformed in the direction of movement of the screen caused by the moving operation.
Type:
Grant
Filed:
September 23, 2013
Date of Patent:
April 7, 2015
Assignee:
KYOCERA Corporation
Inventors:
Keiko Mikami, Tomoyo Yoshida, Hiroyuki Okuno, Masayuki Ono
Abstract: Methods and apparatus for interactive curve-based freeform deformation of three-dimensional (3-D) models may provide a user interface that allows a user to interactively deform 3-D models based on simple and intuitive manipulations of a curve drawn on the model (i.e., freeform deformation). The user may apply freeform deformations using touch and/or multitouch gestures to specify and manipulate a deformation curve. The deformations may be applied by deforming the space around a curve/sweep path and deforming the 3-D model accordingly. The freeform deformation methods are not dependent on manipulation of a fixed set of parameters to perform deformations, and may provide for both local and global deformation. One or more weights and user interface elements for controlling those weights may be provided that allow the user to control the extent (region of influence) of the freeform deformations along the curve and/or perpendicular to the curve.
Type:
Grant
Filed:
September 10, 2012
Date of Patent:
March 31, 2015
Assignee:
Adobe Systems Incorporated
Inventors:
Nathan A. Carr, Pushkar P. Joshi, Fatemeh Abbasinejad
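A weight controlling the region of influence of a deformation, as the abstract describes, is commonly implemented as a smooth falloff around the curve; a minimal sketch under that assumption (a single curve point stands in for the full sweep path, and the falloff polynomial is illustrative, not Adobe's method):

```python
import math

def falloff_weight(dist, radius):
    """Smooth falloff: 1 at the curve, 0 beyond the region of influence."""
    if dist >= radius:
        return 0.0
    t = dist / radius
    return (1.0 - t) ** 2 * (1.0 + 2.0 * t)  # smoothstep-like polynomial

def deform_vertices(vertices, curve_point, displacement, radius):
    """Translate each vertex by the curve's displacement, scaled by falloff,
    so nearby geometry follows the curve and distant geometry stays put."""
    out = []
    for (x, y, z) in vertices:
        w = falloff_weight(math.dist((x, y, z), curve_point), radius)
        dx, dy, dz = displacement
        out.append((x + w * dx, y + w * dy, z + w * dz))
    return out
```

Widening `radius` moves the deformation from local toward global, which mirrors the user-controllable extent of influence the abstract mentions.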
Abstract: A method and system for defining a package uses a graph representation of the package to create and implement a package generation rule set. The graph representation uses links and nodes to represent the relationships among the various facets, edges, and functional elements of the package.
Abstract: Provided is an information processor, including a selection section that selects a set of content data satisfying a predetermined condition from a group of content data, each item associated as metadata with positional information representing a position in a feature space defined by a predetermined feature amount, and a display format selection control section that selects a display format for displaying at least a part of the feature space and the set of content data selected by the selection section in accordance with a display screen. The display format selection control section is configured to display an object that includes a direction indicator indicating the direction in which the selected set of content data exists within the display screen, and to integrate a plurality of such direction indicators into a new direction indicator displayed on the display screen.
Abstract: A system and method for generating a flow based on multiple types of interactions are provided. Data defining one or more sequences of multiple interactive nodes is received, where each of the multiple interactive nodes corresponds to a particular type of interaction. One of the interactive nodes is designated as the starting interactive node and the other interactive nodes are designated as intermediary interactive nodes, where the starting interactive node and at least one of the intermediary interactive nodes correspond to different types of interactions. Intermediary interactive nodes are connected to the starting interactive node based on the one or more sequences, where the connection includes one edge corresponding to a direct connection to the starting interactive node or multiple edges corresponding to an indirect connection via at least one other intermediary interactive node. Visualization data for the interactive nodes is generated and provided for display.
Type:
Grant
Filed:
June 27, 2012
Date of Patent:
March 31, 2015
Assignee:
Google Inc.
Inventors:
Jerry Hong, Fenghui Zhang, Lucas Visvikis Pettinati, Zhiting Xu, Lin Liao, Peng Li, Jiajing Wang, Jin Yao, Manuel Frank Martinez
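The distinction the abstract draws between a direct connection (one edge) and an indirect connection (multiple edges via intermediaries) can be sketched as a breadth-first traversal from the starting interactive node (the data layout and function name are illustrative assumptions, not Google's implementation):

```python
from collections import deque

def connection_depths(start, edges):
    """BFS from the starting interactive node: depth 1 marks a direct
    connection, depth > 1 an indirect connection via intermediaries."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
    depths = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, []):
            if nxt not in depths:
                depths[nxt] = depths[node] + 1
                queue.append(nxt)
    return depths
```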
Abstract: Systems and methods are provided for resolving seams in computer graphics when a two-dimensional image is applied to a three-dimensional structure. The method can include providing a two-dimensional image in a UV space, identifying at least one sub-image on the two-dimensional image, defining a seam connectivity for the two-dimensional image in the UV space, and remapping the location of an object on the two-dimensional image when the location of the object is within at least one seam boundary of the seam map.
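Remapping an object's location across a seam, as the abstract describes, can be illustrated by preserving the object's parametric position along a pair of connected seam edges (the segment-pair model of seam connectivity is an illustrative assumption, not the patented seam map):

```python
def lerp(p, q, t):
    """Linear interpolation between 2-D points p and q."""
    return tuple(a + t * (b - a) for a, b in zip(p, q))

def seam_parameter(point, seg):
    """Parameter t in [0, 1] of the projection of point onto segment seg=(p, q)."""
    (px, py), (qx, qy) = seg
    vx, vy = qx - px, qy - py
    wx, wy = point[0] - px, point[1] - py
    t = (wx * vx + wy * vy) / (vx * vx + vy * vy)
    return max(0.0, min(1.0, t))

def remap_across_seam(point, seam_a, seam_b):
    """Remap a UV point lying on one seam edge to the matching location
    on the connected seam edge, preserving its parametric position."""
    t = seam_parameter(point, seam_a)
    return lerp(seam_b[0], seam_b[1], t)
```

Keeping the parametric position identical on both edges is what makes an object painted across the seam appear continuous once the texture is applied to the 3-D structure.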
Abstract: In a data processing system, an output surface, such as a frame to be displayed, is generated as a plurality of respective regions, with each respective region of the output surface being generated from a respective region or regions of one or more input surfaces. When a new version of the output surface is to be generated (80), for each region of the output surface it is determined which region or regions of the input surface or surfaces contribute to the region of the output surface (84), and it is then checked whether the contributing region or regions of the input surface or surfaces have changed since the previous version of the output surface region was generated (85). If there has been a change in the contributing region or regions of the input surface or surfaces since the previous version of the region in the output surface was generated (86), the region of the output surface is regenerated (87).
Type:
Application
Filed:
April 17, 2014
Publication date:
March 26, 2015
Applicant:
ARM LIMITED
Inventors:
Daren CROXFORD, Tom COOKSEY, Lars ERICSSON, Sean Tristram ELLIS
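The check the abstract describes, regenerating an output region only when its contributing input regions have changed, can be sketched with per-region content signatures (the signature scheme and data layout are illustrative assumptions, not ARM's mechanism):

```python
import hashlib

def region_signature(pixels):
    """Content signature of an input-surface region."""
    return hashlib.sha256(bytes(pixels)).hexdigest()

def regions_to_regenerate(prev_sigs, input_regions, contributions):
    """Return the output regions whose contributing input regions changed,
    plus the fresh signatures for the next frame.

    `contributions` maps an output region id to the input region ids
    that contribute to it."""
    new_sigs = {rid: region_signature(px) for rid, px in input_regions.items()}
    dirty = []
    for out_region, inputs in contributions.items():
        if any(new_sigs[i] != prev_sigs.get(i) for i in inputs):
            dirty.append(out_region)
    return dirty, new_sigs
```

Skipping regeneration of unchanged regions in this way trades a cheap signature comparison for the much more expensive work of recomposing the output surface.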
Abstract: In a data processing system, an output surface, such as a frame to be displayed, is generated as a plurality of respective regions, with each respective region of the output surface being generated from a respective region or regions of one or more input surfaces. When a new version of the output surface is to be generated (80), for each region of the output surface it is determined which region or regions of the input surface or surfaces contribute to the region of the output surface (84), and it is then checked whether the contributing region or regions of the input surface or surfaces have changed since the previous version of the output surface region was generated (85). If there has been a change in the contributing region or regions of the input surface or surfaces since the previous version of the region in the output surface was generated (86), the region of the output surface is regenerated (87).
Type:
Application
Filed:
September 20, 2013
Publication date:
March 26, 2015
Applicant:
ARM LIMITED
Inventors:
Daren CROXFORD, Tom COOKSEY, Lars ERICSSON