Abstract: A system that can store electronic program guide information for display using 3D graphics is disclosed. In a particular embodiment, a data filter and a text-to-image converter are used to convert filtered data into a set of digital images that are defined as a set of texture maps. In order to apply those texture maps, a memory analyzer analyzes the set-top box's memory layout and indicates the available memory types. The memory analyzer controls a memory distributor that distributes the texture maps into the appropriate types of memory.
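The memory-distribution step described above can be sketched as follows. The function name, the pool data shape, and the largest-first placement policy are illustrative assumptions, not the disclosed implementation.

```python
# Illustrative sketch: distributing texture maps across memory types.
# The names and the largest-first placement policy are assumptions.

def distribute_texture_maps(texture_maps, memory_pools):
    """Assign each texture map (name, size_bytes) to the first memory
    pool with enough free capacity, trying pools in the given order
    (e.g. fastest first).

    memory_pools: list of dicts like {"type": "video", "free": 2_000_000}.
    Returns {texture_name: memory_type or None}.
    """
    placement = {}
    # Place the largest maps first so big textures are not crowded out.
    for name, size in sorted(texture_maps, key=lambda t: -t[1]):
        for pool in memory_pools:
            if pool["free"] >= size:
                pool["free"] -= size
                placement[name] = pool["type"]
                break
        else:
            placement[name] = None  # no pool can hold this map
    return placement


pools = [{"type": "video", "free": 1_000_000},
         {"type": "system", "free": 4_000_000}]
maps = [("grid_page_1", 800_000), ("channel_logos", 600_000),
        ("banner", 150_000)]
print(distribute_texture_maps(maps, pools))
# → {'grid_page_1': 'video', 'channel_logos': 'system', 'banner': 'video'}
```

Here the 800 KB grid page fills most of the 1 MB video pool, the channel logos overflow into system memory, and the small banner still fits in the video pool's remainder.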
Abstract: A method and apparatus for displaying an Electronic Programming Guide (EPG). In one embodiment, an EPG is displayed in a three-dimensional virtual mesh in which independent objects representing television programs are situated. The simplified nature of the three-dimensional EPG reduces the amount of processing necessary to display it. In addition, the virtual mesh may be displayed isometrically, so that hardware requirements are further reduced and it may be possible to use a software-only three-dimensional graphics pipeline. If a user has a set-top box (STB) with a hardware-accelerated graphics pipeline, the EPG may be displayed in a full three-dimensional perspective view. A user can navigate the mesh to find television programs that they wish to view. A user can assign values to the types of television programs they prefer, and those programs will be displayed more prominently.
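The preference-weighted display described above can be sketched as a simple ranking step. The data shapes and the scoring rule are assumptions for illustration, not the patented method.

```python
# Illustrative sketch: ordering EPG program objects by user-assigned
# genre preference values so that preferred programs are placed more
# prominently (e.g. nearer the front of the virtual mesh).
# The data shapes and scoring rule are assumptions.

def order_by_prominence(programs, genre_values):
    """programs: list of dicts like {"title": ..., "genre": ...}.
    genre_values: {genre: numeric preference assigned by the user}.
    Higher-valued genres sort first; unknown genres default to 0."""
    return sorted(programs,
                  key=lambda p: genre_values.get(p["genre"], 0),
                  reverse=True)


prefs = {"sports": 9, "news": 5, "drama": 2}
listings = [{"title": "Evening News", "genre": "news"},
            {"title": "Cup Final", "genre": "sports"},
            {"title": "Period Piece", "genre": "drama"}]
for program in order_by_prominence(listings, prefs):
    print(program["title"])
# → Cup Final, Evening News, Period Piece
```

The renderer would then map list position to prominence, for example by placing higher-ranked program objects closer to the virtual camera in the mesh.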
Abstract: A method of tracking objects that allows a selected object to be tracked across multiple scene changes, with different camera positions, without losing track of it.
In one embodiment, a method of tracking an object using a computer, a display device, a camera, and a camera tracking device is disclosed, the computer being coupled to the display device, the camera, and the camera tracking device. The method includes: capturing a first image from within a field of view of the camera; displaying the first image, which includes an actual object bearing a tracking device, on the display device; receiving information about the tracking device's location; using that information to create a virtual world that reflects the actual object's position within the field of view of the camera as a shape in the virtual world; receiving information about the camera tracking device; and creating a virtual-camera position in the virtual world.
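The steps above can be sketched as a minimal virtual-world model that mirrors the tracked object and the camera. All class names, the reading formats, and the world layout are illustrative assumptions, not the disclosed system.

```python
# Illustrative sketch: mirroring a tracked object and a tracked camera
# into a virtual world.  Names and data formats are assumptions.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Shape:
    """Stand-in for the actual object's position in the virtual world."""
    x: float
    y: float
    z: float

@dataclass
class VirtualCamera:
    x: float
    y: float
    z: float
    pan: float   # degrees
    tilt: float  # degrees

@dataclass
class VirtualWorld:
    shapes: list = field(default_factory=list)
    camera: Optional[VirtualCamera] = None

    def update_from_object_tracker(self, reading):
        """reading: (x, y, z) reported by the object's tracking device."""
        self.shapes.append(Shape(*reading))

    def update_from_camera_tracker(self, reading):
        """reading: (x, y, z, pan, tilt) from the camera tracking device."""
        self.camera = VirtualCamera(*reading)


world = VirtualWorld()
world.update_from_object_tracker((2.0, 0.0, 5.0))       # object's location
world.update_from_camera_tracker((0.0, 1.5, 0.0, 10.0, -5.0))  # camera pose
print(len(world.shapes), world.camera.pan)
# → 1 10.0
```

Because the virtual world carries absolute positions rather than pixel coordinates, the selected object's shape persists across scene cuts and camera moves, which is what lets tracking survive a change of camera position.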