Abstract: A computer arrangement is disclosed, including a processor and a memory that stores a computer program, object data originating from a first source and including object location data, and laser samples originating from a second source, including a sub-set of laser samples relating to the object and including laser sample location data as to each laser sample. In at least one embodiment, the processor compares the object location data and the laser sample location data of the sub-set of laser samples, and matches the object location data to the laser sample location data of the sub-set of laser samples based on this comparison, and thereby corrects for relative positional errors between the first and second sources of location data. The object may be a building façade, for example.
Type:
Grant
Filed:
October 20, 2006
Date of Patent:
November 11, 2014
Assignee:
TomTom Global Content B.V.
Inventors:
Marcin Michal Kmiecik, Wojciech Tomasz Nowak
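The position-matching step in the abstract above can be sketched as a nearest-neighbour comparison between the two location sources. This is an illustrative sketch only; the function name and 2-D data layout are assumptions, not taken from the patent.

```python
import math

def match_object_to_samples(object_pos, samples):
    """Match an object's recorded location (first source) to the
    nearest laser sample (second source) and return the positional
    offset between the two, usable as a correction."""
    nearest = min(samples, key=lambda s: math.dist(object_pos, s))
    offset = (nearest[0] - object_pos[0], nearest[1] - object_pos[1])
    return nearest, offset
```

In practice the comparison would run over the sub-set of samples already associated with the object (e.g. a building façade), and the resulting offset would be applied to one source to remove the relative positional error.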
Abstract: Provided is an information processing apparatus including a position determination unit for determining whether or not a position of an information input area included in a display area of a first application and a position of an information output area included in a display area of a second application satisfy a predetermined first positional relationship, and a coordination processing unit for inputting, in a case it is determined that the predetermined first positional relationship is satisfied, output information of the second application that is associated with the information output area to the first application as input information associated with the information input area.
Abstract: The embodiments provide a data processing apparatus including a graphics controller configured to obtain a subset of selected time zones among a plurality of available time zones and generate a plurality of clock objects. Each clock object may be a graphical representation of a different time zone of the selected subset, and each clock object may provide a visual graphical indicator for a respective time zone. The data processing apparatus may include a user interface configured to display an arrangement of the plurality of clock objects and receive a command shifting to a travel mode time. The graphics controller may be configured to update the plurality of clock objects according to the command including providing an updated local time corresponding to the travel mode time for each selected time zone and adjusting the visual graphical indicator according to the updated local time for each selected time zone.
Abstract: An acquisition unit acquires a content image corresponding to content. A content image display unit arranges a plurality of content images side by side in a display screen image, and a related information display unit displays information related to an arranged content image. A first reception unit acquires a first moving instruction for the content images arranged side by side, and a second reception unit acquires a second moving instruction for the content images arranged side by side. The content image display unit moves the content images according to a moving instruction acquired by the first reception unit or the second reception unit. A related information display unit displays different types of related information when the first reception unit acquires the first moving instruction and when the second reception unit acquires the second moving instruction.
Type:
Grant
Filed:
January 18, 2013
Date of Patent:
October 21, 2014
Assignees:
Sony Corporation, Sony Computer Entertainment Inc.
Abstract: A system, method, and computer program product are provided for determining one or more contact points between a pair of objects. In operation, a first contact normal is identified between a pair of objects at a first position. Additionally, a relative velocity of the pair of objects is determined at the first position. Furthermore, one or more contact points between the pair of objects are determined at a second position through a translational analysis, utilizing the first contact normal and the relative velocity.
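The role of the contact normal and relative velocity in the translational analysis can be illustrated with a minimal time-to-contact calculation. The formula and names below are assumptions for illustration; the patent does not specify this computation.

```python
def time_to_contact(separation, normal, rel_velocity):
    """Estimate the time until two objects touch, given their
    separation along the first contact normal and their relative
    velocity.  The closing speed is the component of the relative
    velocity directed against the contact normal."""
    closing = -sum(n * v for n, v in zip(normal, rel_velocity))
    if closing <= 0:
        return None  # objects are separating; no contact predicted
    return separation / closing
```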
Abstract: A display control device which receives a signal from an operating device having a touchpad and generates screen data to be displayed on a screen, the display control device including: a touch information detecting unit (201) which detects touch information including position information about a position on the touchpad that is touched with a finger of a user during operation of the touchpad; a direction determining unit (203) which determines a direction intended by the user, using a characteristic which is indicated by the touch information detected by the touch information detecting unit (201) and which results from the operation on the touchpad with the finger; and a screen data generating unit (205) which generates the screen data depending on the direction determined by the direction determining unit (203).
Type:
Grant
Filed:
January 19, 2011
Date of Patent:
October 14, 2014
Assignee:
Panasonic Intellectual Property Corporation of America
Abstract: A dual projection system can align displayed images using an alignment pattern. The alignment pattern can be used to modify part of an image content frame of an image data sequence. Two image data sequences can be spatially aligned based on the modification to part of the image content frame using the alignment pattern. An image content frame may be warped and displayed. The displayed warped image content frame can be captured. A set of transformation vectors can be determined based on the captured image content frame and alignment image data. Stored transformation data can be updated using the set of transformation vectors and the updated transformation data can be used to spatially align two image data sequences.
Type:
Application
Filed:
October 19, 2012
Publication date:
October 2, 2014
Inventors:
Steen Svendstorp Iversen, Weining Tan, Matthew O'Dor
Abstract: An apparatus for controlling a display screen includes a touch-sensitive panel generating position signals representing a set of positions of a single continuous touch activation between a first time and a second time, and a processor coupled to the panel. The processor is configured to: process the signals to detect first and second characteristics of the set; and generate output signals causing a display screen to initiate first and second operations corresponding to the first and second characteristics.
Abstract: A touch screen includes a display unit and an operation input receiving unit that receives a touch. A display control unit allows a state of display on the display unit to transit from a first state, in which an image is displayed as the front side of a card, to a second state, in which an additional information image is displayed as the rear side of the card. In response to a determination that the operation input receiving unit has detected, in the first state, that a touched location is a border of the image, an additional information access entrance is displayed at the border. When the operation input receiving unit detects a tracing operation in which the touch moves by a predetermined distance while touching the operation input receiving unit, the tracing operation starting from the additional information access entrance, the state of display is caused to transit to the second state.
Abstract: Techniques for image stabilization may include detecting motion of an apparatus configured to display image data, the image data comprising one or more frames, a first frame of the one or more frames comprising a plurality of layers. The plurality of layers may be processed to correct for the detected motion. The processing may comprise applying a different degree of motion correction to a first layer of the plurality of layers than to a second layer of the plurality of layers. Such techniques may be performed via an apparatus comprising a display control unit configured to cause the image data to be displayed, and a motion correction unit configured to perform the processing.
Abstract: Methods and systems for active stabilization for heads-up displays are described. A wearable computing device may include a head-mounted display (HMD) with an eye-tracking system. The wearable computing device may generate a display of content at a given location in a display area of the HMD. A user may be wearing the HMD and may be subjected to mechanical jostling resulting in a movement of the HMD with respect to a gaze axis of an eye of the user. The wearable computing device may receive information relating to the gaze axis from the eye-tracking system and may receive information relating to the movement of the HMD from sensors coupled to the HMD. The wearable computing device may adjust the given location of the displayed content in the display area to compensate for such movement. The content may thus appear stable to the user.
Abstract: An interactive digital map of a geographic area is provided via a viewport that defines a visible portion of the digital map. The digital map includes a representation of a structure. The viewport is panned relative to the digital map in response to receiving a user request. An indication that indoor map data is available for the structure is displayed in response to determining that the viewport is panning, and the indication is removed in response to determining that the viewport has stopped panning.
Abstract: Light detection and ranging (LIDAR) imaging systems, methods, and computer-readable media for generating super-resolved images are described. Super-resolved images are generated by obtaining data sets of cloud points representing multiple views of an object where the views have a view shift, enhancing the views by duplicating cloud points within each of the data sets, compensating for the view shift using the enhanced views, identifying valid cloud points, and generating a super-resolved image of the object by integrating valid cloud points within the compensated, enhanced views.
Abstract: A method, a system, and a program for high-fidelity three-dimensional modeling of a large-scale urban environment, performing the following steps: acquiring imagery of the urban environment, containing vertical aerial stereo-pairs, oblique aerial images, street-level imagery, and terrestrial laser scans; acquiring metadata pertaining to the performance, spatial location, and orientation of the imaging sensors providing the imagery; identifying pixels representing ground control-points and tie-points in every instance of the imagery where the ground control-points and tie-points have been captured; co-registering the instances of the imagery using the ground control-points, the tie-points, and the metadata; and referencing the co-registered imagery to a common, standard coordinate system.
Abstract: An image forming apparatus displays, on a display section, a plurality of button areas to which setting values in relation to functions of the image forming apparatus are assigned. In a case that an input form of an instruction coordinate to the display section is a predetermined first form, the setting values assigned to a button area in which the instruction coordinate is positioned are displayed on the display section. In a case that after the setting values are displayed and further that change operation for a specific setting value is inputted, the specific setting value is changed. In a case that the instruction coordinate is positioned in a button area of the plurality of button areas and that the input form of the instruction coordinate is a predetermined second form, the image forming apparatus is operated based on the setting values assigned in the button area.
Abstract: An image display apparatus and an image display method where the image display apparatus according to an embodiment displays a main screen and a sub-screen having a different depth or slope from the main screen so as to create the illusion of depth and distance.
Type:
Grant
Filed:
October 15, 2010
Date of Patent:
August 12, 2014
Assignee:
LG Electronics Inc.
Inventors:
Kyung Hee Yoo, Sang Jun Koo, Sae Hun Jang, Uni Young Kim, Hyung Nam Lee
Abstract: Predetermined image processing is performed in accordance with an input operation performed by an input device having image pickup means for taking an image of one or a plurality of imaging targets. Target image data, which is obtained from one target image of the one imaging target or a plurality of target images of the plurality of imaging targets in the image taken by the image pickup means and which indicates a distance between the plurality of target images or a size of the one target image, is sequentially obtained. A display image is enlarged and reduced in accordance with a change in the target image data. Then, the display image processed in such a manner is displayed on a display device.
Abstract: An image transmission method includes, using a computer processor: acquiring, at a transmission device, operation information from another device instructing that an image be scrolled; determining a scroll direction and a scroll speed based on the operation information; generating a moving image from a plurality of images that are sequentially displayed on a screen of the other device at a display time interval when an image displayed on the screen of the other device is scrolled in the scroll direction at the scroll speed for a period of time; and transmitting the moving image to the other device.
Abstract: Methods and apparatus are provided for dynamically rendering, on a moving map display having a viewable area, a label associated with a bounded region. Moving map data are processed to determine if the bounded region has a viewable boundary, and to determine a perimeter of one or more polygons that are each defined by intersections of the viewable boundary of the bounded region and the viewable area. Commands are supplied to the moving map display that cause the moving map display to continuously render the label associated with the bounded region at a position within the one or more polygons that is closest to a predetermined point on the moving map display.
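The clipping-and-placement idea above can be sketched with axis-aligned rectangles standing in for the clipped polygons. This is a simplification of the polygon intersection the abstract describes; `clip_rect` and `label_position` are illustrative names.

```python
def clip_rect(region, viewport):
    """Intersect the bounded region's bounding box with the viewable
    area; rectangles are (xmin, ymin, xmax, ymax)."""
    x0 = max(region[0], viewport[0])
    y0 = max(region[1], viewport[1])
    x1 = min(region[2], viewport[2])
    y1 = min(region[3], viewport[3])
    if x0 >= x1 or y0 >= y1:
        return None  # region not visible in the viewable area
    return (x0, y0, x1, y1)

def label_position(visible_rect, anchor):
    """Closest point inside the visible rectangle to a predetermined
    anchor point (e.g. the display centre): clamp each coordinate."""
    x0, y0, x1, y1 = visible_rect
    return (min(max(anchor[0], x0), x1), min(max(anchor[1], y0), y1))
```

As the map moves, re-running these two steps each frame keeps the label continuously rendered at the in-view point nearest the chosen anchor.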
Abstract: An embodiment can include an interactive memory map that includes a graphical representation of a region of memory used by a program. The memory map may dynamically update as the program executes and may provide a user with indicators that identify how the program interacts with the memory. The indicators may identify memory locations that are being written by the program and/or memory locations that are being read by the program while the program executes. The memory map may assist a user in understanding how the executing program interacts with memory. The interactive memory map may further allow the user to manipulate how information is stored in the memory by allowing the user to select, add, remove, modify, move, etc., program information stored in the memory.
Type:
Grant
Filed:
August 31, 2009
Date of Patent:
July 15, 2014
Assignee:
The MathWorks, Inc.
Inventors:
Robyn Arthur Jackey, Arvind Suresh Hosagrahara
Abstract: An electronic system for displaying a three-dimensional simulation scenario, comprising: a calculation unit configured to generate a three-dimensional simulation scenario; a display device for displaying the three-dimensional simulation scenario; a virtual video camera set in a displacement plane positioned within the three-dimensional simulation scenario; a pointer set in the displacement plane; a control system configured to displace the pointer in the displacement plane in response to a manual action of the user; and a processing device configured to control a roto-translation of the virtual video camera in the displacement plane in response to a translation of the pointer in the displacement plane itself.
Abstract: A method and a mobile device to display a specific image at the highest layer of a screen are provided. The mobile device displays moving images at a first region of a screen and, if an event to perform a function in a second region of the screen is received, determines a second region for displaying a specific image associated with the event. The mobile device determines whether the second region is overlapped with the first region. If the second region is overlapped with the first region, the mobile device determines a third region not overlapped with the second region. Thereafter, the mobile device displays the specific image at the second region and displays the moving images at the third region.
Abstract: An information processing apparatus includes a first selection unit configured to select a first object from a plurality of objects displayed on an operation screen, a second selection unit configured to select a second object which is an object different from the first object and is used as a reference when the drawing order of the first object is changed, a drawing order determination unit configured to determine whether the first object is drawn in front of the second object when the first and second objects are drawn according to the drawing order of the plurality of objects, and a drawing order changing unit configured to change the drawing order of the plurality of objects so that the first object is drawn behind the second object if the drawing order determination unit determines that the first object is drawn in front of the second object.
Abstract: A content managing device is provided, which can allow a user to intuitively perform generation of, using an operation that is similar to an operation of sorting content items or to an operation of adding attribute information to content items, a new sort-destination region into which content items are to be sorted. A first display object indicating a content item and a region in which the first display object is to be stored are displayed on a display unit. A position to which the first display object is moved is determined. When the determined position is located in the region, the first display object is stored in the region. When the determined position is not located in the region, a new region is displayed at the position to which the first display object was moved, and the first display object is stored in the new region.
Abstract: Embodiments are directed to displaying data items in a carousel display panel and to efficiently presenting virtualized data in a carousel display panel. In one example, a computer system accesses a list of data items that include at least a first data item and a last data item which are to be displayed in a carousel display panel. The computer system displays the selected portion of data items in the carousel display panel and receives a user input indicating that the last data item in the list is to be displayed in the carousel display panel. The computer system then rotates the data items displayed in the carousel display panel to the last data item. The last data item is thus displayed, along with at least a portion of a second-to-last data item and the first data item in the list.
Type:
Grant
Filed:
November 2, 2011
Date of Patent:
June 24, 2014
Assignee:
Microsoft Corporation
Inventors:
Sonal Jain, Terry A. Adams, Mikhail Shatalin, Hamid Mahmood
Abstract: This method is an improvement to a method used to enlarge the display of a first portion of a map without hiding a first peripheral portion of this map. This improvement involves detecting an event which appears in said first peripheral portion (SPP4′). The improvement further involves defining a second portion to be enlarged (SPA5), centered on the position (BS2) of this event; defining a second peripheral zone (SPP5) associated with this second portion to be enlarged; displaying (SPA5′) the second portion to be enlarged (SPA5) by applying respective enlargement ratios greater than 100% to at least some elements of that portion, so as to make the event more legible; and displaying (SPP5′) the second peripheral portion (SPP5) by applying, at least to some elements of this portion, respective enlargement ratios lower than 100%, so as to save surface area and avoid hiding a portion of the map.
Type:
Grant
Filed:
November 13, 2009
Date of Patent:
June 24, 2014
Assignee:
Alcatel Lucent
Inventors:
Jean-Roch Houllier, Alain Brethereau, Béatrix De Mathan
Abstract: A method of operating a client device within a viewing environment is described. The method includes: receiving content at a client device, presenting the content to a viewer by rendering the content as rendered content on a display surface in operable communication with the client device; receiving engagement data at the client device, the engagement data indicating a level of engagement with the content of at least one user who is viewing the rendered content; and adapting presentation of the content in dependence on the engagement data by changing how the content is rendered on the display surface. Related systems, apparatus, and methods are also described.
Type:
Application
Filed:
May 10, 2012
Publication date:
June 19, 2014
Applicant:
Cisco Technology Inc.
Inventors:
Alex Ashley, Laurent Chauvier, Nicolas Gaude, Hugo Latapie, Kevin A. Murray, Simon John Parnall, James Geoffrey Walker, Neil Cormican, Simon Dyke, Vincent Sattler, Alex Ruelle, Jonathan Pollen, Meir Gerenstadt
Abstract: A control device includes a processor and a control button. The control button includes a key and an analog sensor coupled to the processor. The key is configured to be moved in a first direction and a second direction, which is substantially opposite from the first direction. The analog sensor is configured to detect an amount of movement of the key in the first direction and the second direction and send a control signal to the processor to indicate the amount of movement. Based on the control signal, the processor is configured to control zoom of a graphical object displayed on a computer monitor at a rate that is based on the amount of movement.
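The proportional zoom control described above might be sketched as a linear mapping from key displacement to zoom rate. The linear mapping, the clamping, and all names here are assumptions for illustration.

```python
def zoom_rate(displacement, max_displacement, max_rate):
    """Zoom speed scales linearly with how far the analog key has
    moved; the sign of the displacement selects zoom in vs. out."""
    frac = max(-1.0, min(1.0, displacement / max_displacement))
    return frac * max_rate
```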
Abstract: A system comprising multiple devices that are operable when servicing a device-under-service is described. A DAQ device is operable to generate input data from input signals received from the device-under-service and to transmit the input data to a display device via a wireless network. The DAQ device comprises multiple buffers to store the input data. One of the buffers stores two frames of the input data, which can be live input data or historical input data. At any one time, a display at the display device visually presents one frame of input data for each respective input channel of the DAQ device. At any one time, a display at the DAQ device visually presents a half-frame of input data for each respective input channel of the DAQ device. Another buffer at the DAQ device can store more than two frames of input data, including historical input data.
Abstract: A method, system, and computer program product graphically display attributes associated with virtual images. A set of attributes associated with each virtual image in a plurality of virtual images is analyzed. At least one graph including a plurality of nodes is generated. Each node in the plurality of nodes represents one virtual image in the plurality of virtual images. Each node is graphically displayed with at least one visual indicator. The at least one visual indicator represents at least one attribute in the set of attributes associated with the virtual image represented by the node.
Type:
Grant
Filed:
October 28, 2011
Date of Patent:
June 17, 2014
Assignee:
International Business Machines Corporation
Inventors:
Wim De Pauw, Herbert M. Lee, Peter K. Malkin
Abstract: Some techniques for providing tiles of dynamic content include a service that determines a generation time and update time in response to receiving a request for a particular tile, and that returns the particular tile. The generation time is when the particular tile of dynamic content was most recently generated based on particular vector data associated with the particular tile. The update time is when the particular vector data was most recently updated. The particular tile is generated based on the particular vector data in response to determining that the generation time is not later than the update time. Some techniques include a client that receives data that indicates an estimated time to complete generation of a tile in response to sending a first request for the tile. A second request for the tile is sent at a time based at least in part on the estimated time.
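The generation-time/update-time comparison can be sketched as a small cache check: a tile is regenerated whenever its generation time is not later than the time its vector data was last updated. `serve_tile` and the cache layout are illustrative assumptions.

```python
def serve_tile(cache, tile_id, update_time, now, render):
    """Return a cached tile, re-rendering it first when the vector
    data was updated at or after the tile's generation time."""
    entry = cache.get(tile_id)
    if entry is None or entry["generated_at"] <= update_time:
        cache[tile_id] = {"tile": render(tile_id), "generated_at": now}
    return cache[tile_id]["tile"]
```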
Abstract: In general overview, the present disclosure is directed to a system and method for selectively displaying a frame of an application user interface on a mobile computing device. A user interface analyzer on a mobile computing device analyzes a user interface for an application executing on a remote server. The user interface analyzer identifies frames in the user interface, the positions of the frames, relationships between frames, and horizontal and vertical panning offsets to move between adjacent frames. The mobile computing device receives a user input requesting the display of an adjacent frame. Based on the information the user interface analyzer obtained, the mobile computing device displays an adjacent frame of the user interface.
Type:
Grant
Filed:
May 16, 2011
Date of Patent:
May 20, 2014
Assignee:
Citrix Systems, Inc.
Inventors:
Gus Pinto, Adam Marano, Ruiguo Yang, Christopher Fleck
Abstract: In one embodiment, when an image is displayed on an electronic device, the image may be panned from one portion to another portion based on information associated with the image or a viewer of the image.
Abstract: An ultrasound imaging system and method include receiving a first ultrasound image of a region-of-interest (ROI) and associated first ECG data, the first ultrasound image including an M-mode image or a spectral Doppler image. The system and method include receiving a cine loop of B-mode images acquired from the ROI and associated second ECG data. The system and method include selecting a first phase and displaying, at the same time, a first one of the B-mode images at the first phase, the first ultrasound image, and a marker. The marker is positioned at a first position with respect to the first ultrasound image, the first position indicating the first phase.
Abstract: A video display pipe used for processing pixels of video and/or image frames may include edge Alpha registers for storing edge Alpha values corresponding to the edges of an image to be translated across a display screen. The edge Alpha values may be specified based on the fractional pixel value by which the image is to be moved in the current frame. The video pipe may copy the column and row of pixels that are in the direction of travel, and may apply the edge Alpha values to the copied column and row. The edge Alpha values may control blending of the additional column and row of the translated image with the adjacent pixels in the original frame, providing the effect of the partial pixel movement, simulating a sub-pixel rate of movement.
Type:
Grant
Filed:
February 14, 2011
Date of Patent:
April 29, 2014
Assignee:
Apple Inc.
Inventors:
Joseph P. Bratt, Peter F. Holland, Gokhan Avkarogullari
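The fractional-offset blending described in the abstract above can be sketched for a single edge pixel: the duplicated edge row or column is given an alpha equal to the fractional part of the translation, then blended over the adjacent pixels of the original frame. This scalar model and its names are illustrative, not the register-level mechanism of the patent.

```python
def edge_alpha(offset):
    """The duplicated leading-edge row/column gets an alpha equal to
    the fractional part of the translation offset (0 = invisible,
    1 = fully opaque)."""
    return offset - int(offset)

def blend(edge_pixel, background_pixel, alpha):
    """Alpha-blend the copied edge pixel over the adjacent pixel of
    the original frame, simulating sub-pixel movement."""
    return edge_pixel * alpha + background_pixel * (1.0 - alpha)
```

For an image translated by 2.25 pixels, the integer part is an ordinary 2-pixel shift and the copied edge is blended at 25% opacity, giving the appearance of the remaining quarter-pixel of motion.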
Abstract: An input coordinate point is obtained from a pointing device, and either one of first and second controls is selected in accordance with an operation performed by a user. When the first control is selected, a movement vector is calculated based on the input coordinate point and a predetermined coordinate point, and a display area of a virtual space, which is displayed on a display device, is moved. Alternatively, when the first control is selected, the movement vector is calculated based on the input coordinate point and the predetermined coordinate point, and an object is moved within the virtual space. When the second control is selected, the object is moved to a position in the virtual space, the position corresponding to the input coordinate point. Then, the display device is caused to display the virtual space within the display area.
Abstract: A direction and distance of movement of a display device as well as of a user of the display device are determined. Based on these determined directions and distances of movement, compensation to apply to content displayed on the display device to compensate for movement of the user with respect to the device is determined and applied to the content. A portion of the display device at which the user is looking can also be detected. The compensation is applied to the content only if applying the compensation would not result in the portion being positioned beyond the display device. If applying the compensation would result in the portion being positioned beyond the display device then appropriate corrective action is taken, such as not applying the compensation to the content.
Abstract: Information in a diagram is logically structured using lists, containers, and callouts without requiring the diagram author to explicitly define a structure or map any diagram contents to a structure. Logical relationships are inferred based on actions associated with shapes, groupings, and attributes of shapes/groupings taken by the author. Feedback mechanisms are provided to communicate an underlying structure to the author. Intelligent behaviors are enabled to expose manipulation of diagrams based on their logical structure.
Type:
Application
Filed:
December 31, 2013
Publication date:
April 24, 2014
Applicant:
Microsoft Corporation
Inventors:
Mark Nelson, Mike Woolf, Heidi Munson, David Bradlee, Evan Moran
Abstract: According to the image display method, apparatus, and program of the present invention, when a user moves the display location to partially display text described in an image over a wide area, the burden on the user of performing return operations with respect to the screen display can be reduced by freely selecting and storing a return destination from among the displayed partial regions in accordance with a user instruction, and thereafter returning to the display of the stored partial region in accordance with a further instruction of the user.
Abstract: A first image and a second image of the first image. A display to display the first image and the second image. A sensor to detect an input relative to the display. A processor to determine a task to perform based on the input relative to the first image or the second image on the display.
Abstract: Devices and methods for improving viewing perspective of content displayed on the display screen of a computing device include determining one or more viewing angles relative to a viewer of the content, generating a content transformation to apply a corrective distortion to the content to improve the viewing perspective when viewed at the one or more viewing angles, and rendering the content as a function of the content transformation. The viewing angles relative to a viewer of the content may be determined automatically using viewer location sensors, or may be input manually by the viewer. The content transformation visually scales the content by an appropriate factor to compensate for visual distortion experienced by the viewer at one or more viewing angles. Content may be transformed as a function of a single approximate viewing angle or multiple viewing angles.
Type:
Application
Filed:
September 28, 2012
Publication date:
April 3, 2014
Inventors:
Joshua Boelter, Don G. Meyers, David Stanasolovich, Sudip S. Chahal
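The corrective scaling for an off-axis viewer can be illustrated with the usual foreshortening model: content viewed at angle theta from the screen normal appears compressed by cos(theta), so scaling by the reciprocal restores its apparent proportions. This specific formula is an assumption for illustration, not taken from the abstract.

```python
import math

def corrective_scale(viewing_angle_deg):
    """Content viewed off-axis appears foreshortened by cos(theta);
    scaling by the reciprocal compensates for the distortion."""
    return 1.0 / math.cos(math.radians(viewing_angle_deg))
```

At a 60-degree viewing angle the content would be stretched by a factor of about 2 along the foreshortened axis; at 0 degrees (head-on) no correction is applied.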
Abstract: Techniques may be used to accommodate occlusion. An occlusion accommodation application may determine a display position of a display of a computing device, an eye position of an eye of a user and an object position of an object. The object may be positioned between the display and the eye of the user. The occlusion accommodation application may identify, in real-time, an occluded area based on the display position, object position and the eye position.
Abstract: An apparatus for enabling provision of control over a device display based on device orientation may include at least one processor and at least one memory including computer program code. The at least one memory and the computer program code may be configured, with the processor, to cause the apparatus to perform at least receiving an indication of data associated with a first potential display view, receiving orientation information indicative of an orientation angle of a device including a display, and enabling provision of a display view at the display that includes a variable portion of the first potential display view based on the orientation angle. A corresponding method and computer program product are also provided.
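One way the variable portion could depend on the orientation angle is a linear mapping clamped to the valid range. This mapping is purely an illustrative assumption; the patent does not specify the formula.

```python
def visible_fraction(orientation_deg, full_view_deg=90.0):
    # Assumed mapping: the portion of the first potential display view
    # shown grows linearly with the device's orientation angle,
    # clamped to [0, 1].
    return max(0.0, min(1.0, orientation_deg / full_view_deg))

print(visible_fraction(45))   # → 0.5
print(visible_fraction(120))  # → 1.0
```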
Abstract: An image processing device includes an image generation unit generating a computer graphics image on the basis of computer graphics description data, an image mapping unit texture-mapping an input image to a surface of a computer graphics object drawn by the image generation unit, and a coordinate setting unit undergoing a change manipulation of a texture coordinate and storing contents of the change manipulation, in which the image mapping unit performs texture mapping by a texture coordinate which is changed on the basis of the contents of the change manipulation stored in the coordinate setting unit when texture-mapping the input image to the surface of an object.
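The coordinate setting unit's role, as described, is to record change manipulations and replay them when mapping. A minimal sketch, with assumed names and a simple additive (u, v) offset model:

```python
class CoordinateSetting:
    """Records texture-coordinate change manipulations and applies them."""

    def __init__(self):
        self.changes = []  # stored change manipulations as (du, dv) offsets

    def record_change(self, du, dv):
        # Store the contents of a change manipulation.
        self.changes.append((du, dv))

    def apply(self, u, v):
        # Produce the changed texture coordinate used at mapping time.
        for du, dv in self.changes:
            u, v = u + du, v + dv
        return (u, v)

coords = CoordinateSetting()
coords.record_change(0.1, 0.0)
coords.record_change(0.0, 0.25)
print(coords.apply(0.5, 0.5))
```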
Abstract: An information processing apparatus includes a display processing unit, a setting unit, and an automatic shifting unit. The display processing unit is configured to display, on a display unit, at least a part of a pathological image as a display area. The setting unit is configured to receive, as information necessary to move the display area so as to scan the pathological image, at least information on a position of the display area in the pathological image and information on a method of moving the display area, which are set by a user. The automatic shifting unit is configured to sequentially move the display area based on the set information.
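The automatic shifting unit above moves the display area according to a user-set method so the whole pathological image is scanned. A serpentine (row-by-row, alternating direction) scan is one plausible method; the traversal pattern here is an assumption for illustration.

```python
def scan_positions(image_w, image_h, view_w, view_h):
    # Yield successive display-area positions that sweep the image
    # row by row, reversing direction on each row (serpentine scan).
    y = 0
    direction = 1
    while y + view_h <= image_h:
        xs = range(0, image_w - view_w + 1, view_w)
        for x in (xs if direction == 1 else reversed(list(xs))):
            yield (x, y)
        y += view_h
        direction *= -1

print(list(scan_positions(400, 200, 200, 100)))
# → [(0, 0), (200, 0), (200, 100), (0, 100)]
```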
Abstract: A mobile terminal may include a projection device to project, from a first position of the mobile terminal, visual content onto a projection surface to form a projected image; a sensor to sense movement information indicative of displacement of the mobile terminal with respect to the first position; and a controller to activate, based on the displacement, at least one of a zoom mode or a pan mode, wherein the projection device is further configured to perform, in the at least one of the zoom mode or the pan mode, at least one of zooming or panning with respect to the projected image.
Type:
Grant
Filed:
December 16, 2010
Date of Patent:
February 25, 2014
Assignees:
Sony Corporation, Sony Mobile Communications AB
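The projection abstract above maps device displacement to zoom and pan. One illustrative split (an assumption, not the patent's control law): movement toward or away from the projection surface drives zoom, while lateral movement drives pan.

```python
def interpret_displacement(dx, dy, dz, zoom_gain=0.1, pan_gain=1.0):
    # dz: displacement along the projection axis (negative = toward
    # the surface); dx, dy: lateral displacement. Gains are arbitrary
    # illustrative constants.
    zoom_factor = 1.0 + zoom_gain * dz
    pan = (pan_gain * dx, pan_gain * dy)
    return zoom_factor, pan

# Moving 5 units toward the surface halves the zoom factor here;
# moving 2 units right pans the projected image by 2 units.
print(interpret_displacement(2.0, 0.0, -5.0))  # → (0.5, (2.0, 0.0))
```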
Abstract: New techniques improving display output and computer system input response are provided. In aspects of the invention, a system assesses whether input gesture(s) occur within an area of an output matrix describing an element, within a time period following a substantial and/or activity-affecting change in that area, and nullifies or alters the effect(s) of the input gesture(s) that would otherwise occur, and may instead or also create the effect that would occur if the output matrix had not experienced the relevant substantial and/or activity-affecting change, and may also reverse, alter, augment or otherwise address the substantial and/or activity-affecting change in that area of the output matrix to enhance the user experience. In other aspects, an object-based projection method increases efficiency and decreases output matrix judder. In additional aspects, a new form of pixel and array, with variably-angled variably-curved pixel subsections, assists in further smoothing edges between objects.
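The first aspect above — nullifying gestures that land in a recently changed display area — could be sketched as follows. Class name, grace window, and rectangle representation are illustrative assumptions.

```python
import time

class GestureGuard:
    """Nullifies taps landing in a display region that just changed,
    so the user does not activate an element that moved under their
    finger."""

    def __init__(self, grace_seconds=0.5):
        self.grace = grace_seconds
        self.changes = []  # (region, timestamp) of substantial changes

    def note_change(self, region):
        # region is (x0, y0, x1, y1) of the changed output-matrix area.
        self.changes.append((region, time.monotonic()))

    def should_nullify(self, x, y):
        now = time.monotonic()
        for (x0, y0, x1, y1), t in self.changes:
            if now - t <= self.grace and x0 <= x <= x1 and y0 <= y <= y1:
                return True
        return False

guard = GestureGuard()
guard.note_change((0, 0, 100, 100))
print(guard.should_nullify(50, 50))   # → True
print(guard.should_nullify(200, 50))  # → False
```

A fuller implementation could also replay the effect the gesture would have had against the pre-change layout, as the abstract suggests.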
Abstract: An image processing apparatus has stored therein in advance, as image conversion parameters to coordinate-convert images acquired by in-vehicle cameras incorporated at different positions in an own vehicle, directions connecting between a sight-line starting position of a driver and predetermined positions of the own vehicle, values of a depression angle from the sight-line starting position of the driver, and a range that the driver is caused to visualize, for each of the in-vehicle cameras, corresponding to a state of the own vehicle. The image processing apparatus receives from the driver an input of a display output condition, and determines a current state of the own vehicle. The image processing apparatus acquires the image conversion parameters based on the display output condition and the current state of the own vehicle, and converts images captured by the in-vehicle cameras, by using the acquired image conversion parameters, and outputs the images.
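The parameter selection described above is essentially a lookup keyed by the requested display output condition and the current vehicle state. A minimal sketch with assumed keys and parameter fields:

```python
# Conversion parameters stored in advance, keyed by (display output
# condition, vehicle state). The keys and values here are illustrative,
# not from the patent.
CONVERSION_PARAMS = {
    ("rear_view", "reversing"): {"depression_angle_deg": 30, "range_m": 5},
    ("side_view", "parking"):   {"depression_angle_deg": 20, "range_m": 3},
}

def select_params(display_condition, vehicle_state):
    # Acquire the image conversion parameters matching the driver's
    # requested display output condition and the vehicle's current state.
    return CONVERSION_PARAMS.get((display_condition, vehicle_state))

print(select_params("rear_view", "reversing"))
# → {'depression_angle_deg': 30, 'range_m': 5}
```

The selected parameters would then drive the coordinate conversion of each in-vehicle camera's image before output.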