Abstract: The present disclosure relates to a control system for user-guided robotic control of a medical device and includes an electronic control unit (ECU), a computer-readable memory coupled to the ECU, and a visualization system configured to provide a view of an anatomical model. The memory contains user interface logic configured to be executed by the ECU and to obtain input from a touch screen display with respect to the view of the anatomical model. Control logic stored in the memory is also configured to be executed by the ECU and to produce an actuation control signal, responsive to the input, to control actuation of a manipulator assembly so as to move the medical device.
Type:
Grant
Filed:
March 31, 2011
Date of Patent:
February 13, 2018
Assignee:
St. Jude Medical, Atrial Fibrillation Division, Inc.
Inventors:
Eric S. Olson, John A. Hauck, Nicholas A. Patronik, Cem M. Shaquer
Abstract: There is provided an image capturing control apparatus. A selection unit selects, as a setting relating to shooting of a moving image, any of a plurality of settings having respectively different ratios of a shooting time to a playback time. A display control unit performs control: such that a predetermined shooting time and a playback time that corresponds to the predetermined shooting time and is different for each setting are displayed for a setting in which a shooting time is shorter than a playback time, out of the plurality of settings; and such that a predetermined playback time and a shooting time that corresponds to the predetermined playback time and is different for each setting are displayed for a setting in which a shooting time is longer than a playback time, out of the plurality of settings.
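The two display rules described above can be sketched as a small helper: for slow-motion settings (shooting shorter than playback) a fixed shooting time is shown with the resulting playback time, and for time-lapse settings the fixed playback time is shown with the shooting time needed to produce it. This is a minimal illustration, not the apparatus's actual logic; the function name and the 10-second default are assumptions.

```python
def display_times(ratio, fixed_seconds=10):
    """Return (shooting_time, playback_time) to display for a setting.

    ratio = shooting_time / playback_time for the recording setting.
    ratio < 1: slow motion -> fix the shooting time, derive playback time.
    ratio > 1: time lapse  -> fix the playback time, derive shooting time.
    """
    if ratio < 1:                      # slow motion
        shooting = fixed_seconds
        playback = shooting / ratio
    else:                              # time lapse
        playback = fixed_seconds
        shooting = playback * ratio
    return shooting, playback
```

For example, a 0.25 ratio (quarter-speed slow motion) displays 10 s of shooting producing 40 s of playback, while a 30x time lapse displays that 300 s of shooting are needed for 10 s of playback.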
Abstract: One or more representative images extracted from an image group comprising a plurality of images is/are displayed. A part or all of the representative image or images, such as a main subject region or a background region including a search target, is/are selected from the representative image or images, and used for setting search conditions. The image group is searched for an image or images agreeing with the search conditions having been set.
Abstract: A method includes maintaining images and associated video streams; detecting a swipe gesture of a user on a touch sensitive display, wherein the swipe gesture comprises direction and speed information; and selecting an image based on the direction information of the swipe gesture. The method includes adjusting a playback of an associated video stream based on the direction and the speed information of the swipe gesture; providing the associated video stream on the display during the swipe gesture, for creating a motion effect relating to the image; and providing the image on the display after the swipe gesture.
Abstract: In a composite image generation apparatus that is mounted in an own vehicle, each of the captured images captured by a plurality of imaging units is acquired. A disturbance level for each of the plurality of captured images that have been acquired is determined. The disturbance level indicates whether or not a disturbance is present in the captured image, or an extent of the disturbance. In an overlapping area in which imaging areas of the plurality of captured images overlap, one or more captured images are selected from the plurality of captured images so that the area occupied by a captured image having a higher disturbance level among the plurality of captured images is smaller. A composite image is generated based on the plurality of captured images. In the overlapping area, the composite image is generated by using the captured image selected based on the disturbance level.
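The selection step for an overlapping area can be sketched as picking the captured image with the lowest disturbance level, so that more disturbed images occupy a smaller (here, zero) share of the overlap. This is a sketch of the selection only, not of the full composite (e.g. bird's-eye-view) synthesis; the data shapes are assumptions.

```python
def select_for_overlap(images, disturbance):
    """Choose the captured image to use in an overlapping area.

    images: dict mapping camera name -> pixel rows for the overlap area.
    disturbance: dict mapping camera name -> disturbance level
                 (0 = no disturbance; higher = more disturbed).
    Returns the pixels of the least-disturbed captured image.
    """
    best_camera = min(disturbance, key=disturbance.get)
    return images[best_camera]
```

In a real system the selection could instead blend the overlap with weights inversely related to the disturbance levels; the hard selection above is the simplest case.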
Abstract: A method for video editing using a mobile terminal and a remote computer is disclosed. A user selects a user video to edit using a mobile application of the mobile terminal. The user selects a visual effect and parameters of the visual effect using the mobile application. Subsequently, the mobile application provides a preview of the visual effect superimposed over the user video using a series of still images representing the visual effect. When the user confirms the preview, the mobile terminal generates a request for video editing and sends the request to a server. The request includes identification of the visual effect for combining the visual effect and the user video as confirmed by the preview. Based on the request from the mobile terminal, the server combines a video clip of the visual effect and the user video into a resulting video.
Abstract: A geodetic surveying device is equipped with an automatic target point sighting functionality for determining the position of a target point. A reticle pattern that corresponds to the outer shape of the known reticle is stored, wherein a main point of the reticle pattern is predefined as indicating the target point. In order to carry out the automatic target point sighting functionality, the evaluation means are designed such that, after the function start, a camera image of the reticle is automatically recorded, the reticle pattern is aligned with the reticle in the camera image by means of image processing and, depending on a position of the main point in the camera image in the matched state of the reticle pattern, the orientation of the sighting apparatus is changed in a motorized manner such that the optical target axis OA is oriented with high precision at the target point.
Abstract: The present technology relates to a reproduction device, a reproduction method, and a recording medium capable of displaying graphics with a broader dynamic range of luminance and appropriate brightness. The reproduction device reads a Tone_map stream and a graphics stream of extended graphics from a recording device, wherein the recording device records the Tone_map stream including HDR information indicating a luminance feature of the extended graphics, which are first graphics with a first luminance range different from and broader than a second luminance range, and luminance conversion definition information used in luminance conversion from the extended graphics to standard graphics. The standard graphics are graphics with the second luminance range. The reproduction device converts the extended graphics into the standard graphics based on the luminance conversion definition information of the extended graphics included in the Tone_map stream.
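The luminance conversion from extended (HDR) graphics to standard graphics can be illustrated with a simple knee curve: values below a knee point pass through unchanged, and the range above it is compressed into the remaining standard-range headroom. The actual conversion is defined by the luminance conversion definition information in the Tone_map stream; the knee curve and the specific nit values below are only illustrative assumptions.

```python
def to_standard_luminance(hdr_nits, knee=80.0, max_sdr=100.0, max_hdr=1000.0):
    """Convert an extended-graphics luminance value (in nits) into the
    standard (second) luminance range using a linear knee curve.

    Below the knee point, luminance is passed through unchanged; above
    it, the span [knee, max_hdr] is compressed into [knee, max_sdr].
    """
    if hdr_nits <= knee:
        return hdr_nits
    return knee + (hdr_nits - knee) / (max_hdr - knee) * (max_sdr - knee)
```

With these illustrative parameters, 50 nits maps to 50 nits, while the HDR peak of 1000 nits is compressed down to the 100-nit standard peak.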
Abstract: A display apparatus is connected to an external apparatus that executes a recording process for image data. A display panel displays an image based on received image data. A memory storing a program executed by the processor causes the display apparatus to set an image-quality adjusting parameter for the received image data, to execute an image-quality adjusting process on the received image data using the image-quality adjusting parameter set for the received image data, to detect a start of the recording process for the image data executed by the external apparatus, and to perform control in which, in response to the detection of the start of the recording process for the image data executed by the external apparatus, the image-quality adjusting parameter that has been set for the received image data is automatically recorded in a storage as an image-quality adjusting parameter corresponding to the data recorded by the external apparatus.
Abstract: Systems and methods for customizing video include providing a portion of video to an electronic display and identifying a character or personality in the portion of video. A request to perform an action regarding the portion of video may be detected and the action may be associated with the identified character or personality. The action may be performed on a second portion of video in response to the character or personality being identified in the second portion of video.
Type:
Grant
Filed:
December 29, 2014
Date of Patent:
November 21, 2017
Assignee:
Google Inc.
Inventors:
Dean Kenneth Jackson, Daniel Victor Klein
Abstract: A recorder receives, from a user, designation of a video to be reproduced. If the recorder receives from the user, via an operation unit during reproduction or temporary stopping of the video, designation of one or more locations on a screen of a display where sound is to be emphasized, a signal processing unit performs an emphasis process on the audio data recorded in the recorder; that is, it emphasizes audio data in directions directed from a microphone array toward positions corresponding to the designated locations. A reproducing device reproduces the emphasis-processed audio data and the video data in synchronization with each other.
Abstract: A system control unit determines whether, in a moving image recorded in a temporally continuous manner, an angle of view is changed or an image with an aspect ratio different from the aspect ratio of the moving image is inserted, and when it is determined that the angle of view or the aspect ratio is changed, an image processing unit applies a predetermined effect to the moving image.
Abstract: A head-mounted display includes a playback processor which plays back a moving image from a first time until a second time, a box inside which a container holding a perfume is placed, a filled section which is temporarily filled with the perfume and emits the temporarily filled perfume in accordance with the start of playback of the moving image, and a tubular vent hole which is in contact with a nose of a user when the user wears the head-mounted display.
Type:
Grant
Filed:
May 10, 2015
Date of Patent:
October 31, 2017
Assignee:
PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Abstract: A method for video browsing includes comparing a current image frame with a previous image frame prior to the current image frame in a video to obtain target block information, identifying the current image frame as a keyframe if the target block information satisfies a predetermined condition, and playing the keyframe.
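The block-comparison step of the keyframe method above can be sketched as follows: the frames are split into blocks, each block's mean brightness is compared against the previous frame, and the frame is identified as a keyframe when the fraction of changed blocks exceeds a threshold. The block size, per-block change criterion, and threshold are illustrative assumptions, not values from the patent.

```python
def is_keyframe(current, previous, block=2, threshold=0.5, delta=10):
    """Compare current vs. previous frame block-by-block.

    Frames are lists of equal-length rows of grayscale values.  A block
    counts as changed when its mean brightness differs by more than
    `delta`; the frame is a keyframe when the changed-block fraction
    exceeds `threshold`.
    """
    h, w = len(current), len(current[0])
    changed = total = 0
    for y in range(0, h, block):
        for x in range(0, w, block):
            cur = [current[j][i] for j in range(y, min(y + block, h))
                                 for i in range(x, min(x + block, w))]
            prev = [previous[j][i] for j in range(y, min(y + block, h))
                                   for i in range(x, min(x + block, w))]
            total += 1
            if abs(sum(cur) / len(cur) - sum(prev) / len(prev)) > delta:
                changed += 1
    return changed / total > threshold
```

Only frames flagged this way would then be played during browsing, skipping visually redundant stretches of the video.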
Abstract: An athletic training system (200) has a data recording system (202) and a data engine (204). The data recording system (202) is configured to record an athletic competition event. The event may have a first team of players competing against a second team of players. The data engine (204) is configured to receive data associated with the recorded athletic competition event. The data engine (204) processes the data and displays the data as a replay of the event in animated form.
Abstract: A method and system for providing immersive user experience in a social experience (SE) environment by allowing users to create content Bookmarks. The SE environment amalgamates content received from various sources available in the network. The method streams the amalgamated content to the users through an SE server. The SE server provides enhanced experience service to the users by allowing the users to store, retrieve, and share the created Bookmarks with other users.
Abstract: A method for recording media content based on media content fingerprints is described. A media device records content received in a content stream from a scheduled start time of a media content to a scheduled end time of the media content to create a content recording. The media device derives one or more fingerprints from the content recording. A fingerprint database is queried with the one or more fingerprints to determine that a first portion of the content recording comprises an advertisement and a second portion of the content recording comprises the media content. In response to a request to play the content recording, playback starts at the beginning of the second portion of the content recording.
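The fingerprint-query step can be sketched as finding the first segment of the recording that the database identifies as the scheduled media content, so playback starts there and skips the leading advertisement portion. The database shape (a mapping from fingerprint to label) and the labels themselves are assumptions for the sketch.

```python
def content_start_index(segment_fingerprints, fingerprint_db):
    """Return the index of the first recording segment whose fingerprint
    the database identifies as the media content itself.

    segment_fingerprints: fingerprints derived from successive segments
    of the content recording, in order.
    fingerprint_db: dict mapping fingerprint -> 'advertisement' | 'content'.
    Falls back to 0 (the recording start) if nothing matches.
    """
    for i, fp in enumerate(segment_fingerprints):
        if fingerprint_db.get(fp) == 'content':
            return i
    return 0
```

A player would then seek to the segment at the returned index before starting playback of the content recording.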
Abstract: There is provided an information processing apparatus including a position change detecting unit that detects a position change of an operating body on a screen, a playback state control unit that controls a playback state of a content, and a display control unit that at least displays a part or all of a text list in which text data items associated with elapsed times in a playback of the content are sorted in an order of the elapsed times, on the screen. The playback state control unit controls the playback state of the content in response to a continuous position change of the operating body detected by the position change detecting unit on the text list displayed by the display control unit.
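The playback control described above can be sketched as scrubbing: as the operating body moves continuously over the sorted text list, the item it is over determines the elapsed time to which content playback is set. The data shape (a list of `(elapsed_seconds, text)` pairs sorted by time) is an assumption for the sketch.

```python
def seek_time_for_drag(text_items, drag_index):
    """Return the playback position for a continuous drag over a text list.

    text_items: [(elapsed_seconds, text), ...] sorted by elapsed time,
    i.e. text data items associated with elapsed times in the content.
    drag_index: index of the list item the operating body is currently
    over, clamped to the list bounds.
    """
    drag_index = max(0, min(drag_index, len(text_items) - 1))
    return text_items[drag_index][0]
```

Calling this on every position-change event reported by the position change detecting unit keeps the content's playback state in step with the drag.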