For Obtaining an Image Which Is Composed of Images from a Temporal Image Sequence, e.g., for a Stroboscopic Effect (EPO) Patents (Class 348/E5.054)
-
Patent number: 12212858
Abstract: Systems and methods are provided for performing image processing. An exemplary method includes: receiving neuromorphic camera data from at least one neuromorphic camera; producing, using the neuromorphic camera data, a plurality of unaligned event data, where the unaligned event data are unaligned in time; aligning the unaligned event data in time, producing aligned event data; generating, using a model and the aligned event data, a plurality of aligned event volumes; and aggregating the aligned event volumes to produce an image.
Type: Grant
Filed: April 25, 2023
Date of Patent: January 28, 2025
Assignee: Raytheon Company
Inventors: Kin Gwn Lore, Ganesh Sundaramoorthi, Kishore K. Reddy
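The claim leaves the alignment model unspecified; the sketch below only illustrates the general pipeline the abstract describes: binning asynchronous events onto a common time axis (alignment), stacking them into volumes, and summing the volumes into an image. The sensor size, number of bins, and event layout are assumptions, not the patented method.

```python
import numpy as np

def events_to_aligned_volumes(events, sensor_shape, n_bins):
    """Bin asynchronous (t, x, y, polarity) events into time-aligned volumes."""
    t = events[:, 0]
    # Normalize timestamps to [0, n_bins) so all events share a common time axis.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (n_bins - 1e-6)
    bins = t_norm.astype(int)
    volume = np.zeros((n_bins, *sensor_shape), dtype=np.float32)
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3]  # polarity in {-1, +1}
    np.add.at(volume, (bins, y, x), p)
    return volume

def aggregate_volumes(volume):
    """Collapse the time axis to produce a single intensity-like image."""
    return volume.sum(axis=0)

# Example: 1,000 synthetic events on an assumed 64x64 sensor.
rng = np.random.default_rng(0)
events = np.column_stack([
    rng.uniform(0, 1, 1000),      # timestamps
    rng.integers(0, 64, 1000),    # x
    rng.integers(0, 64, 1000),    # y
    rng.choice([-1, 1], 1000),    # polarity
])
image = aggregate_volumes(events_to_aligned_volumes(events, (64, 64), n_bins=8))
print(image.shape)  # (64, 64)
```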
-
Patent number: 12125168
Abstract: A photographing method, device and system, and a non-transitory computer-readable storage medium are disclosed. The photographing method may include: sending a first image or a feature thereof to a cloud server, where the first image is acquired through a built-in camera of the mobile terminal (100); and receiving a third image from the cloud server, which is determined by the cloud server, in response to meeting a first preset condition based on a second image captured by a camera arranged on-site, and performing detail enhancement on the first image according to the third image which meets the first preset condition, where the first preset condition includes: a capture time interval between the third image and the first image being less than or equal to a preset time interval, and the third image being able to be utilized to enhance the first image partly or entirely (101).
Type: Grant
Filed: June 30, 2020
Date of Patent: October 22, 2024
Assignee: ZTE CORPORATION
Inventor: Zhen Liang
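As a rough illustration of the claimed flow, the sketch below checks the capture-time-interval condition and then transfers high-frequency detail from the cloud-supplied image to the locally captured one. The threshold value, the blur-based detail extraction, and the image shapes are assumptions for illustration, not what the patent specifies.

```python
import numpy as np

PRESET_INTERVAL_S = 5.0  # assumed threshold; the abstract leaves the value open

def meets_preset_condition(t_first, t_third, max_interval=PRESET_INTERVAL_S):
    """First preset condition: capture times are close enough to reuse detail."""
    return abs(t_third - t_first) <= max_interval

def enhance_with_reference(first_img, third_img):
    """Toy detail enhancement: add the reference image's high-frequency residual."""
    blur = lambda im: (im + np.roll(im, 1, 0) + np.roll(im, -1, 0)
                       + np.roll(im, 1, 1) + np.roll(im, -1, 1)) / 5.0
    detail = third_img - blur(third_img)   # high-frequency content of reference
    return np.clip(first_img + detail, 0.0, 1.0)

first = np.random.rand(32, 32)   # image from the phone's built-in camera
third = np.random.rand(32, 32)   # higher-quality on-site image from the cloud
if meets_preset_condition(t_first=100.0, t_third=102.5):
    enhanced = enhance_with_reference(first, third)
    print(enhanced.shape)
```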
-
Patent number: 11610409
Abstract: Examples disclosed herein may involve (i) based on an analysis of 2D data captured by a vehicle while operating in a real-world environment during a window of time, generating a 2D track for at least one object detected in the environment comprising one or more 2D labels representative of the object, (ii) for the object detected in the environment: (a) using the 2D track to identify, within a 3D point cloud representative of the environment, 3D data points associated with the object, and (b) based on the 3D data points, generating a 3D track for the object that comprises one or more 3D labels representative of the object, and (iii) based on the 3D point cloud and the 3D track, generating a time-aggregated, 3D visualization of the environment in which the vehicle was operating during the window of time that includes at least one 3D label for the object.
Type: Grant
Filed: February 1, 2021
Date of Patent: March 21, 2023
Assignee: Woven Planet North America, Inc.
Inventors: Rupsha Chaudhuri, Kumar Hemachandra Chellapilla, Tanner Cotant Christensen, Newton Ko Yue Der, Joan Devassy, Suneet Rajendra Shah
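A minimal sketch of the core association step only: projected point coordinates are assumed to be given, 3D points falling inside a frame's 2D label box are selected, and an axis-aligned 3D box is fitted to them. Camera projection, tracking across frames, and the time-aggregated visualization itself are omitted, and all names here are placeholders.

```python
import numpy as np

def points_in_2d_box(points_3d, points_2d, box_2d):
    """Select the 3D points whose image projections fall inside a 2D label box."""
    x0, y0, x1, y1 = box_2d
    u, v = points_2d[:, 0], points_2d[:, 1]
    mask = (u >= x0) & (u <= x1) & (v >= y0) & (v <= y1)
    return points_3d[mask]

def fit_3d_label(points_3d):
    """Axis-aligned 3D box (min corner, max corner) around the object's points."""
    return points_3d.min(axis=0), points_3d.max(axis=0)

# One frame of a track: random cloud plus known projections (assumed given here).
rng = np.random.default_rng(1)
cloud = rng.uniform(-10, 10, (5000, 3))
proj = rng.uniform(0, 640, (5000, 2))     # precomputed image projections
track_2d = [(100, 120, 220, 260)]         # 2D labels, one per frame
labels_3d = [fit_3d_label(points_in_2d_box(cloud, proj, box)) for box in track_2d]
print(labels_3d[0])
```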
-
Patent number: 10430050
Abstract: An apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: based on a received user selection of one or more of a plurality of displayed user selectable markers (1505a-f), the user selectable markers each corresponding to one or more of the plurality of time adjacent images in a sequence of images, causing an editing function associated with the one or more images that correspond to the selected marker to be performed.
Type: Grant
Filed: January 31, 2014
Date of Patent: October 1, 2019
Assignee: Nokia Technologies Oy
Inventors: Dan Spjuth, Shahil Soni, David Fredh, Esa Kankaanpää, Johan Windmark, Ari Liusaari, Martin Karlsson
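A toy sketch of the marker-to-frames mapping the abstract describes: each selectable marker covers a run of time-adjacent images, and selecting it applies an editing function to just those frames. The marker identifiers reuse the reference signs from the abstract, but the frame representation and the edit operation are invented placeholders.

```python
# Each marker covers a run of time-adjacent frames in the sequence.
markers = {
    "1505a": range(0, 10),
    "1505b": range(10, 25),
    "1505c": range(25, 40),
}

frames = [f"frame_{i}" for i in range(40)]   # stand-in for decoded images

def apply_edit(frame):
    """Placeholder editing function (e.g., crop, filter, delete)."""
    return frame + "_edited"

def on_marker_selected(marker_id):
    """Edit only the frames that correspond to the selected marker."""
    for i in markers[marker_id]:
        frames[i] = apply_edit(frames[i])

on_marker_selected("1505b")
print(frames[9], frames[10])   # frame_9 frame_10_edited
```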
-
Patent number: 9936217
Abstract: A method and encoder for video encoding a sequence of frames is provided. The method comprises: receiving a sequence of frames depicting a moving object; predicting a movement of the moving object in the sequence of frames between a first time point and a second time point; defining, on basis of the predicted movement of the moving object, a region of interest (ROI) in the frames which covers the moving object during its entire predicted movement between the first time point and the second time point; and encoding a first frame, corresponding to the first time point, in the ROI and one or more intermediate frames, corresponding to time points being intermediate to the first and the second time point, in at least a subset of the ROI using a common encoding quality pattern defining which encoding quality to use in which portion of the ROI.
Type: Grant
Filed: November 25, 2015
Date of Patent: April 3, 2018
Assignee: AXIS AB
Inventors: Jiandan Chen, Markus Skans, Willie Betschart, Mikael Pendse, Alexandre Martins
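The sketch below illustrates the idea with a linear motion model (an assumption): predicted object boxes are merged into one ROI covering the whole movement, and a single quality map, reused for the first and intermediate frames, assigns better quality (lower QP) inside that ROI. The QP values and frame size are made up for illustration.

```python
import numpy as np

def predict_boxes(box0, velocity, n_frames):
    """Linear motion model: shift the object's box by a constant velocity per frame."""
    x, y, w, h = box0
    vx, vy = velocity
    return [(x + vx * i, y + vy * i, w, h) for i in range(n_frames)]

def roi_covering_movement(boxes):
    """Smallest rectangle containing the object over its whole predicted path."""
    xs = [b[0] for b in boxes] + [b[0] + b[2] for b in boxes]
    ys = [b[1] for b in boxes] + [b[1] + b[3] for b in boxes]
    return min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)

def quality_pattern(frame_shape, roi, roi_qp=22, background_qp=38):
    """Common encoding-quality map: lower QP (better quality) inside the ROI."""
    qp = np.full(frame_shape, background_qp, dtype=np.int32)
    x, y, w, h = (int(v) for v in roi)
    qp[y:y + h, x:x + w] = roi_qp
    return qp

boxes = predict_boxes(box0=(40, 60, 32, 32), velocity=(4, 0), n_frames=10)
roi = roi_covering_movement(boxes)
qp_map = quality_pattern((480, 640), roi)   # reused for the first and intermediate frames
print(roi, qp_map.min(), qp_map.max())
```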
-
Patent number: 9703461
Abstract: A method and apparatus for creating media content. The method comprises recording a video; while the video is being recorded, automatically analyzing the content of the video; and creating media content by editing the video, assisted by the results of the content-analysis. A user may not need to select in advance (that is, before the video is recorded) the type or format of media content to be created.
Type: Grant
Filed: August 14, 2014
Date of Patent: July 11, 2017
Assignee: NXP B.V.
Inventor: Benoit Brieussel
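A simplified sketch of the workflow: frames are scored as they are "captured", and those scores then guide an automatic edit that keeps the highest-scoring frames. The contrast-based score and the keep-N edit are stand-ins for whatever analysis and editing the patent actually covers.

```python
import numpy as np

def frame_score(frame):
    """Toy per-frame content analysis: use local contrast as an 'interest' score."""
    return float(frame.std())

def record_and_analyze(n_frames=120, shape=(48, 64)):
    """Simulate capture: score each frame as it arrives, alongside recording."""
    rng = np.random.default_rng(2)
    frames, scores = [], []
    for _ in range(n_frames):
        frame = rng.random(shape)            # stand-in for a captured frame
        frames.append(frame)
        scores.append(frame_score(frame))    # analysis happens during recording
    return frames, scores

def edit_with_analysis(frames, scores, keep=30):
    """Create the media content: keep the most 'interesting' frames, in order."""
    keep_idx = sorted(np.argsort(scores)[-keep:])
    return [frames[i] for i in keep_idx]

frames, scores = record_and_analyze()
highlight = edit_with_analysis(frames, scores)
print(len(highlight))   # 30
```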
-
Patent number: 9392174
Abstract: Systems, methods, and non-transitory computer-readable media can capture media content including an original set of frames. A selection of a time-lapse amount can be received. A subset of frames from the original set of frames can be identified based on the time-lapse amount. An orientation-based image stabilization process can be applied to the subset of frames to produce a stabilized subset of frames. A stabilized time-lapse media content item can be provided based on the stabilized subset of frames.
Type: Grant
Filed: December 11, 2014
Date of Patent: July 12, 2016
Assignee: Facebook, Inc.
Inventors: Thomas Dimson, Alexandre Karpenko
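A rough sketch under two assumptions: the time-lapse amount maps to a simple every-Nth-frame subsampling, and orientation is reduced to a per-frame roll angle that is counter-rotated out (here via scipy.ndimage.rotate). The actual orientation-based stabilization is sensor-driven and more involved than this.

```python
import numpy as np
from scipy.ndimage import rotate  # simple roll-angle correction for the sketch

def select_time_lapse_frames(frames, time_lapse_amount):
    """Keep every Nth frame, where N is derived from the chosen time-lapse amount."""
    return frames[::time_lapse_amount]

def stabilize(frames, roll_angles_deg):
    """Counter-rotate each frame by its recorded roll to remove device rotation."""
    return [rotate(f, -a, reshape=False, order=1)
            for f, a in zip(frames, roll_angles_deg)]

rng = np.random.default_rng(3)
original = [rng.random((64, 64)) for _ in range(240)]   # original set of frames
angles = list(rng.normal(0, 2, 240))                    # per-frame roll from device sensors

subset = select_time_lapse_frames(original, time_lapse_amount=8)
subset_angles = select_time_lapse_frames(angles, 8)
stabilized = stabilize(subset, subset_angles)
print(len(stabilized))                                  # 30
```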
-
Patent number: 7663622
Abstract: Provided are a unified framework based on extensible styles for 3D non-photorealistic rendering and a method of configuring the framework. The unified framework includes: 3D model data processing means for generating a scene graph by converting a 3D model input into 3D data and organizing the scene graph using vertexes, faces, and edges; face painting means for selecting a brusher to paint faces (interiors) of the 3D model using the scene graph; line drawing means for extracting line information from the 3D model using the scene graph and managing the extracted line information; style expressing means for generating a rendering style for the 3D model and storing the rendering style as a stroke, the rendering style being equally applied to a face-painting method and a line-drawing method; and rendering means for combining the stroke and the selected brusher to render the 3D model using both the face-painting method and the line-drawing method.
Type: Grant
Filed: December 7, 2006
Date of Patent: February 16, 2010
Assignee: Electronics and Telecommunications Research Institute
Inventors: Sung Ye Kim, Ji Hyung Lee, Bo Youn Kim, Hee Jeong Kim, Bon Ki Koo
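The class structure below is only a schematic of how the described pieces might fit together: a scene graph of vertices, faces, and edges; a brusher for face painting; a line drawer; a stored stroke style shared by both; and a render step that combines them. All class names and methods here are invented for illustration, not the framework's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class SceneGraph:
    """3D model organized as vertices, faces, and edges."""
    vertices: list
    faces: list
    edges: list

@dataclass
class Stroke:
    """A stored rendering style applied to both faces and lines."""
    name: str
    params: dict = field(default_factory=dict)

class Brusher:
    """Paints the interiors (faces) of the model in a given style."""
    def paint(self, scene, stroke):
        return [f"face {i} painted with {stroke.name}" for i, _ in enumerate(scene.faces)]

class LineDrawer:
    """Extracts and renders outline/feature lines in the same style."""
    def extract(self, scene):
        return scene.edges
    def draw(self, lines, stroke):
        return [f"line {l} drawn with {stroke.name}" for l in lines]

def render(scene, brusher, line_drawer, stroke):
    """Combine face painting and line drawing under one extensible style."""
    return brusher.paint(scene, stroke) + line_drawer.draw(line_drawer.extract(scene), stroke)

scene = SceneGraph(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                   faces=[(0, 1, 2)], edges=[(0, 1), (1, 2), (2, 0)])
print(render(scene, Brusher(), LineDrawer(), Stroke("pencil")))
```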