Patents by Inventor Vladimir Zlokolica

Vladimir Zlokolica has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230033352
    Abstract: The present disclosure generally pertains to a time-of-flight object detection circuitry configured to: obtain reflectivity data being indicative of reflectivity of a scene; determine the reflectivity of the scene; determine a region of an object in the scene based on the determined reflectivity; and generate time-of-flight image data based on the determined region of the object for detecting the object.
    Type: Application
    Filed: December 16, 2020
    Publication date: February 2, 2023
    Applicant: Sony Semiconductor Solutions Corporation
    Inventors: Aliaksandr KAMOVICH, Vladimir ZLOKOLICA
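The abstract above outlines a pipeline: estimate the scene's reflectivity, locate an object region from it, and generate time-of-flight image data only for that region. A minimal sketch of the idea in Python — the inverse-square reflectivity model, the threshold value, and all function names are assumptions for illustration, not the patented implementation:

```python
import numpy as np

def detect_object_region(amplitude, depth, refl_thresh=0.5):
    """Hypothetical sketch: estimate per-pixel reflectivity from ToF
    amplitude and depth, then locate an object region by thresholding.
    Assumes the received amplitude falls off with the square of the
    distance, so reflectivity is proportional to amplitude * depth**2."""
    reflectivity = amplitude * depth**2
    reflectivity = reflectivity / reflectivity.max()
    mask = reflectivity > refl_thresh
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None  # no high-reflectivity region found
    # Bounding box (y_min, y_max, x_min, x_max) of the detected region;
    # ToF image data would then be generated for this region only.
    return (int(ys.min()), int(ys.max()), int(xs.min()), int(xs.max()))

# Toy 4x4 scene: a bright object in the top-left corner, uniform depth.
amp = np.array([[9, 9, 1, 1],
                [9, 9, 1, 1],
                [1, 1, 1, 1],
                [1, 1, 1, 1]], dtype=float)
dep = np.ones((4, 4))
print(detect_object_region(amp, dep))  # (0, 1, 0, 1)
```

Restricting ToF capture or readout to the detected region is one plausible way to reduce power and data volume, which is a common motivation for region-based ToF processing.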
  • Publication number: 20230003894
Abstract: The present disclosure generally pertains to a time-of-flight imaging circuitry configured to: obtain first image data from an image sensor, the first image data being indicative of a scene, which is illuminated with spotted light; determine a first image feature in the first image data; obtain second image data from the image sensor, the second image data being indicative of the scene; determine a second image feature in the second image data; estimate a motion of the second image feature with respect to the first image feature; and merge the first and the second image data based on the estimated motion.
    Type: Application
    Filed: December 15, 2020
    Publication date: January 5, 2023
    Applicant: Sony Semiconductor Solutions Corporation
    Inventors: Vladimir ZLOKOLICA, Alex KAMOVITCH
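This abstract describes feature-based motion estimation between two frames of a spot-illuminated scene, followed by a motion-compensated merge. A toy Python sketch — using the brightest pixel as the "image feature" and a pure translation model, both of which are simplifying assumptions rather than the patented method:

```python
import numpy as np

def estimate_shift(img1, img2):
    """Hypothetical sketch: treat the brightest pixel in each frame as
    the image feature and estimate motion as the offset between them."""
    p1 = np.unravel_index(np.argmax(img1), img1.shape)
    p2 = np.unravel_index(np.argmax(img2), img2.shape)
    return (int(p2[0] - p1[0]), int(p2[1] - p1[1]))

def merge_motion_compensated(img1, img2):
    """Shift the second frame back by the estimated motion so the
    features align, then average the two frames."""
    dy, dx = estimate_shift(img1, img2)
    aligned = np.roll(img2, (-dy, -dx), axis=(0, 1))
    return (img1 + aligned) / 2.0

# Toy example: one illumination spot that moves by (+1, +2) pixels.
a = np.zeros((5, 5)); a[1, 1] = 10.0
b = np.zeros((5, 5)); b[2, 3] = 10.0
print(estimate_shift(a, b))                  # (1, 2)
print(merge_motion_compensated(a, b)[1, 1])  # 10.0
```

Merging after alignment preserves the full spot intensity at the feature location instead of smearing it across two positions, which is the usual benefit of motion-compensated averaging.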
  • Patent number: 11302062
    Abstract: The invention relates to a method for generating at least one merged perspective viewing image (24), which shows a motor vehicle (1) and its environmental region (4) from a dynamically variable perspective (P1, P2, P3) of a dynamic virtual camera (12) and which is determined based on raw images (25) of at least two cameras (5a, 5b, 5c, 5d) and based on a perspective model (17) of the motor vehicle (1), comprising the steps of: a) determining whether the merged perspective viewing image (24) comprises at least one disturbing signal afflicted image area, and if so, identifying the at least one disturbing signal afflicted image area; b) (S63) determining a severity of disturbing signals (27) within the at least one disturbing signal afflicted image area; c) (S61) determining a significance of the disturbing signals (27) in dependence on the perspective (P1, P2, P3) of the virtual camera (12); d) (S62) determining a degree of coverage of the disturbing signal afflicted image area by the model (17) of the motor ve
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: April 12, 2022
    Assignee: Connaught Electronics Ltd.
    Inventors: Huanqing Guo, Brian Michael Thomas Deegan, Vladimir Zlokolica
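Steps (a) through (d) of this abstract combine three quantities per disturbed image area: the severity of the disturbing signals, their significance for the current virtual-camera perspective, and the degree to which the vehicle model can cover the area. A hypothetical sketch of how such a decision might be combined — the weighting, the thresholds, and the function name are illustrative assumptions, since the abstract is truncated before the final steps:

```python
def should_mask_with_model(severity, significance, coverage,
                           disturb_thresh=0.5, coverage_thresh=0.8):
    """Hypothetical decision rule: hide a disturbing-signal afflicted
    image area behind the vehicle model only when the disturbance,
    weighted by its perspective-dependent significance, is high AND the
    model can actually cover enough of the area. All inputs in [0, 1]."""
    weighted_disturbance = severity * significance
    return weighted_disturbance > disturb_thresh and coverage > coverage_thresh

# Severe, significant disturbance that the model covers well -> mask it.
print(should_mask_with_model(0.9, 0.9, 0.95))  # True
# Same severity, but barely visible from this perspective -> leave it.
print(should_mask_with_model(0.9, 0.2, 0.95))  # False
```

Making significance depend on the virtual-camera perspective matters because an artifact that dominates one viewpoint may be nearly invisible from another, so a fixed severity threshold alone would over-mask.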
  • Publication number: 20200396394
    Abstract: The invention relates to a method for generating an output image with a predefined target view showing a motor vehicle (1) and an environmental region (4) of the motor vehicle (1) based on at least partially overlapping raw images (RC1, RC2, RC3, RC4) captured by at least two vehicle-side cameras (5a, 5b, 5c, 5d), comprising the steps of: specifying respective camera-specific pixel density maps (PDM1a, PDM1b, PDM2a, PDM2b), which each describe an image-region dependent distribution of a number of pixels of the raw image (R1 to R4) captured by the associated camera (5a to 5d) contributing for the generation of the output image, spatially adaptive filtering of the raw images (RC1 to RC4) based on the pixel density map (PDM1a to PDM2b) specific to the associated camera (5a to 5d), which indicates an image region-dependent extent of the filtering, identifying mutually corresponding image areas (B1a, B1b, B2a, B2b, B3a, B3b, B4a, B4b) in the at least partially overlapping raw images (RC1 to RC4) of the at least
    Type: Application
    Filed: October 10, 2018
    Publication date: December 17, 2020
    Applicant: Connaught Electronics Ltd.
    Inventors: Vladimir Zlokolica, Mark Patrick Griffin, Brian Michael Thomas Deegan, Barry Dever, John Maher
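The core of this abstract is spatially adaptive filtering driven by camera-specific pixel density maps: where many raw-image pixels collapse into one output pixel, stronger low-pass filtering suppresses aliasing; elsewhere the raw image stays sharp. A simplified Python sketch — blending between the raw and a blurred image by the density value is an illustrative stand-in for whatever filter the patent actually specifies:

```python
import numpy as np

def box_blur(img):
    """3x3 box blur with edge replication (helper for the sketch)."""
    padded = np.pad(img, 1, mode='edge')
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + h, dx:dx + w]
    return out / 9.0

def adaptive_filter(raw, density_map):
    """Hypothetical sketch: where the pixel density map indicates many
    raw pixels contribute to one output pixel (values near 1), blend
    toward a low-pass version; where density is low (near 0), keep the
    raw image unchanged. density_map values are assumed in [0, 1]."""
    blurred = box_blur(raw)
    return density_map * blurred + (1.0 - density_map) * raw
```

With an all-zero density map the output equals the raw image; with an all-one map it equals the blurred image, and intermediate values interpolate the filtering strength per pixel.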
  • Publication number: 20200151942
    Abstract: The invention relates to a method for generating at least one merged perspective viewing image (24), which shows a motor vehicle (1) and its environmental region (4) from a dynamically variable perspective (P1, P2, P3) of a dynamic virtual camera (12) and which is determined based on raw images (25) of at least two cameras (5a, 5b, 5c, 5d) and based on a perspective model (17) of the motor vehicle (1), comprising the steps of: a) determining whether the merged perspective viewing image (24) comprises at least one disturbing signal afflicted image area, and if so, identifying the at least one disturbing signal afflicted image area; b) (S63) determining a severity of disturbing signals (27) within the at least one disturbing signal afflicted image area; c) (S61) determining a significance of the disturbing signals (27) in dependence on the perspective (P1, P2, P3) of the virtual camera (12); d) (S62) determining a degree of coverage of the disturbing signal afflicted image area by the model (17) of the motor ve
    Type: Application
    Filed: June 25, 2018
    Publication date: May 14, 2020
    Applicant: Connaught Electronics Ltd.
    Inventors: Huanqing Guo, Brian Michael Thomas Deegan, Vladimir Zlokolica