Patents by Inventor Michael Slutsky
Michael Slutsky has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230401728
Abstract: A system for occlusion reconstruction in surround views using temporal information is provided. The system includes an active camera device generating image data describing a first view of an operating environment and a computerized visual data controller. The controller includes programming to analyze the image data to generate a three-dimensional computerized representation of the operating environment, utilize the image data and the representation of the operating environment to synthesize a virtual camera view of the operating environment from a desired viewpoint, and identify an occlusion in the virtual camera view. The controller further includes programming to utilize historical iterations of the image data to identify immobile objects within the operating environment and utilize the historical iterations to estimate filled information for the occlusion.
Type: Application
Filed: June 8, 2022
Publication date: December 14, 2023
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Yehonatan Mandel, Michael Slutsky
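The temporal-fill idea described in this abstract can be sketched in a few lines: keep older frames together with their visibility masks and, for static regions, copy the most recent previously visible value into pixels the current virtual view cannot see. This is an illustrative sketch, not the patented method; the function name and the assumption that historical frames are already aligned to the current view are mine.

```python
import numpy as np

def fill_occlusion(current, occlusion_mask, history):
    """Fill occluded pixels in a synthesized view using historical frames.

    current:        (H, W) current virtual-view image (grayscale for brevity)
    occlusion_mask: (H, W) bool, True where the current view is occluded
    history:        list of (frame, valid_mask) pairs from older iterations,
                    newest first; frames are assumed aligned (static scene)
    """
    filled = current.copy()
    still_missing = occlusion_mask.copy()
    for frame, valid in history:
        usable = still_missing & valid        # occluded now, visible then
        filled[usable] = frame[usable]
        still_missing &= ~usable
        if not still_missing.any():           # nothing left to fill
            break
    return filled, still_missing
```

Pixels that no historical frame ever saw remain flagged in the returned mask, so a caller can fall back to inpainting for them.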
-
Patent number: 11798139
Abstract: Systems and methods to perform noise-adaptive non-blind deblurring on an input image that includes blur and noise involve implementing a first neural network on the input image to obtain one or more parameters and performing regularized deconvolution to obtain a deblurred image from the input image. The regularized deconvolution uses the one or more parameters to control noise in the deblurred image. A method includes implementing a second neural network to remove artifacts from the deblurred image and provide an output image.
Type: Grant
Filed: November 17, 2020
Date of Patent: October 24, 2023
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventor: Michael Slutsky
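The regularized deconvolution at the core of this abstract can be illustrated with a standard Tikhonov-regularized (Wiener-like) inverse filter in the Fourier domain. Here the scalar `lam` stands in for the noise-control parameter that the patent's first neural network would predict; this is a minimal textbook sketch, not the patented pipeline.

```python
import numpy as np

def regularized_deconvolve(blurred, kernel, lam):
    """Tikhonov-regularized (Wiener-like) deconvolution in the Fourier domain.

    lam plays the role of the predicted noise-control parameter: a larger
    value suppresses noise amplification at the cost of residual blur.
    """
    H = np.fft.fft2(kernel, s=blurred.shape)   # zero-padded kernel transfer
    B = np.fft.fft2(blurred)
    X = np.conj(H) * B / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```

With `lam → 0` this degenerates to a plain inverse filter, which is exactly where noise amplification becomes catastrophic; choosing `lam` per image (or per region) is the "noise-adaptive" part.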
-
Patent number: 11770495
Abstract: Systems and methods for generating a virtual view of a virtual camera based on an input image are described. A system for generating a virtual view of a virtual camera based on an input image can include a capturing device including a physical camera and a depth sensor. The system also includes a controller configured to determine an actual pose of the capturing device; determine a desired pose of the virtual camera for showing the virtual view; define an epipolar geometry between the actual pose of the capturing device and the desired pose of the virtual camera; and generate a virtual image depicting objects within the input image according to the desired pose of the virtual camera for the virtual camera based on an epipolar relation between the actual pose of the capturing device, the input image, and the desired pose of the virtual camera.
Type: Grant
Filed: August 13, 2021
Date of Patent: September 26, 2023
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Michael Slutsky, Albert Shalumov
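The epipolar relation between the actual and virtual poses can be made concrete with the standard fundamental-matrix construction F = K2^(-T) [t]x R K1^(-1), which maps a pixel in one view to its epipolar line in the other. This is textbook multi-view geometry, not the patent's specific implementation; pose convention (world-to-camera) is an assumption.

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_matrix(K1, R1, t1, K2, R2, t2):
    """Fundamental matrix relating pixels of camera 1 (actual pose) to
    epipolar lines in camera 2 (virtual pose).
    Poses are world-to-camera: x_cam = R @ x_world + t."""
    R = R2 @ R1.T                 # relative rotation
    t = t2 - R @ t1               # relative translation
    E = skew(t) @ R               # essential matrix
    return np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)
```

Any world point X projected into both cameras satisfies p2ᵀ F p1 = 0, which is the constraint a view-synthesis step can exploit to restrict its pixel search to epipolar lines.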
-
Patent number: 11748936
Abstract: Systems and methods for generating a virtual view of a virtual camera based on an input scene are described. A capturing device typically includes a physical camera and a depth sensor and captures an input scene. A controller determines an actual pose of the capturing device and a desired pose of the virtual camera for showing the virtual view. The controller defines an epipolar geometry between the actual pose of the capturing device and the desired pose of the virtual camera. The controller generates an output image for the virtual camera based on an epipolar relation between the actual pose of the capturing device, the input scene, and the desired pose of the virtual camera.
Type: Grant
Filed: March 2, 2021
Date of Patent: September 5, 2023
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Michael Slutsky, Albert Shalumov
-
Patent number: 11691566
Abstract: Presented are intelligent vehicle systems with networked on-body vehicle cameras with camera-view augmentation capabilities, methods for making/using such systems, and vehicles equipped with such systems. A method for operating a motor vehicle includes a system controller receiving, from a network of vehicle-mounted cameras, camera image data containing a target object from a perspective of one or more cameras. The controller analyzes the camera image to identify characteristics of the target object and classify these characteristics to a corresponding model collection set associated with the type of target object. The controller then identifies a 3D object model assigned to the model collection set associated with the target object type. A new “virtual” image is generated by replacing the target object with the 3D object model positioned in a new orientation. The controller commands a resident vehicle system to execute a control operation using the new image.
Type: Grant
Filed: November 17, 2021
Date of Patent: July 4, 2023
Assignee: GM Global Technology Operations LLC
Inventors: Michael Slutsky, Albert Shalumov
-
Patent number: 11689820
Abstract: A vehicle control system for automated driver-assistance includes a low-resolution color camera that captures a color image of a field of view at a first resolution. The vehicle control system further includes a high-resolution scanning camera that captures a plurality of grayscale images, each of the grayscale images at a second resolution. The second resolution is higher than the first resolution. The grayscale images encompass the same field of view as the color image. The vehicle control system further includes one or more processors that perform a method that includes overlaying the plurality of grayscale images over the color image. The method further includes correcting motion distortion of one or more objects in the grayscale images. The method further includes generating a high-resolution output image by assigning color values to one or more pixels in the grayscale images based on the color image using a trained neural network.
Type: Grant
Filed: January 6, 2021
Date of Patent: June 27, 2023
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Tzvi Philipp, Michael Slutsky
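The color-assignment step can be illustrated with a simple non-learned stand-in for the patent's trained network: upsample the low-resolution color image, then rescale its RGB at each pixel so its luma matches the high-resolution grayscale value. The function name, the nearest-neighbour upsampling, and the Rec.601 luma weights are my illustrative choices.

```python
import numpy as np

def colorize(gray_hr, color_lr):
    """Assign colour to a high-res grayscale image from a low-res colour image.

    Non-learned baseline: nearest-neighbour upsample the colour image, then
    scale its RGB so its Rec.601 luma matches the grayscale at every pixel.
    """
    fy = gray_hr.shape[0] // color_lr.shape[0]
    fx = gray_hr.shape[1] // color_lr.shape[1]
    up = color_lr.repeat(fy, axis=0).repeat(fx, axis=1)   # (H, W, 3)
    luma = up @ np.array([0.299, 0.587, 0.114])           # Rec.601 weights
    scale = gray_hr / np.maximum(luma, 1e-6)              # avoid divide-by-zero
    return np.clip(up * scale[..., None], 0.0, 1.0)
```

By construction the output keeps the high-resolution camera's brightness detail while borrowing chromaticity from the low-resolution sensor, which is the division of labour the abstract describes; the trained network in the patent presumably handles edges and misalignment far better than this per-pixel rescale.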
-
Publication number: 20230196790
Abstract: A visual perception system includes a scanning camera, color sensor with color filter array (CFA), and a classifier node. The camera captures full color pixel images of a target object, e.g., a traffic light, and processes the pixel images through a narrow band pass filter (BPF), such that the narrow BPF outputs monochromatic images of the target object. The color sensor and CFA receive the monochromatic images. The color sensor has at least three color channels each corresponding to different colors of spectral data in the monochromatic images. The classifier node uses a predetermined classification decision tree to classify constituent pixels of the monochromatic images into different color bins as a corresponding color of interest. The color of interest may be used to perform a control action, e.g., via an automated driver assist system (ADAS) control unit or an indicator device.
Type: Application
Filed: December 21, 2021
Publication date: June 22, 2023
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Tzvi Philipp, Eran Kishon, Michael Slutsky
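A classification decision tree over three colour-channel responses can be as small as the toy sketch below, which bins a pixel into traffic-light colours. The thresholds, bin names, and tree shape are illustrative assumptions, not taken from the patent's predetermined tree.

```python
def classify_pixel(r, g, b, t=0.4):
    """Toy decision tree binning a pixel's three colour-channel responses
    (each in 0..1) into traffic-light colour bins.
    Thresholds are illustrative, not from the patent."""
    if r > t and g > t:     # strong red and green together reads as yellow
        return "yellow"
    if r > t:
        return "red"
    if g > t:
        return "green"
    return "other"          # background / unlit
```

A real deployment would derive the thresholds from the sensor's measured spectral response through the narrow band-pass filter rather than hard-coding them.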
-
Patent number: 11683458
Abstract: Systems and methods for projecting a multi-faceted image onto a convex polyhedron based on an input image are described. A system can include a controller configured to determine a mapping between pixels within a wide-angle image and a multi-faceted image, and generate the multi-faceted image based on the mapping.
Type: Grant
Filed: July 29, 2021
Date of Patent: June 20, 2023
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Michael Slutsky, Albert Shalumov
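The pixel mapping onto a convex polyhedron can be illustrated with the simplest such target, a cube: each viewing direction selects a face by its dominant axis and projects to in-face coordinates. This is the standard cubemap construction, shown here as a sketch rather than the patent's specific polyhedron or mapping.

```python
def direction_to_cube_face(d):
    """Map a 3D viewing direction onto a cube face plus in-face coordinates,
    the simplest convex-polyhedron target.
    Returns (face, u, v) with u, v in [-1, 1]."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:            # x-axis dominates
        face = "+x" if x > 0 else "-x"
        u, v = y / ax, z / ax
    elif ay >= az:                       # y-axis dominates
        face = "+y" if y > 0 else "-y"
        u, v = x / ay, z / ay
    else:                                # z-axis dominates
        face = "+z" if z > 0 else "-z"
        u, v = x / az, y / az
    return face, u, v
```

Running this per pixel of a wide-angle image (after back-projecting each pixel to its viewing direction) yields exactly the kind of pixel-to-facet mapping the abstract describes.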
-
Publication number: 20230150429
Abstract: Presented are intelligent vehicle systems with networked on-body vehicle cameras with camera-view augmentation capabilities, methods for making/using such systems, and vehicles equipped with such systems. A method for operating a motor vehicle includes a system controller receiving, from a network of vehicle-mounted cameras, camera image data containing a target object from a perspective of one or more cameras. The controller analyzes the camera image to identify characteristics of the target object and classify these characteristics to a corresponding model collection set associated with the type of target object. The controller then identifies a 3D object model assigned to the model collection set associated with the target object type. A new “virtual” image is generated by replacing the target object with the 3D object model positioned in a new orientation. The controller commands a resident vehicle system to execute a control operation using the new image.
Type: Application
Filed: November 17, 2021
Publication date: May 18, 2023
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Michael Slutsky, Albert Shalumov
-
Publication number: 20230050264
Abstract: Systems and methods for generating a virtual view of a virtual camera based on an input image are described. A system for generating a virtual view of a virtual camera based on an input image can include a capturing device including a physical camera and a depth sensor. The system also includes a controller configured to determine an actual pose of the capturing device; determine a desired pose of the virtual camera for showing the virtual view; define an epipolar geometry between the actual pose of the capturing device and the desired pose of the virtual camera; and generate a virtual image depicting objects within the input image according to the desired pose of the virtual camera for the virtual camera based on an epipolar relation between the actual pose of the capturing device, the input image, and the desired pose of the virtual camera.
Type: Application
Filed: August 13, 2021
Publication date: February 16, 2023
Inventors: Michael Slutsky, Albert Shalumov
-
Publication number: 20230031894
Abstract: Systems and methods for projecting a multi-faceted image onto a convex polyhedron based on an input image are described. A system can include a controller configured to determine a mapping between pixels within a wide-angle image and a multi-faceted image, and generate the multi-faceted image based on the mapping.
Type: Application
Filed: July 29, 2021
Publication date: February 2, 2023
Inventors: Michael Slutsky, Albert Shalumov
-
Patent number: 11532165
Abstract: In various embodiments, methods and systems are provided for processing camera data from a camera system associated with a vehicle. In one embodiment, a method includes: storing a plurality of photorealistic scenes of an environment; training, by a processor, a machine learning model to produce a surround view approximating a ground truth surround view using the plurality of photorealistic scenes as training data; and processing, by a processor, the camera data from the camera system associated with the vehicle based on the trained machine learning model to produce a surround view of an environment of the vehicle.
Type: Grant
Filed: February 26, 2020
Date of Patent: December 20, 2022
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Michael Slutsky, Daniel Kigli
-
Publication number: 20220292289
Abstract: Methods and system for training a neural network for depth estimation in a vehicle. The methods and systems receive respective training image data from at least two cameras. Fields of view of adjacent cameras of the at least two cameras partially overlap. The respective training image data is processed through a neural network providing depth data and semantic segmentation data as outputs. The neural network is trained based on a loss function. The loss function combines a plurality of loss terms including at least a semantic segmentation loss term and a panoramic loss term. The panoramic loss term includes a similarity measure regarding overlapping image patches of the respective image data that each correspond to a region of overlapping fields of view of the adjacent cameras. The semantic segmentation loss term quantifies a difference between ground truth semantic segmentation data and the semantic segmentation data output from the neural network.
Type: Application
Filed: March 11, 2021
Publication date: September 15, 2022
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Albert Shalumov, Michael Slutsky
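The combined loss described in this abstract can be sketched as cross-entropy on the segmentation output plus a "panoramic" term penalising disagreement between predictions in the overlap region of adjacent cameras. The L1 similarity measure and the assumption that the overlapping patches are already warped into a common frame are my simplifications, not the patent's definitions.

```python
import numpy as np

def combined_loss(seg_logits, seg_gt, depth_a, depth_b, overlap_w=1.0):
    """Sketch of a combined training loss: cross-entropy between predicted
    segmentation and ground truth, plus a panoramic term penalising
    disagreement between depth patches covering the same overlap region
    seen by two adjacent cameras (assumed pre-warped to a common frame)."""
    # pixel-wise cross-entropy over the class axis (last axis)
    p = np.exp(seg_logits - seg_logits.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    picked = np.take_along_axis(p, seg_gt[..., None], axis=-1)
    ce = -np.mean(np.log(picked + 1e-12))
    # panoramic similarity term: L1 distance between overlapping patches
    pano = np.mean(np.abs(depth_a - depth_b))
    return ce + overlap_w * pano
```

In training, the panoramic term ties the per-camera depth heads together so the stitched surround estimate stays consistent across camera seams.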
-
Publication number: 20220284221
Abstract: Systems and methods for generating a virtual view of a scene captured by a physical camera are described. The physical camera captures an input image with multiple pixels. A desired pose of a virtual camera for showing the virtual view is set. The actual pose of the physical camera is determined, and an epipolar geometry between the actual pose of the physical camera and the desired pose of the virtual camera is defined. The input image and depth data of the pixels of the input image are resampled in epipolar coordinates. A controller performs disparity estimation of the pixels of the input image and a deep neural network, DNN, corrects disparity artifacts in the output image for the desired pose of the virtual camera. The complexity of correcting disparity artifacts in the output image by a DNN is reduced by using epipolar geometry.
Type: Application
Filed: March 2, 2021
Publication date: September 8, 2022
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Michael Slutsky, Albert Shalumov
-
Publication number: 20220284660
Abstract: Systems and methods for generating a virtual view of a virtual camera based on an input scene are described. A capturing device typically includes a physical camera and a depth sensor and captures an input scene. A controller determines an actual pose of the capturing device and a desired pose of the virtual camera for showing the virtual view. The controller defines an epipolar geometry between the actual pose of the capturing device and the desired pose of the virtual camera. The controller generates an output image for the virtual camera based on an epipolar relation between the actual pose of the capturing device, the input scene, and the desired pose of the virtual camera.
Type: Application
Filed: March 2, 2021
Publication date: September 8, 2022
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Michael Slutsky, Albert Shalumov
-
Publication number: 20220217272
Abstract: A vehicle control system for automated driver-assistance includes a low-resolution color camera that captures a color image of a field of view at a first resolution. The vehicle control system further includes a high-resolution scanning camera that captures a plurality of grayscale images, each of the grayscale images at a second resolution. The second resolution is higher than the first resolution. The grayscale images encompass the same field of view as the color image. The vehicle control system further includes one or more processors that perform a method that includes overlaying the plurality of grayscale images over the color image. The method further includes correcting motion distortion of one or more objects in the grayscale images. The method further includes generating a high-resolution output image by assigning color values to one or more pixels in the grayscale images based on the color image using a trained neural network.
Type: Application
Filed: January 6, 2021
Publication date: July 7, 2022
Inventors: Tzvi Philipp, Michael Slutsky
-
Patent number: 11372424
Abstract: A vehicle, navigation system of the vehicle, and method for observing an environment of the vehicle. The navigation system includes a sensor and a processor. The sensor obtains a detection from an object located in an environment of the vehicle at a with-detection direction with respect to the sensor that includes the detection. The processor determines a no-detection direction that does not include the detection, assigns an empty space likelihood to an occupancy grid of the sensor along the no-detection direction, and generates a map of the environment using the occupancy grid.
Type: Grant
Filed: May 10, 2018
Date of Patent: June 28, 2022
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Michael Slutsky, Daniel I. Dobkin
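Assigning an empty-space likelihood along a sensor direction is the classic log-odds occupancy-grid update: cells the ray traverses without a detection become more likely empty, and the end cell becomes more likely occupied when the ray ended in a detection. The update values and the integer line-stepping below are illustrative, not the patent's.

```python
import numpy as np

def update_ray(grid, origin, target, hit, l_occ=0.85, l_free=-0.4):
    """Log-odds occupancy update along one sensor ray on an integer grid.

    Cells between origin and target receive the empty-space likelihood
    l_free; the end cell receives l_occ when the ray ended in a detection
    (hit=True). Uses simple integer line stepping (Bresenham-style)."""
    x0, y0 = origin
    x1, y1 = target
    n = max(abs(x1 - x0), abs(y1 - y0))
    for i in range(n):                       # cells before the end point
        x = x0 + round(i * (x1 - x0) / n)
        y = y0 + round(i * (y1 - y0) / n)
        grid[x, y] += l_free
    if hit:
        grid[x1, y1] += l_occ
    return grid
```

Repeating this for every ray in a scan, then thresholding the accumulated log-odds, yields the environment map the abstract refers to.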
-
Patent number: 11354784
Abstract: Methods and systems for training a non-blind deblurring module are disclosed. Unblurred test images and blurred test images are received, wherein each of the blurred test images is related to a corresponding one of the unblurred test images by a blur kernel term and a noise term. A regularized deconvolution sub-module and a convolutional neural network are jointly trained by adjusting a regularization parameter of a regularized deconvolution function and weights of a convolution neural network in order to minimize a cost function representative of a difference between each deblurred output image and a corresponding one of the unblurred test images.
Type: Grant
Filed: March 2, 2020
Date of Patent: June 7, 2022
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventor: Michael Slutsky
-
Publication number: 20220156892
Abstract: Systems and methods to perform noise-adaptive non-blind deblurring on an input image that includes blur and noise involve implementing a first neural network on the input image to obtain one or more parameters and performing regularized deconvolution to obtain a deblurred image from the input image. The regularized deconvolution uses the one or more parameters to control noise in the deblurred image. A method includes implementing a second neural network to remove artifacts from the deblurred image and provide an output image.
Type: Application
Filed: November 17, 2020
Publication date: May 19, 2022
Inventor: Michael Slutsky
-
Patent number: 11168985
Abstract: A vehicle pose determining system and method for accurately estimating the pose of a vehicle (i.e., the location and/or orientation of a vehicle). The system and method use a form of sensor fusion, where output from vehicle dynamics sensors (e.g., accelerometers, gyroscopes, encoders, etc.) is used with output from vehicle radar sensors to improve the accuracy of the vehicle pose data. Uncorrected vehicle pose data derived from dynamics sensor data is compensated with correction data that is derived from occupancy grids that are based on radar sensor data. The occupancy grids, which are 2D or 3D mathematical objects that are somewhat like radar-based maps, must correspond to the same geographic location. The system and method use mathematical techniques (e.g., cost functions) to rotate and shift multiple occupancy grids until a best fit solution is determined, and the best fit solution is then used to derive the correction data that, in turn, improves the accuracy of the vehicle pose data.
Type: Grant
Filed: April 1, 2019
Date of Patent: November 9, 2021
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Michael Slutsky, Daniel I. Dobkin
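The rotate-and-shift best-fit search over occupancy grids can be sketched as a brute-force cost minimisation: try candidate offsets and keep the one with the lowest mismatch. This sketch searches integer shifts only with a sum-of-squared-differences cost (the patent also searches rotations and leaves the cost function open); the wrap-around handling via `np.roll` is a simplification.

```python
import numpy as np

def best_shift(ref, grid, max_shift=3):
    """Brute-force registration of two occupancy grids: try integer shifts
    and keep the one minimising the mean squared difference.
    Rotation search is omitted for brevity; np.roll wraps at the borders."""
    best, best_cost = (0, 0), np.inf
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
            cost = np.mean((ref - shifted) ** 2)
            if cost < best_cost:
                best, best_cost = (dx, dy), cost
    return best, best_cost
```

The winning offset is the correction that, applied to the dynamics-derived pose, aligns the new radar grid with the reference grid, which is the role the "best fit solution" plays in the abstract.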