Patents by Inventor Florin Cutu
Florin Cutu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11846940
Abstract: A drone is deployed from a vehicle, e.g., an autonomous or semi-autonomous vehicle, to assist in vehicle control, e.g., in situations in which the vehicle's embedded sensors may not provide sufficient information to perform a desired operation safely, such as backing up, parking in a tight environment, traversing a very narrow road, navigating a sharp corner, or bypassing an obstruction. The deployed drone includes sensors, e.g., cameras, radars, and LIDARs, which capture sensor data from a position offset from the vehicle. Captured sensor data is communicated from the drone to a vehicle control system in the vehicle and/or to a remote control system, e.g., one including an operator who can make decisions. Based on the captured sensor data, which supplements sensor data collected by the vehicle's embedded sensors, vehicle movement is controlled.
Type: Grant
Filed: August 31, 2020
Date of Patent: December 19, 2023
Assignee: Deere & Company
Inventor: Florin Cutu
-
Patent number: 11575843
Abstract: Image sensor modules include primary high-resolution imagers and secondary imagers. For example, an image sensor module may include a semiconductor chip including photosensitive regions defining, respectively, a primary camera and a secondary camera. The image sensor module may include an optical assembly that does not substantially obstruct the field-of-view of the secondary camera. Some modules include multiple secondary cameras that have a field-of-view at least as large as the field-of-view of the primary camera. Various features are described to facilitate acquisition of signals that can be used to calculate depth information.
Type: Grant
Filed: September 4, 2020
Date of Patent: February 7, 2023
Assignee: ams Sensors Singapore Pte. Ltd.
Inventors: Jukka Alasirniö, Tobias Senn, Ohad Meitav, Moshe Doron, Alireza Yasan, Mario Cesana, Florin Cutu, Hartmut Rudmann, Markus Rossi, Peter Roentgen, Daniel Perez Calero, Bassam Hallal, Jens Geiger
-
Publication number: 20210064020
Abstract: A drone is deployed from a vehicle, e.g., an autonomous or semi-autonomous vehicle, to assist in vehicle control, e.g., in situations in which the vehicle's embedded sensors may not provide sufficient information to perform a desired operation safely, such as backing up, parking in a tight environment, traversing a very narrow road, navigating a sharp corner, or bypassing an obstruction. The deployed drone includes sensors, e.g., cameras, radars, and LIDARs, which capture sensor data from a position offset from the vehicle. Captured sensor data is communicated from the drone to a vehicle control system in the vehicle and/or to a remote control system, e.g., one including an operator who can make decisions. Based on the captured sensor data, which supplements sensor data collected by the vehicle's embedded sensors, vehicle movement is controlled.
Type: Application
Filed: August 31, 2020
Publication date: March 4, 2021
Inventor: Florin Cutu
-
Publication number: 20210014429
Abstract: Image sensor modules include primary high-resolution imagers and secondary imagers. For example, an image sensor module may include a semiconductor chip including photosensitive regions defining, respectively, a primary camera and a secondary camera. The image sensor module may include an optical assembly that does not substantially obstruct the field-of-view of the secondary camera. Some modules include multiple secondary cameras that have a field-of-view at least as large as the field-of-view of the primary camera. Various features are described to facilitate acquisition of signals that can be used to calculate depth information.
Type: Application
Filed: September 4, 2020
Publication date: January 14, 2021
Inventors: Jukka Alasirniö, Tobias Senn, Ohad Meitav, Moshe Doron, Alireza Yasan, Mario Cesana, Florin Cutu, Hartmut Rudmann, Markus Rossi, Peter Roentgen, Daniel Perez Calero, Bassam Hallal, Jens Geiger
-
Publication number: 20200320725
Abstract: An optoelectronic system for collecting three-dimensional data of a scene over a minimum and maximum distance includes illumination modules, each of which is operable to generate a respective light pattern having a respective period. An imaging module is operable to collect a scene-reflected portion of each of the light patterns and is further operable to convert each collected portion into a respective signal set. Each scene-reflected portion is characterized by a respective plurality of ambiguity values and a minimum disparity value, and each signal set corresponds to a respective one of the light patterns. The system includes a processor and a non-transitory computer-readable medium comprising instructions stored thereon that, when executed by the processor, cause the processor to perform operations for determining a plurality of candidate three-dimensional data sets. Each candidate three-dimensional data set is determined from a respective one of the signal sets.
Type: Application
Filed: September 25, 2018
Publication date: October 8, 2020
Inventors: Markus Rossi, Florin Cutu
-
Patent number: 10771714
Abstract: Image sensor modules include primary high-resolution imagers and secondary imagers. For example, an image sensor module may include a semiconductor chip including photosensitive regions defining, respectively, a primary camera and a secondary camera. The image sensor module may include an optical assembly that does not substantially obstruct the field-of-view of the secondary camera. Some modules include multiple secondary cameras that have a field-of-view at least as large as the field-of-view of the primary camera. Various features are described to facilitate acquisition of signals that can be used to calculate depth information.
Type: Grant
Filed: February 23, 2015
Date of Patent: September 8, 2020
Assignee: ams Sensors Singapore Pte. Ltd.
Inventors: Jukka Alasirniö, Tobias Senn, Ohad Meitav, Moshe Doron, Alireza Yasan, Mario Cesana, Florin Cutu, Hartmut Rudmann, Markus Rossi, Peter Roentgen, Daniel Perez Calero, Bassam Hallal, Jens Geiger
-
Patent number: 10699476
Abstract: Presenting a merged, fused three-dimensional point cloud includes acquiring multiple sets of images of a scene from different vantage points, each set of images including respective stereo-matched images and a color image. For each respective set of images, a disparity map based on the stereo images is obtained, data from the color image is fused onto the disparity map so as to generate a fused disparity map, and a three-dimensional fused point cloud is created from the fused disparity map. The respective three-dimensional fused point clouds are merged together so as to obtain a merged, fused three-dimensional point cloud. The techniques can be advantageous even under the constraints of sparseness and low depth resolution, and are suitable, in some cases, for real-time or near-real-time applications in which computing time needs to be reduced.
Type: Grant
Filed: August 4, 2016
Date of Patent: June 30, 2020
Assignee: ams Sensors Singapore Pte. Ltd.
Inventors: Chi Zhang, Xin Liu, Florin Cutu
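The fuse-and-merge pipeline described in the abstract above can be sketched in a few lines of Python. This is a hypothetical illustration only: the function names, the pinhole-camera back-projection model (depth = focal × baseline / disparity), and the camera parameters are assumptions, not the patented implementation.

```python
def fuse_and_backproject(disparity, color, focal, baseline, cx, cy):
    """Turn one view's disparity map plus color image into a colored 3D point cloud."""
    cloud = []
    for v, row in enumerate(disparity):
        for u, d in enumerate(row):
            if d is None or d <= 0:          # sparse map: skip missing disparities
                continue
            z = focal * baseline / d          # depth from stereo disparity
            x = (u - cx) * z / focal          # back-project pixel (u, v) to 3D
            y = (v - cy) * z / focal
            cloud.append(((x, y, z), color[v][u]))  # fuse color onto the 3D point
    return cloud

def merge_clouds(clouds):
    """Merge the per-vantage-point fused clouds into one point cloud."""
    merged = []
    for cloud in clouds:
        merged.extend(cloud)
    return merged
```

In a real system each per-view cloud would first be transformed into a common world frame using the known pose of that vantage point before merging; that step is omitted here for brevity.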
-
Patent number: 10672137
Abstract: This disclosure describes systems and techniques for generating a disparity map having reduced over-smoothing. To achieve the reduction in over-smoothing that might otherwise occur, a third image is captured in addition to the stereo reference and search images. Information from the third image, which may be of the same or a different type than the stereo images (e.g., RGB, grey-scale, infra-red (IR)) and which may have the same or a different resolution than the stereo images, can help better define the object edges.
Type: Grant
Filed: August 19, 2016
Date of Patent: June 2, 2020
Assignee: ams Sensors Singapore Pte. Ltd.
Inventor: Florin Cutu
-
Patent number: 10663691
Abstract: The present disclosure describes imaging techniques and devices having improved autofocus capabilities. The imaging techniques can include actively illuminating a scene and determining distances over the entire scene, so that a respective distance to each object or point in the scene can be determined. Thus, distances to all objects in a scene (within a particular range) at any given instant can be stored. A preview of the image can be displayed so as to allow a user to select a region of interest in the scene. In response to the user's selection, the imager's optical assembly can be adjusted automatically, for example, to a position that corresponds to optimal image capture of objects at the particular distance of the selected region of the scene.
Type: Grant
Filed: March 6, 2019
Date of Patent: May 26, 2020
Assignee: ams Sensors Singapore Pte. Ltd.
Inventors: Christian Tang-Jespersen, Michael Kiy, Miguel Bruno Vaello Paños, Florin Cutu, James Patrick Long, Hartmut Rudmann
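The select-then-focus flow described above can be sketched as follows. This is a hypothetical illustration under assumed names: a stored per-pixel distance map, a user-selected region, and a calibration table mapping lens positions to in-focus distances (the patent does not specify these data structures).

```python
def autofocus_position(distance_map, region, lens_positions):
    """Pick the lens position whose calibrated focus distance best matches
    the median stored distance inside the user-selected region.

    distance_map   -- 2D grid of distances (None where no distance was measured)
    region         -- ((row0, col0), (row1, col1)) selected on the preview
    lens_positions -- dict: lens position -> in-focus distance (from calibration)
    """
    (r0, c0), (r1, c1) = region
    dists = [distance_map[r][c]
             for r in range(r0, r1)
             for c in range(c0, c1)
             if distance_map[r][c] is not None]
    if not dists:
        return None                           # nothing measurable in the region
    dists.sort()
    target = dists[len(dists) // 2]           # median distance of the region
    return min(lens_positions, key=lambda p: abs(lens_positions[p] - target))
```

Using the median rather than the mean keeps a few background pixels inside the selection from pulling focus away from the intended object.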
-
Patent number: 10510149
Abstract: Techniques are described for generating a distance map (e.g., a map of disparity, depth, or other distance values) for image elements (e.g., pixels) of an image capture device. The distance map is generated based on an initial distance map (obtained, e.g., using a block- or code-matching algorithm) and a segmentation map (obtained using a segmentation algorithm). In some instances, the resulting distance map can be less sparse than the initial distance map, can contain more accurate distance values, and can be generated sufficiently fast for real-time or near-real-time applications. The resulting distance map can be converted, for example, to a color-coded distance map of a scene that is presented on a display device.
Type: Grant
Filed: July 8, 2016
Date of Patent: December 17, 2019
Assignee: ams Sensors Singapore Pte. Ltd.
Inventors: Florin Cutu, Alireza Yasan, Xin Liu
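One plausible way to combine an initial sparse distance map with a segmentation map, as the abstract describes, is to propagate distances within segments. The sketch below is an assumption for illustration (filling each segment with the median of its valid initial distances), not the specific method claimed in the patent.

```python
from statistics import median

def refine_distance_map(initial, segmentation):
    """Densify a sparse distance map using a segmentation map: every pixel
    takes the median of the valid initial distances found in its segment.

    initial      -- 2D grid of distances, None where block matching failed
    segmentation -- 2D grid of segment labels, same shape as `initial`
    """
    # collect the valid initial distances per segment label
    per_segment = {}
    for row_d, row_s in zip(initial, segmentation):
        for d, s in zip(row_d, row_s):
            if d is not None:
                per_segment.setdefault(s, []).append(d)
    # one robust distance per segment; segments with no data stay None
    seg_value = {s: median(v) for s, v in per_segment.items()}
    return [[seg_value.get(s) for s in row_s] for row_s in segmentation]
```

The result is less sparse than the input, as the abstract notes: every pixel in a segment that contained at least one matched pixel now carries a distance.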
-
Patent number: 10481740
Abstract: The present disclosure describes projecting a structured light pattern onto a surface and detecting and responding to interactions with it. A method includes acquiring an image based on light reflected from a vicinity of the projection surface, and identifying regions of the acquired image that correspond to a feature within a specified distance of the projection surface by identifying regions of the acquired image for which intensity data differs relative to other regions of the acquired image and which fit a specified homographic relationship with respect to corresponding regions of a reference image.
Type: Grant
Filed: July 27, 2017
Date of Patent: November 19, 2019
Assignee: ams Sensors Singapore Pte. Ltd.
Inventors: Florin Cutu, Chi Zhang
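The homography-fit test in the abstract above rests on a standard fact: points lying on the projection surface map between the acquired and reference images through a single planar homography, while points off the plane do not. A minimal sketch, with assumed function names and a hypothetical pixel tolerance:

```python
def apply_homography(H, pt):
    """Map point (x, y) through a 3x3 homography H (row-major nested lists)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def fits_surface_homography(H, acquired_pts, reference_pts, tol=2.0):
    """True if every acquired point, warped by the surface homography H,
    lands within `tol` pixels of its corresponding reference point --
    i.e., the region is close to the projection surface."""
    for a, r in zip(acquired_pts, reference_pts):
        ax, ay = apply_homography(H, a)
        if ((ax - r[0]) ** 2 + (ay - r[1]) ** 2) ** 0.5 > tol:
            return False            # off-plane feature: homography does not fit
    return True
```

A fingertip touching the surface satisfies the relationship; a hand hovering above it produces residuals that exceed the tolerance.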
-
Patent number: 10474297
Abstract: The present disclosure describes projecting a structured light pattern onto a surface and detecting and responding to interactions with it. The techniques described here can, in some cases, facilitate recognizing that an object such as a user's hand is adjacent to the plane of a projection surface and can distinguish the object from the projection surface itself. Movement of the object can then be interpreted, for example, as a specified type of gesture that triggers a specified type of operation.
Type: Grant
Filed: July 20, 2017
Date of Patent: November 12, 2019
Assignee: ams Sensors Singapore Pte. Ltd.
Inventors: Chi Zhang, Florin Cutu
-
Publication number: 20190271828
Abstract: The present disclosure describes imaging techniques and devices having improved autofocus capabilities. The imaging techniques can include actively illuminating a scene and determining distances over the entire scene, so that a respective distance to each object or point in the scene can be determined. Thus, distances to all objects in a scene (within a particular range) at any given instant can be stored. A preview of the image can be displayed so as to allow a user to select a region of interest in the scene. In response to the user's selection, the imager's optical assembly can be adjusted automatically, for example, to a position that corresponds to optimal image capture of objects at the particular distance of the selected region of the scene.
Type: Application
Filed: March 6, 2019
Publication date: September 5, 2019
Applicant: ams Sensors Singapore Pte. Ltd.
Inventors: Christian Tang-Jespersen, Michael Kiy, Miguel Bruno Vaello Paños, Florin Cutu, James Patrick Long, Hartmut Rudmann
-
Patent number: 10261287
Abstract: The present disclosure describes imaging techniques and devices having improved autofocus capabilities. The imaging techniques can include actively illuminating a scene and determining distances over the entire scene, so that a respective distance to each object or point in the scene can be determined. Thus, distances to all objects in a scene (within a particular range) at any given instant can be stored. A preview of the image can be displayed so as to allow a user to select a region of interest in the scene. In response to the user's selection, the imager's optical assembly can be adjusted automatically, for example, to a position that corresponds to optimal image capture of objects at the particular distance of the selected region of the scene.
Type: Grant
Filed: September 13, 2016
Date of Patent: April 16, 2019
Assignee: ams Sensors Singapore Pte. Ltd.
Inventors: Christian Tang-Jespersen, Michael Kiy, Miguel Bruno Vaello Paños, Florin Cutu, James Patrick Long, Hartmut Rudmann
-
Publication number: 20190012789
Abstract: First and second stereo images are acquired. The first image is partitioned into multiple segments, wherein each segment consists of image elements that share one or more characteristics in common. A segmentation map is generated in which each of the image elements is associated with the corresponding segment to which it belongs. A respective disparity value is determined for each of the segments with respect to a corresponding portion of the second image, and the disparity value determined for each particular segment is assigned to at least one image element that belongs to that segment. A disparity map indicative of the assigned disparity values can then be generated. Generating the disparity map in this manner can, in some instances, help reduce edge and/or feature thickening.
Type: Application
Filed: July 13, 2016
Publication date: January 10, 2019
Inventor: Florin Cutu
-
Patent number: 10147167
Abstract: Generating a super-resolved reconstructed image includes acquiring a multitude of monochromatic images of a scene and extracting high-frequency-band luma components from the acquired images. A high-resolution luma image is generated using the high-frequency components and motion data for the acquired images. The high-resolution luma image is combined with an up-sampled color image, generated from the acquired images, to generate a super-resolved reconstructed color image of the scene.
Type: Grant
Filed: November 14, 2016
Date of Patent: December 4, 2018
Assignee: Heptagon Micro Optics Pte. Ltd.
Inventors: Florin Cutu, James Eilertsen
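The combining step in the abstract, adding high-frequency luma detail back onto an up-sampled color image, can be illustrated with toy operators. Everything here is an assumption for illustration (nearest-neighbor upsampling, a crude row-mean high-pass, and a gain parameter); the patent's actual multi-frame, motion-compensated reconstruction is not reproduced.

```python
def upsample2x(img):
    """Nearest-neighbor 2x upsample of a 2D grid (stand-in for the
    up-sampled color image in the pipeline)."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]   # duplicate each column
        out.append(wide)
        out.append(list(wide))                    # duplicate each row
    return out

def high_frequency(img):
    """Crude high-pass filter: each pixel minus its row mean, standing in
    for the extracted high-frequency-band luma components."""
    return [[v - sum(row) / len(row) for v in row] for row in img]

def combine(luma_hf, upsampled_color, gain=1.0):
    """Add high-frequency luma detail onto the up-sampled image to form
    the super-resolved reconstruction."""
    return [[c + gain * h for c, h in zip(rc, rh)]
            for rc, rh in zip(upsampled_color, luma_hf)]
```

The design point the abstract relies on is that the eye is more sensitive to luma detail than to chroma detail, so sharpening only the luma channel recovers most of the perceived resolution.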
-
Publication number: 20180252894
Abstract: The present disclosure describes imaging techniques and devices having improved autofocus capabilities. The imaging techniques can include actively illuminating a scene and determining distances over the entire scene, so that a respective distance to each object or point in the scene can be determined. Thus, distances to all objects in a scene (within a particular range) at any given instant can be stored. A preview of the image can be displayed so as to allow a user to select a region of interest in the scene. In response to the user's selection, the imager's optical assembly can be adjusted automatically, for example, to a position that corresponds to optimal image capture of objects at the particular distance of the selected region of the scene.
Type: Application
Filed: September 13, 2016
Publication date: September 6, 2018
Inventors: Christian Tang-Jespersen, Michael Kiy, Miguel Bruno Vaello Paños, Florin Cutu, James Patrick Long, Hartmut Rudmann
-
Publication number: 20180240247
Abstract: This disclosure describes systems and techniques for generating a disparity map having reduced over-smoothing. To achieve the reduction in over-smoothing that might otherwise occur, a third image is captured in addition to the stereo reference and search images. Information from the third image, which may be of the same or a different type than the stereo images (e.g., RGB, grey-scale, infra-red (IR)) and which may have the same or a different resolution than the stereo images, can help better define the object edges.
Type: Application
Filed: August 19, 2016
Publication date: August 23, 2018
Inventor: Florin Cutu
-
Publication number: 20180225866
Abstract: Presenting a merged, fused three-dimensional point cloud includes acquiring multiple sets of images of a scene from different vantage points, each set of images including respective stereo-matched images and a color image. For each respective set of images, a disparity map based on the stereo images is obtained, data from the color image is fused onto the disparity map so as to generate a fused disparity map, and a three-dimensional fused point cloud is created from the fused disparity map. The respective three-dimensional fused point clouds are merged together so as to obtain a merged, fused three-dimensional point cloud. The techniques can be advantageous even under the constraints of sparseness and low depth resolution, and are suitable, in some cases, for real-time or near-real-time applications in which computing time needs to be reduced.
Type: Application
Filed: August 4, 2016
Publication date: August 9, 2018
Inventors: Chi Zhang, Xin Liu, Florin Cutu
-
Publication number: 20180213201
Abstract: Providing a disparity map includes acquiring first and second stereo images, binarizing the first stereo image to obtain a binarized image, and applying a block-matching technique to the first and second stereo images to obtain an initial disparity map in which individual image elements are assigned a respective initial disparity value. For each image element, an updated disparity value is obtained that represents the product of the initial disparity value assigned to that image element and the value associated with the same image element in the binarized image. An updated disparity map can then be generated that represents the updated disparity values of the image elements.
Type: Application
Filed: July 13, 2016
Publication date: July 26, 2018
Inventors: Chi Zhang, Alireza Yasan, Xin Liu, Florin Cutu, Dmitry Ryuma
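The element-wise product described in the abstract above is simple enough to sketch directly. The function names and the intensity threshold below are illustrative assumptions; the abstract specifies only the binarization and the per-element multiplication, not how the binarized image is produced.

```python
def binarize(img, threshold):
    """Binarize the first stereo image: 1 where intensity exceeds the
    (assumed) threshold, 0 elsewhere."""
    return [[1 if v > threshold else 0 for v in row] for row in img]

def update_disparity(initial, binarized):
    """Updated disparity map: each element is the product of its initial
    disparity value and the corresponding binarized-image value, so
    disparities over zero-valued pixels are suppressed."""
    return [[d * m for d, m in zip(row_d, row_m)]
            for row_d, row_m in zip(initial, binarized)]
```

The net effect is that the binarized image acts as a mask: initial disparities that fall on zero-valued pixels are zeroed out, and the rest pass through unchanged.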