Patents by Inventor Aziz Umit Batur

Aziz Umit Batur has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10705534
    Abstract: Improved processing of sensor data (e.g., LiDAR data) can be used to distinguish between free space and objects/hazards. Autonomous vehicles can use such information for performing autonomous driving and/or parking operations. LiDAR data can include a plurality of range measurements (e.g., forming a 3D point cloud). Each of the range measurements can correspond to a respective LiDAR channel and azimuth angle. The processing of LiDAR data can include identifying one or more of the plurality of range measurements as ground points or non-ground points based on one or more point criteria. The one or more point criteria can include a ground projection criterion and an angular variation criterion.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: July 7, 2020
    Assignee: FARADAY&FUTURE INC.
    Inventors: Dae Jin Kim, Aziz Umit Batur
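    A minimal Python/NumPy sketch of the general idea in the abstract above: label LiDAR returns as ground or non-ground with a ground-projection check and an angular-variation check. The sensor height, thresholds, helper names, and synthetic data are illustrative assumptions, not the claimed criteria.

```python
import numpy as np

def classify_ground_points(ranges, elevation_deg, sensor_height=1.8,
                           proj_tol=0.5, slope_tol_deg=10.0):
    """Label LiDAR returns as ground (True) or non-ground (False).

    ranges:        (channels, azimuths) array of range measurements in meters.
    elevation_deg: (channels,) beam elevation angles (negative = pointing down).
    proj_tol (m) and slope_tol_deg (degrees) are hypothetical thresholds.
    """
    elev = np.radians(elevation_deg)[:, None]                # (C, 1)

    # Ground-projection criterion: a flat-ground return on a downward beam
    # should measure roughly sensor_height / sin(-elevation).
    down = np.clip(-elev, 1e-3, None)                        # avoid divide-by-zero
    expected = np.where(elev < 0, sensor_height / np.sin(down), np.inf)
    ground_proj_ok = np.abs(ranges - expected) < proj_tol

    # Angular-variation criterion: the slope between consecutive channels at
    # the same azimuth should stay shallow for ground points.
    x = ranges * np.cos(elev)                                # horizontal distance
    z = sensor_height + ranges * np.sin(elev)                # height above ground
    dz = np.diff(z, axis=0)
    dx = np.diff(x, axis=0)
    slope_deg = np.degrees(np.arctan2(np.abs(dz), np.abs(dx) + 1e-6))
    slope_ok = np.vstack([slope_deg < slope_tol_deg,
                          np.ones((1, ranges.shape[1]), dtype=bool)])

    return ground_proj_ok & slope_ok

# Example: 16 channels x 360 azimuth bins of synthetic flat-ground returns.
elev = np.linspace(-15.0, 1.0, 16)
flat_range = np.where(elev < 0,
                      1.8 / np.sin(np.radians(-np.minimum(elev, -0.1))), 100.0)
ranges = np.tile(flat_range[:, None], (1, 360))
mask = classify_ground_points(ranges, elev)
print(mask.sum(), "of", mask.size, "points labeled ground")
```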
  • Patent number: 10643091
    Abstract: In some embodiments, a computer-readable medium stores executable code, which, when executed by a processor, causes the processor to: capture an image of a finder pattern using a camera; locate a predetermined point within the finder pattern; use the predetermined point to identify multiple boundary points on a perimeter associated with the finder pattern; identify fitted boundary lines based on the multiple boundary points; and locate feature points using the fitted boundary lines.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: May 5, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Do-Kyoung Kwon, Aziz Umit Batur, Vikram Appia
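    A small Python sketch of the last two steps named in the abstract above: fitting boundary lines to points sampled along a finder pattern's perimeter and intersecting adjacent lines to recover corner feature points. The pre-grouped boundary points and function names are assumptions; the image-capture and point-location steps are omitted.

```python
import numpy as np

def fit_line(points):
    """Total-least-squares line fit: returns (centroid, unit direction)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Principal direction of the point set via SVD.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def intersect(line_a, line_b):
    """Intersection of two parametric lines p + t*d."""
    (pa, da), (pb, db) = line_a, line_b
    # Solve pa + t*da = pb + s*db for t and s.
    A = np.column_stack([da, -db])
    t, _ = np.linalg.solve(A, pb - pa)
    return pa + t * da

# Hypothetical boundary points sampled along the four sides of a square
# finder pattern (with a little noise), grouped per side.
rng = np.random.default_rng(0)
xs = np.linspace(1, 9, 20)
sides = [
    np.column_stack([xs, 1 + 0.05 * rng.standard_normal(20)]),   # bottom
    np.column_stack([9 + 0.05 * rng.standard_normal(20), xs]),   # right
    np.column_stack([xs, 9 + 0.05 * rng.standard_normal(20)]),   # top
    np.column_stack([1 + 0.05 * rng.standard_normal(20), xs]),   # left
]

lines = [fit_line(s) for s in sides]
# Corner feature points = intersections of adjacent fitted boundary lines.
corners = [intersect(lines[i], lines[(i + 1) % 4]) for i in range(4)]
print(np.round(corners, 2))   # approximately the square's four corners
```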
  • Publication number: 20190370589
    Abstract: In some embodiments, a computer-readable medium stores executable code, which, when executed by a processor, causes the processor to: capture an image of a finder pattern using a camera; locate a predetermined point within the finder pattern; use the predetermined point to identify multiple boundary points on a perimeter associated with the finder pattern; identify fitted boundary lines based on the multiple boundary points; and locate feature points using the fitted boundary lines.
    Type: Application
    Filed: August 16, 2019
    Publication date: December 5, 2019
    Inventors: Do-Kyoung KWON, Aziz Umit BATUR, Vikram APPIA
  • Publication number: 20190339706
    Abstract: The present invention is related to systems and methods for detecting an occluded object based on the shadow of the occluded object. In some examples, a vehicle of the present invention can capture one or more images while operating in an autonomous driving mode and detect shadow items within the captured images. In response to detecting a shadow item moving towards the direction of vehicle travel, the vehicle can reduce its speed to avoid a collision, should an occluded object enter the road. The shadow can be detected using image segmentation or a classifier trained using convolutional neural networks or another suitable algorithm, for example.
    Type: Application
    Filed: June 12, 2018
    Publication date: November 7, 2019
    Inventor: Aziz Umit Batur
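    A sketch of the decision logic described above, with a plain intensity threshold standing in for the image segmentation or CNN classifier mentioned in the abstract: dark blobs are found in a road region of interest, and the vehicle slows if a blob's centroid moves toward the travel path between frames. All thresholds are hypothetical.

```python
import numpy as np

def shadow_centroid(gray_roi, dark_thresh=60):
    """Centroid (col, row) of dark pixels in a road ROI, or None."""
    ys, xs = np.nonzero(gray_roi < dark_thresh)
    if len(xs) < 50:                      # too few pixels to be a shadow blob
        return None
    return np.array([xs.mean(), ys.mean()])

def should_slow_down(prev_frame_roi, curr_frame_roi, path_center_col,
                     approach_px=3.0):
    """True if a shadow blob moved toward the vehicle's path between frames."""
    c0, c1 = shadow_centroid(prev_frame_roi), shadow_centroid(curr_frame_roi)
    if c0 is None or c1 is None:
        return False
    moved = abs(c1[0] - path_center_col) - abs(c0[0] - path_center_col)
    return moved < -approach_px           # closing in on the travel path

# Synthetic example: a dark blob slides toward the image center.
roi0 = np.full((120, 200), 200, dtype=np.uint8); roi0[60:80, 10:30] = 20
roi1 = np.full((120, 200), 200, dtype=np.uint8); roi1[60:80, 40:60] = 20
if should_slow_down(roi0, roi1, path_center_col=100):
    print("shadow approaching travel path: reduce speed")
```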
  • Patent number: 10469828
    Abstract: Disclosed examples include three-dimensional imaging systems and methods to reconstruct a three-dimensional scene from first and second image data sets obtained from a single camera at first and second times. The methods include computing feature point correspondences between the image data sets, computing an essential matrix that characterizes relative positions of the camera at the first and second times, computing pairs of first and second projective transforms that individually correspond to regions of interest that exclude an epipole of the captured scene, computing first and second rectified image data sets in which the feature point correspondences are aligned on a spatial axis by respectively applying the corresponding first and second projective transforms to corresponding portions of the first and second image data sets, and computing disparity values of a stereo disparity map according to the rectified image data sets to reconstruct the three-dimensional scene.
    Type: Grant
    Filed: July 6, 2016
    Date of Patent: November 5, 2019
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Martin Fritz Mueller, Aziz Umit Batur
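    A hedged OpenCV sketch of the overall pipeline the abstract outlines: match feature points between two frames from one camera, estimate the essential matrix, rectify with projective transforms, and compute a disparity map. The intrinsics, file names, and parameters are placeholders, and plain uncalibrated rectification stands in for the patent's region-of-interest handling around the epipole.

```python
import cv2
import numpy as np

# Placeholder inputs: two grayscale frames from one moving camera, plus an
# assumed intrinsic matrix K.
img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)
assert img1 is not None and img2 is not None, "replace the placeholder image paths"
K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])

# 1. Feature point correspondences between the two image data sets.
pts1 = cv2.goodFeaturesToTrack(img1, maxCorners=500, qualityLevel=0.01,
                               minDistance=7)
pts2, status, _ = cv2.calcOpticalFlowPyrLK(img1, img2, pts1, None)
pts1 = pts1[status.ravel() == 1].reshape(-1, 2)
pts2 = pts2[status.ravel() == 1].reshape(-1, 2)

# 2. Essential matrix relating the camera poses at the two capture times.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
F = np.linalg.inv(K).T @ E @ np.linalg.inv(K)      # fundamental matrix

# 3. Projective transforms that rectify the two views (plain uncalibrated
#    rectification here; the patent's region-of-interest handling is omitted).
h, w = img1.shape
_, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
rect1 = cv2.warpPerspective(img1, H1, (w, h))
rect2 = cv2.warpPerspective(img2, H2, (w, h))

# 4. Disparity map from the rectified pair.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(rect1, rect2)
print("disparity range:", disparity.min(), disparity.max())
```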
  • Publication number: 20190324471
    Abstract: Improved processing of sensor data (e.g., LiDAR data) can be used to distinguish between free space and objects/hazards. Autonomous vehicles can use such information for performing autonomous driving and/or parking operations. LiDAR data can include a plurality of range measurements (e.g., forming a 3D point cloud). Each of the range measurements can correspond to a respective LiDAR channel and azimuth angle. The processing of LiDAR data can include identifying one or more of the plurality of range measurements as ground points or non-ground points based on one or more point criteria. The one or more point criteria can include a ground projection criterion and an angular variation criterion.
    Type: Application
    Filed: April 19, 2018
    Publication date: October 24, 2019
    Inventors: Dae Jin KIM, Aziz Umit BATUR
  • Publication number: 20190323844
    Abstract: A system for use in a vehicle, the system comprising one or more sensors, one or more processors operatively coupled to the one or more sensors, and a memory including instructions, which when executed by the one or more processors, cause the one or more processors to perform a method. The method comprises capturing a current point cloud with the one or more sensors, determining an estimated location and an estimated heading of the vehicle, selecting one or more point clouds based on the estimated location and heading of the vehicle, simplifying the current point cloud and the one or more point clouds, correlating the current point cloud to the one or more point clouds, and determining an updated estimate of the location of the vehicle based on correlation between the current point cloud and the one or more point clouds.
    Type: Application
    Filed: April 18, 2018
    Publication date: October 24, 2019
    Inventors: Daedeepya YENDLURI, Sowmya GADE, Anupama KURUVILLA, Aziz Umit BATUR
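    A NumPy sketch of the "simplify and correlate" steps in the abstract above: both the current scan and a stored map scan are reduced to 2-D occupancy grids, and a small search around the estimated location scores candidate offsets by grid overlap. Heading correlation is omitted, and the grid size, search window, and names are assumptions.

```python
import numpy as np

def to_grid(points_xy, cell=0.5, extent=50.0):
    """Simplify a point cloud into a 2-D occupancy grid (extent in meters)."""
    n = int(2 * extent / cell)
    idx = np.floor((points_xy + extent) / cell).astype(int)
    keep = (idx >= 0).all(axis=1) & (idx < n).all(axis=1)
    grid = np.zeros((n, n), dtype=bool)
    grid[idx[keep, 0], idx[keep, 1]] = True
    return grid

def refine_location(current_xy, map_xy, est_xy, search=2.0, step=0.5):
    """Correlate the current scan against a stored map scan around est_xy."""
    map_grid = to_grid(map_xy)
    best_score, best_offset = -1, np.zeros(2)
    for dx in np.arange(-search, search + step, step):
        for dy in np.arange(-search, search + step, step):
            shifted = current_xy + est_xy + np.array([dx, dy])
            score = np.logical_and(to_grid(shifted), map_grid).sum()
            if score > best_score:
                best_score, best_offset = score, np.array([dx, dy])
    return est_xy + best_offset

# Synthetic example: the "map" is a wall of points; the current scan is the
# same wall seen from a vehicle whose prior estimate is off by (1.0, -0.5) m.
wall = np.column_stack([np.linspace(-20, 20, 400), np.full(400, 10.0)])
true_pose = np.array([3.0, 4.0])
scan = wall - true_pose                       # wall in vehicle coordinates
estimate = true_pose + np.array([1.0, -0.5])  # noisy prior estimate
print(refine_location(scan, wall, estimate))  # approximately [3.0, 4.0]
```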
  • Publication number: 20190324148
    Abstract: Improved processing of sensor data (e.g., LiDAR data) can be used to distinguish between free space and objects/hazards. Autonomous vehicles can use such information for performing autonomous driving and/or parking operations. LiDAR data can include a plurality of range measurements (e.g., forming a 3D point cloud). Each of the range measurements can correspond to a respective LiDAR channel and azimuth angle. The processing of LiDAR data can include identifying one or more paths as candidate ground paths based on one or more path criteria. The processing of LiDAR data can also include identifying one or more of the plurality of range measurements as ground points or non-ground points based on the one or more paths identified as candidate ground paths and based on one or more point criteria.
    Type: Application
    Filed: April 19, 2018
    Publication date: October 24, 2019
    Inventors: Dae Jin KIM, Aziz Umit BATUR
  • Patent number: 10438081
    Abstract: In some embodiments, a computer-readable medium stores executable code, which, when executed by a processor, causes the processor to: capture an image of a finder pattern using a camera; locate a predetermined point within the finder pattern; use the predetermined point to identify multiple boundary points on a perimeter associated with the finder pattern; identify fitted boundary lines based on the multiple boundary points; and locate feature points using the fitted boundary lines.
    Type: Grant
    Filed: October 14, 2016
    Date of Patent: October 8, 2019
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Do-Kyoung Kwon, Aziz Umit Batur, Vikram Appia
  • Publication number: 20190180719
    Abstract: A method of generating a high dynamic range (HDR) image is provided that includes capturing a long exposure image and a short exposure image of a scene, computing a merging weight for each pixel location of the long exposure image based on a pixel value of the pixel location and a saturation threshold, and computing a pixel value for each pixel location of the HDR image as a weighted sum of corresponding pixel values in the long exposure image and the short exposure image, wherein a weight applied to a pixel value of the pixel location of the short exposure image and a weight applied to a pixel value of the pixel location in the long exposure image are determined based on the merging weight computed for the pixel location and responsive to motion in a scene of the long exposure image and the short exposure image.
    Type: Application
    Filed: February 12, 2019
    Publication date: June 13, 2019
    Inventors: Rajesh Narasimha, Aziz Umit Batur
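    A NumPy sketch of the merging described above: a saturation-based weight blends the long and short exposures, and the weight is reduced where the exposures disagree, which is taken as evidence of motion. The exposure ratio, thresholds, and specific weighting are assumptions, not the patented formula.

```python
import numpy as np

def merge_hdr(long_img, short_img, exposure_ratio=8.0,
              sat_thresh=3500.0, ramp=500.0, motion_thresh=300.0):
    """Blend a long and a short exposure (both on the same raw scale)."""
    long_f = long_img.astype(np.float64)
    short_f = short_img.astype(np.float64) * exposure_ratio  # match brightness

    # Merging weight from the long-exposure pixel value and a saturation
    # threshold: 1 well below saturation, ramping to 0 near saturation.
    w = np.clip((sat_thresh - long_f) / ramp, 0.0, 1.0)

    # Reduce the long-exposure weight where the two exposures disagree,
    # which indicates motion between the captures.
    motion = np.abs(long_f - short_f) > motion_thresh
    w = np.where(motion, np.minimum(w, 0.2), w)

    return w * long_f + (1.0 - w) * short_f

# Example: a 12-bit long exposure with a saturated pixel and a moving object.
long_img = np.full((4, 4), 1000.0); long_img[0, 0] = 4095.0
short_img = long_img / 8.0;         short_img[3, 3] = 400.0   # object moved
print(np.round(merge_hdr(long_img, short_img)))
```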
  • Publication number: 20190111842
    Abstract: A vehicle control system for moving a vehicle to a target location is disclosed. According to examples of the disclosure, a camera captures one or more images of a known object corresponding to the target location. An on-board computer having stored thereon information about the known object can process the one or more images to determine vehicle location with respect to the known object. The system can use the vehicle's determined location and a feedback controller to move the vehicle to the target location.
    Type: Application
    Filed: March 30, 2017
    Publication date: April 18, 2019
    Inventors: Aziz Umit Batur, Oliver Max Jeromin, Paul Alan Theodosis, Michael Venegas
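    A sketch of the closing step in the abstract above: given a camera-derived vehicle pose relative to the target location, a simple proportional feedback controller issues speed and steering commands. The gains, unicycle model, and names are illustrative, not the disclosed controller.

```python
import math

def control_step(x, y, heading, target_x, target_y,
                 k_speed=0.5, k_steer=1.5, max_speed=1.0):
    """One feedback-control update toward a target location.

    (x, y, heading) is the camera-derived vehicle pose; returns
    (speed, steering) commands. All gains are hypothetical.
    """
    dx, dy = target_x - x, target_y - y
    distance = math.hypot(dx, dy)
    bearing_error = math.atan2(dy, dx) - heading
    bearing_error = math.atan2(math.sin(bearing_error),
                               math.cos(bearing_error))  # wrap to [-pi, pi]
    speed = min(k_speed * distance, max_speed)
    steering = k_steer * bearing_error
    return speed, steering

# Simulate driving toward a target with a simple unicycle model.
x, y, heading = 0.0, 0.0, 0.0
target = (5.0, 2.0)
for _ in range(200):
    speed, steer = control_step(x, y, heading, *target)
    heading += steer * 0.1                    # 0.1 s time step
    x += speed * math.cos(heading) * 0.1
    y += speed * math.sin(heading) * 0.1
print(round(x, 2), round(y, 2))               # close to the target (5.0, 2.0)
```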
  • Patent number: 10255888
    Abstract: A method of generating a high dynamic range (HDR) image is provided that includes capturing a long exposure image and a short exposure image of a scene, computing a merging weight for each pixel location of the long exposure image based on a pixel value of the pixel location and a saturation threshold, and computing a pixel value for each pixel location of the HDR image as a weighted sum of corresponding pixel values in the long exposure image and the short exposure image, wherein a weight applied to a pixel value of the pixel location of the short exposure image and a weight applied to a pixel value of the pixel location in the long exposure image are determined based on the merging weight computed for the pixel location and responsive to motion in a scene of the long exposure image and the short exposure image.
    Type: Grant
    Filed: December 5, 2013
    Date of Patent: April 9, 2019
    Assignee: Texas Instruments Incorporated
    Inventors: Rajesh Narasimha, Aziz Umit Batur
  • Publication number: 20180299900
    Abstract: A method for autonomously controlling a vehicle is disclosed. In some examples, a vehicle can maneuver out of a parking space in an autonomous and unmanned operation. While parking, the vehicle can capture first one or more images of its surroundings and store the images in a memory included in the vehicle. Upon starting up, the vehicle can capture second one or more images of its surroundings and compare them to the first one or more images to determine if there is an object, person, or animal proximate to the vehicle, for example. In some examples, in accordance with a determination that there is no object, person, or animal present that was not present during parking, the vehicle can autonomously move from the parking space with or without a user present in the vehicle.
    Type: Application
    Filed: May 31, 2017
    Publication date: October 18, 2018
    Inventors: Hong S. Bae, Aziz Umit Batur, Evan Roger Fischer, Oliver Max Jeromin
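    A sketch of the comparison step described above: images stored at parking time are compared with images captured at startup, and autonomous departure is allowed only if no camera view shows a significant change. The absolute-difference test and thresholds are stand-ins for whatever detection the system actually uses.

```python
import numpy as np

def scene_changed(parked_img, startup_img, diff_thresh=25, area_frac=0.01):
    """True if enough pixels changed to suggest a new object/person/animal."""
    diff = np.abs(parked_img.astype(np.int16) - startup_img.astype(np.int16))
    changed = (diff > diff_thresh).mean()
    return changed > area_frac

def safe_to_depart(parked_images, startup_images):
    """Autonomous departure is allowed only if no camera view changed."""
    return not any(scene_changed(p, s)
                   for p, s in zip(parked_images, startup_images))

# Example: one of four camera views now contains a new bright object.
parked = [np.zeros((90, 160), dtype=np.uint8) for _ in range(4)]
startup = [img.copy() for img in parked]
startup[2][30:60, 40:80] = 255                # something appeared in view 2
print(safe_to_depart(parked, startup))        # False: do not move autonomously
```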
  • Publication number: 20180292832
    Abstract: A system that performs a method is disclosed. The system receives information about a map, which includes information about one or more zones in the map. While navigating a vehicle along a driving path within the map, the system receives information about the location of the vehicle in the map. The system estimates an error bounds of the location of the vehicle, and determines which of the one or more zones in the map the error bounds is located within. In response to the determination: in accordance with a determination that the error bounds is located within a first zone in the map, the system causes the vehicle to perform a driving operation. In accordance with a determination that the error bounds is located within a second zone of the one or more zones in the map, the system causes the vehicle to perform a different driving operation.
    Type: Application
    Filed: May 31, 2017
    Publication date: October 11, 2018
    Inventors: Hong S. Bae, Aziz Umit Batur, Evan Roger Fischer, Oliver Max Jeromin
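    A sketch of the zone decision in the abstract above: the localization error bounds (here a circle of radius r around the estimated position) is tested against rectangular map zones, and the zone that fully contains it selects the driving operation. The zone shapes, names, and operations are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    operation: str          # driving operation to perform in this zone

def contains_error_bounds(zone, x, y, radius):
    """True if the whole error circle around (x, y) lies inside the zone."""
    return (zone.x_min + radius <= x <= zone.x_max - radius and
            zone.y_min + radius <= y <= zone.y_max - radius)

def select_operation(zones, x, y, radius, fallback="stop and relocalize"):
    for zone in zones:
        if contains_error_bounds(zone, x, y, radius):
            return zone.operation
    return fallback

# Hypothetical map: a wide travel lane and a tight parking aisle.
zones = [
    Zone("travel lane",   0, 100,  0, 10, operation="drive at normal speed"),
    Zone("parking aisle", 0, 100, 10, 14, operation="drive slowly"),
]
print(select_operation(zones, x=50, y=5,  radius=1.0))   # normal speed
print(select_operation(zones, x=50, y=12, radius=3.0))   # error too large
```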
  • Patent number: 10025317
    Abstract: Camera-based autonomous parking is disclosed. An autonomous parking procedure can include detecting parking lines in images captured by a camera on a vehicle. The vehicle can be localized with respect to the parking lines based on location data for the vehicle from a GPS receiver and a location determination for the vehicle based on detected ends of the parking lines. The vehicle can further determine an occupancy state of one or more parking spaces formed by the two or more parking lines using a range sensor on the vehicle and select an empty space. A region of interest including the selected space can be identified and one or more parking lines of the selected space can be detected in an image of the region of interest. The vehicle can autonomously move to reduce errors between the location of the vehicle and the final parking position within the selected space.
    Type: Grant
    Filed: June 12, 2017
    Date of Patent: July 17, 2018
    Assignee: FARADAY&FUTURE INC.
    Inventors: Aziz Umit Batur, Oliver Max Jeromin, Sowmya Gade, Paul Alan Theodosis, Michael Venegas
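    A sketch of the first step named in the abstract above, detecting candidate parking lines with standard edge detection and a probabilistic Hough transform; the localization, occupancy, and control steps are omitted. The file name, thresholds, and orientation filter are placeholders.

```python
import cv2
import numpy as np

def detect_parking_lines(bgr_image, min_length=80):
    """Return candidate parking-line segments as (x1, y1, x2, y2) rows."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                               minLineLength=min_length, maxLineGap=10)
    if segments is None:
        return np.empty((0, 4), dtype=int)
    segments = segments.reshape(-1, 4)
    # Keep near-vertical segments (parking lines as seen by a rear camera).
    dx = np.abs(segments[:, 2] - segments[:, 0]).astype(float)
    dy = np.abs(segments[:, 3] - segments[:, 1]).astype(float)
    return segments[dy > 2 * dx]

# Placeholder image path; in the described system the frames would come from
# a camera on the vehicle.
frame = cv2.imread("rear_camera_frame.png")
if frame is not None:
    lines = detect_parking_lines(frame)
    print(len(lines), "candidate parking lines")
```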
  • Publication number: 20180186283
    Abstract: A system that performs a method is disclosed. The system detects one or more characteristics about a vehicle via a first set of one or more sensors and determines one or more characteristics about the vehicle's surroundings via a second set of one or more sensors. The system also detects a vehicle failure via the first set of one or more sensors. In response to detecting the vehicle failure via the first one or more sensors, the system selects one or more road hazard indicators using the one or more characteristics about the vehicle's surroundings. After selecting the one or more road hazard indicators using the one or more characteristics about the vehicle's surroundings, the system deploys the one or more road hazard indicators.
    Type: Application
    Filed: September 28, 2017
    Publication date: July 5, 2018
    Inventors: Evan Roger Fischer, Hong S. Bae, Aziz Umit Batur, Kwang Keun J. Shin
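    A sketch of the selection step described above as a small rule table: characteristics of the vehicle's surroundings choose which road hazard indicators to deploy. The characteristic keys and indicator names are invented for illustration.

```python
def select_road_hazard_indicators(surroundings):
    """Choose indicators to deploy based on surroundings characteristics.

    `surroundings` is a dict such as {"road_type": "highway",
    "visibility": "low", "traffic_density": "high"} (keys are hypothetical).
    """
    indicators = ["hazard lights"]                 # always deployed
    if surroundings.get("visibility") == "low":
        indicators.append("flares")
    if surroundings.get("road_type") == "highway":
        indicators.append("warning triangle at 100 m")
    else:
        indicators.append("warning triangle at 30 m")
    if surroundings.get("traffic_density") == "high":
        indicators.append("notify roadside assistance")
    return indicators

# Example: failure detected on a foggy highway.
print(select_road_hazard_indicators(
    {"road_type": "highway", "visibility": "low", "traffic_density": "high"}))
```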
  • Publication number: 20180095474
    Abstract: Camera-based autonomous parking is disclosed. An autonomous parking procedure can include detecting parking lines in images captured by a camera on a vehicle. The vehicle can be localized with respect to the parking lines based on location data for the vehicle from a GPS receiver and a location determination for the vehicle based on detected ends of the parking lines. The vehicle can further determine an occupancy state of one or more parking spaces formed by the two or more parking lines using a range sensor on the vehicle and select an empty space. A region of interest including the selected space can be identified and one or more parking lines of the selected space can be detected in an image of the region of interest. The vehicle can autonomously move to reduce errors between the location of the vehicle and the final parking position within the selected space.
    Type: Application
    Filed: June 12, 2017
    Publication date: April 5, 2018
    Inventors: Aziz Umit BATUR, Oliver Max JEROMIN, Sowmya GADE, Paul Alan THEODOSIS, Michael VENEGAS
  • Patent number: 9892493
    Abstract: A method, apparatus, and system for multi-camera image processing. The method includes performing geometric alignment to produce a geometric output by estimating fish-eye distortion correction parameters, performing initial perspective correction on related frames, running corner detection in the overlapping areas, locating the strongest corners, calculating BRIEF descriptors for features and matching feature points from two cameras using BRIEF scores, performing checks and rejecting wrong feature matches, finding perspective matrices to minimize distance between matched features, and creating a geometric lookup table.
    Type: Grant
    Filed: April 20, 2015
    Date of Patent: February 13, 2018
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Ibrahim Ethem Pekkucuksen, Aziz Umit Batur
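    An OpenCV sketch of the matching stage described above: detect corners in the overlap between two cameras, compute binary descriptors (ORB, which uses BRIEF descriptors internally, stands in for plain BRIEF here), match by Hamming distance, reject wrong matches with RANSAC, and estimate a perspective matrix. Fish-eye correction and the geometric lookup table are omitted; file names are placeholders.

```python
import cv2
import numpy as np

# Placeholder overlap regions cropped from two adjacent surround-view cameras.
overlap_a = cv2.imread("cam_front_overlap.png", cv2.IMREAD_GRAYSCALE)
overlap_b = cv2.imread("cam_left_overlap.png", cv2.IMREAD_GRAYSCALE)
assert overlap_a is not None and overlap_b is not None, "replace placeholder paths"

# Corner detection plus BRIEF-style binary descriptors (via ORB).
orb = cv2.ORB_create(nfeatures=1000)
kp_a, desc_a = orb.detectAndCompute(overlap_a, None)
kp_b, desc_b = orb.detectAndCompute(overlap_b, None)

# Match descriptors by Hamming distance; cross-check rejects weak matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)

pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

# Perspective matrix minimizing reprojection error over inlier matches;
# RANSAC discards the remaining wrong correspondences.
H, inlier_mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
print("perspective matrix:\n", H)
print("inliers:", int(inlier_mask.sum()), "of", len(matches))
```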
  • Patent number: 9881357
    Abstract: A method of transforming an N-bit raw wide dynamic range (WDR) Bayer image to a K-bit raw red-green-blue (RGB) image wherein N>K is provided that includes converting the N-bit raw WDR Bayer image to an N-bit raw RGB image, computing a luminance image from the N-bit raw RGB image, computing a pixel gain value for each luminance pixel in the luminance image to generate a gain map, applying a hierarchical noise filter to the gain map to generate a filtered gain map, applying the filtered gain map to the N-bit raw RGB image to generate a gain mapped N-bit RGB image, and downshifting the gain mapped N-bit RGB image by (N-K) to generate the K-bit RGB image.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: January 30, 2018
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Buyue Zhang, Aziz Umit Batur
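    A NumPy sketch of the flow in the abstract above for N=16, K=8: compute a luminance image, derive a per-pixel gain, smooth the gain map (a 5x5 box blur stands in for the hierarchical noise filter), apply it, and downshift by N-K bits. The power-law tone curve and filter choice are assumptions, and Bayer demosaicing is omitted.

```python
import numpy as np

def wdr_to_8bit(rgb16, gamma=0.5, eps=1.0):
    """Compress a 16-bit linear RGB image to 8 bits with a filtered gain map."""
    rgb = rgb16.astype(np.float64)

    # Luminance image from the RGB image.
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

    # Per-pixel gain: a simple power-law tone curve (assumed, not the patent's).
    max_val = 2 ** 16 - 1
    gain = ((luma + eps) / max_val) ** (gamma - 1.0)

    # Smooth the gain map; a 5x5 box blur stands in for the hierarchical
    # noise filter described in the abstract.
    padded = np.pad(gain, 2, mode="edge")
    smoothed = np.zeros_like(gain)
    for dy in range(5):
        for dx in range(5):
            smoothed += padded[dy:dy + gain.shape[0], dx:dx + gain.shape[1]]
    smoothed /= 25.0

    # Apply the gain map, then downshift by N - K = 8 bits.
    mapped = np.clip(rgb * smoothed[..., None], 0, max_val)
    return (mapped.astype(np.uint16) >> 8).astype(np.uint8)

# Example: a 16-bit gradient image.
ramp = np.linspace(0, 2 ** 16 - 1, 256).astype(np.uint16)
rgb16 = np.stack([np.tile(ramp, (64, 1))] * 3, axis=-1)
out = wdr_to_8bit(rgb16)
print(out.min(), out.max())
```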
  • Patent number: 9774833
    Abstract: A method of automatically focusing a projector in a projection system is provided that includes projecting, by the projector, a binary pattern on a projection surface, capturing an image of the projected binary pattern by a camera synchronized with the projector, computing a depth map from the captured image, and adjusting focus of the projector based on the computed depth map.
    Type: Grant
    Filed: June 28, 2014
    Date of Patent: September 26, 2017
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Vikram VijayanBabu Appia, Pedro Gelabert, Aziz Umit Batur
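    A small sketch of the final step described above: once a depth map has been computed from the captured image of the projected binary pattern, the projector focus is driven to the dominant depth of the projection surface. The pattern decoding and triangulation are omitted, and the 1/distance focus model and motor step range are hypothetical.

```python
import numpy as np

def focus_setting_from_depth(depth_map_m, near_m=0.5, far_m=5.0, steps=1000):
    """Map the dominant surface depth to a focus-motor step count.

    depth_map_m: per-pixel depths (meters) computed from the projected
    binary pattern. The linear focus model and step range are assumptions.
    """
    valid = depth_map_m[np.isfinite(depth_map_m) & (depth_map_m > 0)]
    surface_depth = np.median(valid)           # robust to outlier pixels
    surface_depth = np.clip(surface_depth, near_m, far_m)
    # Focus roughly follows 1/distance; normalize to the motor's step range.
    position = (1 / near_m - 1 / surface_depth) / (1 / near_m - 1 / far_m)
    return int(round(position * steps))

# Example: a mostly flat projection surface 2 m away with a few bad pixels.
depth = np.full((480, 640), 2.0)
depth[::50, ::50] = np.nan                     # failed pattern decodes
print(focus_setting_from_depth(depth), "of 1000 focus steps")
```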