Patents by Inventor Stephen Horton
Stephen Horton has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240029442
Abstract: Vehicle perception techniques include obtaining a training dataset represented by N training histograms, in an image feature space, corresponding to N training images; K-means clustering the N training histograms to determine K clusters with K respective cluster centers, wherein K and N are integers greater than or equal to one and K is less than or equal to N; comparing the N training histograms to their respective K cluster centers to determine maximum in-class distances for each of the K clusters; applying a deep neural network (DNN) to input images of the set of inputs to output detected/classified objects with respective confidence scores; and obtaining adjusted confidence scores by adjusting the confidence scores output by the DNN based on distance ratios of (i) minimal distances of input histograms representing the input images to the K cluster centers and (ii) the respective maximum in-class distances.
Type: Application
Filed: July 25, 2022
Publication date: January 25, 2024
Inventors: Dalong Li, Rohit S Paranjpe, Stephen Horton
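The confidence-adjustment step above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes the K cluster centers and per-cluster maximum in-class distances have already been computed from training histograms, uses Euclidean distance between histograms, and assumes a simple down-weighting rule (divide by the ratio when it exceeds one), since the abstract only says the scores are adjusted based on the distance ratios.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature histograms."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def adjust_confidence(conf, hist, centers, max_in_class):
    """Scale a detector confidence by how far the input histogram
    sits from its nearest training cluster.

    centers      -- list of K cluster-center histograms (from K-means)
    max_in_class -- per-cluster maximum in-class distance from training
    """
    # Nearest cluster and the distance to it.
    dists = [euclidean(hist, c) for c in centers]
    k = dists.index(min(dists))
    # Ratio > 1 means the input lies outside the training envelope.
    ratio = dists[k] / max_in_class[k]
    # Assumed rule: leave in-distribution scores alone, shrink
    # out-of-distribution scores proportionally to the ratio.
    return conf if ratio <= 1.0 else conf / ratio

centers = [[1.0, 0.0], [0.0, 1.0]]
max_in_class = [0.5, 0.5]
# An in-distribution input keeps its score; a far-away one is discounted.
print(adjust_confidence(0.9, [0.9, 0.1], centers, max_in_class))
print(adjust_confidence(0.9, [3.0, 3.0], centers, max_in_class))
```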
-
Patent number: 11594040
Abstract: Techniques for training multiple resolution deep neural networks (DNNs) for vehicle autonomous driving comprise obtaining a training dataset for training a plurality of DNNs for an autonomous driving feature of the vehicle; sub-sampling the training dataset to obtain a plurality of training datasets comprising the training dataset and one or more sub-sampled datasets each having a different resolution than a remainder of the plurality of training datasets; training the plurality of DNNs using the plurality of training datasets, respectively; receiving input data for the autonomous driving feature captured by a sensor device; determining a plurality of outputs for the autonomous driving feature using the plurality of trained DNNs and the input data; and determining a best output for the autonomous driving feature using the plurality of outputs.
Type: Grant
Filed: August 5, 2020
Date of Patent: February 28, 2023
Assignee: FCA US LLC
Inventors: Dalong Li, Stephen Horton, Neil R Garbacik
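The multi-resolution pipeline can be sketched with stub networks. Everything here is illustrative: `subsample` stands in for the resolution reduction, the lambdas stand in for DNNs trained at each resolution, and the selection rule (highest confidence wins) is an assumption, since the patent only says a "best output" is determined from the plurality of outputs.

```python
def subsample(image, factor):
    """Keep every `factor`-th row and column of a 2-D image."""
    return [row[::factor] for row in image[::factor]]

def best_output(image, dnns, factors):
    """Run one DNN per resolution and keep the best result.

    dnns    -- callables, each trained at the matching resolution
    factors -- sub-sampling factor per DNN (1 = full resolution)
    """
    outputs = [dnn(subsample(image, f)) for dnn, f in zip(dnns, factors)]
    # Assumed selection rule: each output is (label, confidence) and
    # the highest-confidence output is taken as "best".
    return max(outputs, key=lambda o: o[1])

# Stub DNNs standing in for trained networks.
full_res = lambda img: ("car", 0.7)
half_res = lambda img: ("car", 0.9)
image = [[0] * 8 for _ in range(8)]
print(best_output(image, [full_res, half_res], [1, 2]))
```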
-
Patent number: 11584382
Abstract: A vehicle includes a controller area network (CAN) and a plurality of controllers in communication with each other via the CAN, wherein each controller of the plurality of controllers is configured to time-stamp messages transmitted via the CAN using a vehicle-wide synchronized clock, determine a worst-case transmission delay via the CAN based on the time-stamps for messages received from other controllers of the plurality of controllers, and, based on the worst-case transmission delay, set a dynamic recovery timer for a malfunctioning controller of the plurality of controllers to recover after its malfunction, wherein the dynamic recovery timer prevents a particular controller that was malfunctioning but has since recovered from being incorrectly designated as a malfunctioning controller in need of service.
Type: Grant
Filed: February 12, 2021
Date of Patent: February 21, 2023
Assignee: FCA US LLC
Inventors: Mohammed S Huq, Stephen Horton
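The delay-to-timer derivation can be sketched briefly. This is a toy version under stated assumptions: timestamps come from the vehicle-wide synchronized clock, and the safety margin multiplier is invented here, since the patent only requires the recovery timer to be derived from the worst-case delay.

```python
def worst_case_delay(received):
    """Largest observed CAN transmission delay.

    received -- (tx_timestamp, rx_timestamp) pairs taken from messages
    stamped with the vehicle-wide synchronized clock.
    """
    return max(rx - tx for tx, rx in received)

def recovery_timer(received, margin=2.0):
    """Dynamic recovery window for a controller that just recovered.

    The margin multiplier is an assumption; the point is only that the
    window scales with the observed worst-case delay, so a recovered
    controller is not re-flagged as malfunctioning by stale messages.
    """
    return margin * worst_case_delay(received)

# Timestamps in milliseconds; the last message saw a 3.5 ms delay.
msgs = [(0.0, 1.2), (10.0, 10.8), (20.0, 23.5)]
print(recovery_timer(msgs))
```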
-
Patent number: 11543531
Abstract: A semi-automatic three-dimensional light detection and ranging (LIDAR) point cloud data annotation system and method for autonomous driving of a vehicle involve filtering 3D LIDAR point cloud data and normalizing the filtered 3D LIDAR point cloud data relative to the vehicle to obtain normalized 3D LIDAR point cloud data; quantizing the normalized 3D LIDAR point cloud data by dividing it into a set of 3D voxels; projecting the set of 3D voxels to a 2D birdview; identifying a possible object by applying clustering to the 2D birdview projection; obtaining an annotated 2D birdview projection including annotations by a human annotator via the annotation system regarding whether a bounding box for the possible object corresponds to a confirmed object and a type of the confirmed object; and converting the annotated 2D birdview projection back into annotated 3D LIDAR point cloud data.
Type: Grant
Filed: December 19, 2018
Date of Patent: January 3, 2023
Assignee: FCA US LLC
Inventors: Dalong Li, Alex Smith, Stephen Horton
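The voxel-quantization and birdview-projection steps can be sketched in a few lines. This is a simplified illustration: it assumes the points are already filtered and normalized to the vehicle frame, and it collapses each voxel column to a per-cell occupancy count, one of several plausible birdview encodings.

```python
def to_birdview(points, voxel=0.5):
    """Quantize 3-D LIDAR points into voxels and project them to a
    2-D top-down ("birdview") grid keyed by (x, y) cell index.

    Dropping z in the cell key is what collapses the 3-D voxels into
    the 2-D birdview projection.
    """
    grid = {}
    for x, y, z in points:
        cell = (int(x // voxel), int(y // voxel))
        grid[cell] = grid.get(cell, 0) + 1  # occupancy count per cell
    return grid

# Three points, two of which fall into the same birdview cell.
pts = [(0.1, 0.1, 1.0), (0.2, 0.3, 2.0), (1.4, 0.1, 0.5)]
print(to_birdview(pts))
```

Clustering the occupied cells of this grid is then what proposes candidate objects for the human annotator to confirm.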
-
Publication number: 20220258750
Abstract: A vehicle includes a controller area network (CAN) and a plurality of controllers in communication with each other via the CAN, wherein each controller of the plurality of controllers is configured to time-stamp messages transmitted via the CAN using a vehicle-wide synchronized clock, determine a worst-case transmission delay via the CAN based on the time-stamps for messages received from other controllers of the plurality of controllers, and, based on the worst-case transmission delay, set a dynamic recovery timer for a malfunctioning controller of the plurality of controllers to recover after its malfunction, wherein the dynamic recovery timer prevents a particular controller that was malfunctioning but has since recovered from being incorrectly designated as a malfunctioning controller in need of service.
Type: Application
Filed: February 12, 2021
Publication date: August 18, 2022
Inventors: Mohammed S. Huq, Stephen Horton
-
Publication number: 20220043152
Abstract: Object detection and tracking techniques for a vehicle include accessing a deep neural network (DNN) trained for object detection; receiving, from a light detection and ranging (LIDAR) system of the vehicle, LIDAR point cloud data external to the vehicle; running the DNN on the LIDAR point cloud data at a first rate to detect a first set of objects and a region of interest (ROI) comprising the first set of objects; and depth clustering, by a controller of the vehicle, the LIDAR point cloud data for the detected ROI at a second rate to detect and track a second set of objects comprising the first set of objects and any objects that subsequently appear in a field of view of the LIDAR system, wherein the second rate is greater than the first rate and the depth clustering continues until a subsequent iteration of the DNN is run.
Type: Application
Filed: August 5, 2020
Publication date: February 10, 2022
Inventors: Dalong Li, Ayyoub Rezaeian, Stephen Horton
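The two-rate scheduling described above can be sketched as a frame loop. The `dnn` and `cluster` callables are stand-ins for the real detector and the depth-clustering step; the concrete period (one DNN run every few frames, clustering on the cached ROI in between) is an assumed way to realize "second rate greater than first rate".

```python
def process_frames(frames, dnn, cluster, dnn_period=3):
    """Run the heavy DNN every `dnn_period` frames and run cheap depth
    clustering on the last DNN ROI for every frame in between.

    dnn(frame) -> roi; cluster(frame, roi) -> tracked objects.
    Returns a log of which step ran on which frame index.
    """
    roi, log = None, []
    for i, frame in enumerate(frames):
        if i % dnn_period == 0:
            roi = dnn(frame)            # slow path, first rate
            log.append(("dnn", i))
        else:
            cluster(frame, roi)         # fast path, second rate
            log.append(("cluster", i))  # reuses the cached ROI
    return log

log = process_frames(range(7), lambda f: "roi", lambda f, r: [])
print(log)
```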
-
Publication number: 20220044033
Abstract: Techniques for training multiple resolution deep neural networks (DNNs) for vehicle autonomous driving comprise obtaining a training dataset for training a plurality of DNNs for an autonomous driving feature of the vehicle; sub-sampling the training dataset to obtain a plurality of training datasets comprising the training dataset and one or more sub-sampled datasets each having a different resolution than a remainder of the plurality of training datasets; training the plurality of DNNs using the plurality of training datasets, respectively; receiving input data for the autonomous driving feature captured by a sensor device; determining a plurality of outputs for the autonomous driving feature using the plurality of trained DNNs and the input data; and determining a best output for the autonomous driving feature using the plurality of outputs.
Type: Application
Filed: August 5, 2020
Publication date: February 10, 2022
Inventors: Dalong Li, Stephen Horton, Neil R Garbacik
-
Publication number: 20210133947
Abstract: An autonomous driving technique comprises determining an image quality metric for each image frame of a series of image frames of a scene outside of a vehicle captured by a camera system and determining an image quality threshold based on the image quality metrics for the series of image frames. The technique then determines whether the image quality metric for a current image frame satisfies the image quality threshold. When the image quality metric for the current image frame satisfies the image quality threshold, object detection is performed by at least utilizing a first deep neural network (DNN) with the current image frame. When the image quality metric for the current image frame fails to satisfy the image quality threshold, object detection is performed by utilizing a second, different DNN with information captured by another sensor system, without utilizing the first DNN or the current image frame.
Type: Application
Filed: October 31, 2019
Publication date: May 6, 2021
Inventors: Dalong Li, Stephen Horton, Neil R Garbacik
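The gating logic can be sketched directly. The threshold rule used here (mean minus one standard deviation of recent frame metrics) is an assumed concrete choice, since the abstract only says the threshold is derived from the series of image quality metrics; the two return strings stand in for the two DNN paths.

```python
import statistics

def choose_path(metrics, current):
    """Gate between the camera DNN and a fallback sensor DNN.

    metrics -- image quality metrics for the recent series of frames
    current -- metric for the current frame
    """
    # Assumed threshold rule: a frame must not fall more than one
    # standard deviation below the recent mean quality.
    threshold = statistics.mean(metrics) - statistics.stdev(metrics)
    return "camera_dnn" if current >= threshold else "fallback_dnn"

history = [0.8, 0.82, 0.79, 0.81, 0.80]
print(choose_path(history, 0.81))  # healthy frame -> camera path
print(choose_path(history, 0.30))  # degraded frame -> other sensor
```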
-
Patent number: 10983215
Abstract: An advanced driver assistance system (ADAS) and method for a vehicle utilize a light detection and ranging (LIDAR) system configured to emit laser light pulses and capture reflected laser light pulses collectively forming three-dimensional (3D) LIDAR point cloud data, and a controller configured to receive the 3D LIDAR point cloud data; convert the 3D LIDAR point cloud data to a two-dimensional (2D) birdview projection; obtain a template image for object detection, the template image being representative of a specific object; blur the 2D birdview projection and the template image to obtain a blurred 2D birdview projection and a blurred template image; and detect the specific object by matching a portion of the blurred 2D birdview projection to the blurred template image.
Type: Grant
Filed: December 19, 2018
Date of Patent: April 20, 2021
Assignee: FCA US LLC
Inventors: Dalong Li, Alex M Smith, Stephen Horton
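Blurred template matching can be sketched on small grids. This is a toy version: a 3x3 box blur and a sum-of-squared-differences search are assumed concrete choices, since the abstract specifies only that both images are blurred and then matched.

```python
def box_blur(img):
    """3x3 box blur; blurring both images makes the match tolerant of
    small point-cloud sparsity differences."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[a][b]
                    for a in range(max(0, i - 1), min(h, i + 2))
                    for b in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out

def match(birdview, template):
    """Slide the blurred template over the blurred birdview and return
    the top-left offset with the smallest sum of squared differences."""
    bv, tp = box_blur(birdview), box_blur(template)
    th, tw = len(tp), len(tp[0])
    best, best_pos = float("inf"), None
    for i in range(len(bv) - th + 1):
        for j in range(len(bv[0]) - tw + 1):
            ssd = sum((bv[i + a][j + b] - tp[a][b]) ** 2
                      for a in range(th) for b in range(tw))
            if ssd < best:
                best, best_pos = ssd, (i, j)
    return best_pos

birdview = [[0, 0, 0, 0],
            [0, 0, 1, 1],
            [0, 0, 1, 1],
            [0, 0, 0, 0]]
template = [[1, 1],
            [1, 1]]
print(match(birdview, template))  # object found at offset (1, 2)
```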
-
Patent number: 10937186
Abstract: Advanced driver assistance (ADAS) systems and methods for a vehicle comprise capturing an image using a monocular camera system of the vehicle; detecting a landmark in the image using a deep neural network (DNN) trained with labeled training data, including generating a bounding box for the detected landmark; predicting a depth of pixels in the image using a convolutional neural network (CNN) trained with unlabeled training data captured by a stereo camera system; filtering noise by averaging predicted pixel depths for pixels in the region of the bounding box to obtain an average depth value for the detected landmark; determining a coordinate position of the detected landmark using its average depth value; and performing at least one ADAS feature using the determined coordinate position of the detected landmark.
Type: Grant
Filed: December 19, 2018
Date of Patent: March 2, 2021
Assignee: FCA US LLC
Inventors: Zijian Wang, Stephen Horton
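The depth-averaging and coordinate steps can be sketched given a predicted depth map. Back-projecting the box center through a pinhole camera model is an assumed way to turn the average depth into a coordinate position; the intrinsics and depth values below are illustrative.

```python
def landmark_position(depth_map, bbox, fx, fy, cx, cy):
    """Average the CNN-predicted depths inside the bounding box and
    back-project the box center through a pinhole camera model.

    bbox = (x0, y0, x1, y1) in pixels; fx, fy, cx, cy are camera
    intrinsics (assumed known from calibration).
    """
    x0, y0, x1, y1 = bbox
    # Averaging filters per-pixel noise in the predicted depths.
    depths = [depth_map[v][u] for v in range(y0, y1) for u in range(x0, x1)]
    z = sum(depths) / len(depths)
    u, v = (x0 + x1) / 2, (y0 + y1) / 2   # box center in pixels
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)

# 4x4 depth map with a noisy landmark region around 10 m.
depth = [[9.8, 10.1, 0, 0],
         [10.2, 9.9, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
print(landmark_position(depth, (0, 0, 2, 2), fx=100, fy=100, cx=2, cy=2))
```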
-
Publication number: 20200349363
Abstract: An advanced driver assistance system (ADAS) of a vehicle and associated method is disclosed. A first set of sensed lane measurements from a first imaging device and a second set of sensed lane measurements from a second imaging device are obtained. Each of the first and second sets of sensed lane measurements includes a lane estimate for the lane lines on a roadway. Each lane estimate is associated with one lane line. For each lane line, the associated lane estimates from the first and second sets of sensed lane measurements are fused to obtain a fused lane estimate, from which a representative model lane estimate is determined. For each of the plurality of lane lines, the associated lane estimates from the first and second sets of sensed lane measurements and the representative model lane estimate are fused to obtain a corrected fused lane estimate, which is output.
Type: Application
Filed: May 2, 2019
Publication date: November 5, 2020
Inventors: Miguel Hurtado, Stephen Horton
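The two-pass fusion can be sketched with lane lines as polynomial coefficients. Several details here are assumptions: lanes are represented as [offset, heading, curvature] triples, fusion is an unweighted average, and the "representative model lane estimate" keeps each lane's own offset while sharing the road-wide average heading and curvature.

```python
def fuse(estimates, weights=None):
    """Weighted average of lane-line polynomial coefficients."""
    weights = weights or [1.0] * len(estimates)
    total = sum(weights)
    return [sum(w * e[i] for w, e in zip(weights, estimates)) / total
            for i in range(len(estimates[0]))]

def corrected_lanes(cam_a, cam_b):
    """Two-pass fusion sketch.

    cam_a / cam_b hold one [offset, heading, curvature] polynomial per
    lane line, one list per imaging device.
    """
    # Pass 1: fuse the two devices' estimates per lane line.
    fused = [fuse([a, b]) for a, b in zip(cam_a, cam_b)]
    # Representative model: lanes on one road share heading/curvature.
    avg = fuse(fused)
    models = [[lane[0], avg[1], avg[2]] for lane in fused]
    # Pass 2: fold the model back in with both sensor estimates.
    return [fuse([a, b, m]) for a, b, m in zip(cam_a, cam_b, models)]

left_a, left_b = [1.8, 0.01, 0.001], [1.6, 0.03, 0.001]
right_a, right_b = [-1.7, 0.02, 0.001], [-1.7, 0.02, 0.001]
print(corrected_lanes([left_a, right_a], [left_b, right_b]))
```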
-
Publication number: 20200202548
Abstract: Advanced driver assistance (ADAS) systems and methods for a vehicle comprise capturing an image using a monocular camera system of the vehicle; detecting a landmark in the image using a deep neural network (DNN) trained with labeled training data, including generating a bounding box for the detected landmark; predicting a depth of pixels in the image using a convolutional neural network (CNN) trained with unlabeled training data captured by a stereo camera system; filtering noise by averaging predicted pixel depths for pixels in the region of the bounding box to obtain an average depth value for the detected landmark; determining a coordinate position of the detected landmark using its average depth value; and performing at least one ADAS feature using the determined coordinate position of the detected landmark.
Type: Application
Filed: December 19, 2018
Publication date: June 25, 2020
Inventors: Zijian Wang, Stephen Horton
-
Publication number: 20200200907
Abstract: A semi-automatic three-dimensional light detection and ranging (LIDAR) point cloud data annotation system and method for autonomous driving of a vehicle involve filtering 3D LIDAR point cloud data and normalizing the filtered 3D LIDAR point cloud data relative to the vehicle to obtain normalized 3D LIDAR point cloud data; quantizing the normalized 3D LIDAR point cloud data by dividing it into a set of 3D voxels; projecting the set of 3D voxels to a 2D birdview; identifying a possible object by applying clustering to the 2D birdview projection; obtaining an annotated 2D birdview projection including annotations by a human annotator via the annotation system regarding whether a bounding box for the possible object corresponds to a confirmed object and a type of the confirmed object; and converting the annotated 2D birdview projection back into annotated 3D LIDAR point cloud data.
Type: Application
Filed: December 19, 2018
Publication date: June 25, 2020
Inventors: Dalong Li, Alex Smith, Stephen Horton
-
Publication number: 20200200906
Abstract: An advanced driver assistance system (ADAS) and method for a vehicle utilize a light detection and ranging (LIDAR) system configured to emit laser light pulses and capture reflected laser light pulses collectively forming three-dimensional (3D) LIDAR point cloud data, and a controller configured to receive the 3D LIDAR point cloud data; convert the 3D LIDAR point cloud data to a two-dimensional (2D) birdview projection; obtain a template image for object detection, the template image being representative of a specific object; blur the 2D birdview projection and the template image to obtain a blurred 2D birdview projection and a blurred template image; and detect the specific object by matching a portion of the blurred 2D birdview projection to the blurred template image.
Type: Application
Filed: December 19, 2018
Publication date: June 25, 2020
Inventors: Dalong Li, Alex M Smith, Stephen Horton
-
Patent number: 8596541
Abstract: This disclosure describes barcode scanning techniques for an image capture device. The image capture device may automatically detect a barcode within an image while the image capture device is operating in a non-barcode image capture mode, such as a default image capture mode. In one aspect, the detection of the barcode within the image may be based on a combination of identified edges and low intensity regions within the image. The image capture device may configure, based on the detection of the barcode, one or more image capture properties associated with the image capture device to improve the quality at which images are captured. The image capture device then captures the image in accordance with the configured image capture properties. The techniques may effectively provide a universal, integrated front end for producing improved-quality images of barcodes without requiring significant interaction with a user via a complicated user interface.
Type: Grant
Filed: February 22, 2008
Date of Patent: December 3, 2013
Assignee: QUALCOMM Incorporated
Inventors: Chinchuan Andrew Chiu, Jingqiang Li, Hau Hwang, Hsiang-Tsun Li, Stephen Horton, Xiaoyun Jiang, Joseph Cheung
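The edge-plus-low-intensity cue can be sketched as a 1-D heuristic over a single scanline. The specific thresholds and the per-row formulation are illustrative assumptions; the abstract says only that detection may combine identified edges and low intensity regions.

```python
def looks_like_barcode(row, edge_thresh=100, min_edges=6, dark=64):
    """Heuristic check combining the two cues from the abstract:
    many sharp intensity edges plus the presence of low intensity
    (dark bar) regions along one scanline of pixel values (0-255).
    """
    # Count adjacent-pixel transitions sharp enough to be bar edges.
    edges = sum(1 for a, b in zip(row, row[1:]) if abs(a - b) > edge_thresh)
    # Barcodes contain near-black bars; smooth scenes usually do not.
    has_dark = any(v < dark for v in row)
    return edges >= min_edges and has_dark

barcode_row = [250, 10, 250, 10, 250, 10, 250, 10]   # alternating bars
sky_row = [200, 202, 201, 199, 200, 201, 200, 202]   # smooth scene
print(looks_like_barcode(barcode_row), looks_like_barcode(sky_row))
```

A positive result would then trigger reconfiguring capture properties (focus, exposure) before the frame is captured.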
-
Publication number: 20090239984
Abstract: A thermoplastic alloy is disclosed comprising poly(vinyl halide) and an olefin-based uncrosslinked elastomer having thermoplastic properties. The alloy can be made into a polymeric skin using slush molding techniques.
Type: Application
Filed: December 21, 2006
Publication date: September 24, 2009
Applicant: POLYONE CORPORATION
Inventors: Stephen Horton, Brent Cassata
-
Publication number: 20090212113
Abstract: This disclosure describes barcode scanning techniques for an image capture device. The image capture device may automatically detect a barcode within an image while the image capture device is operating in a non-barcode image capture mode, such as a default image capture mode. In one aspect, the detection of the barcode within the image may be based on a combination of identified edges and low intensity regions within the image. The image capture device may configure, based on the detection of the barcode, one or more image capture properties associated with the image capture device to improve the quality at which images are captured. The image capture device then captures the image in accordance with the configured image capture properties. The techniques may effectively provide a universal, integrated front end for producing improved-quality images of barcodes without requiring significant interaction with a user via a complicated user interface.
Type: Application
Filed: February 22, 2008
Publication date: August 27, 2009
Applicant: QUALCOMM INCORPORATED
Inventors: Chinchuan Andrew Chiu, Jingqiang Li, Hau Hwang, Hsiang-Tsun Li, Stephen Horton, Xiaoyun Jiang, Joseph Cheung
-
Publication number: 20080275179
Abstract: A low profile connector assembly comprises at least one contact having a surface mount portion and a wire engagement portion extending from the surface mount portion, and a housing insertable over the at least one contact and retained to the at least one contact. The housing encloses the wire engagement portion and has a wire receiving aperture therethrough. The wire receiving aperture provides access to the wire engagement portion when the housing is retained to the contact.
Type: Application
Filed: February 21, 2006
Publication date: November 6, 2008
Applicant: POLYONE CORPORATION
Inventors: Stephen Horton, Brent Cassata
-
Publication number: 20080004075
Abstract: Methods and apparatus for enabling output from a mobile device are described herein. A mobile device having an image capture device can selectively generate a hard copy output of a captured image by interfacing with an output device. The mobile device can selectively interface with the output device directly or indirectly via one or more intermediate devices and/or systems. The mobile device can interface directly with an output device using a wired or wireless connection, and can selectively operate as a host or client. The mobile device can selectively couple the stored image to a remote output device via a wireless connection. The mobile device can select the remote output device from a predetermined list of devices, or can be supplied a dynamic list of remote output devices. The dynamic list of output devices can be updated, for example, based on a location of the mobile device.
Type: Application
Filed: June 8, 2007
Publication date: January 3, 2008
Inventor: Stephen Horton
-
Publication number: 20070289879
Abstract: A cathodic protection coating is disclosed that unexpectedly protects a treated metal substrate, notwithstanding the presence of a phosphate-containing conversion coating between the metal substrate and the cathodic protection compound. The cathodic protection coating includes sacrificial metallic particles less noble than the metal in the substrate to be protected. The coating also includes an inherently conductive polymer.
Type: Application
Filed: September 22, 2005
Publication date: December 20, 2007
Applicant: POLYONE CORPORATION
Inventor: Stephen Horton