Patents by Inventor Forrest Samuel Briggs
Forrest Samuel Briggs has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12246656
Abstract: In one embodiment, a method includes receiving sets of measurement parameters associated with a sensed object and respectively captured by sensors. The method includes determining a relative orientation between the sensors based on a comparison of the received sets of measurement parameters. The method includes determining one or more calibration factors based on comparing the determined relative orientation between the sensors to an expected relative orientation between the sensors. The method includes applying at least one of the determined calibration factors to one or more of the measurement parameters captured by the sensors.
Type: Grant
Filed: January 5, 2024
Date of Patent: March 11, 2025
Assignee: Lyft, Inc.
Inventor: Forrest Samuel Briggs
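The abstract above can be illustrated with a minimal sketch in which each sensor reports a yaw angle to the same sensed object; the observed relative orientation is compared to the expected one to produce an angular correction. All names and numbers below are hypothetical illustrations, not taken from the patent:

```python
import math

def relative_yaw(yaw_a: float, yaw_b: float) -> float:
    """Observed relative orientation (radians) between two sensors,
    wrapped to (-pi, pi]."""
    d = yaw_b - yaw_a
    return math.atan2(math.sin(d), math.cos(d))

def calibration_factor(observed_rel: float, expected_rel: float) -> float:
    """Angular correction that brings the observed relative orientation
    into agreement with the expected one."""
    return expected_rel - observed_rel

# Two sensors each measured the bearing to the same sensed object.
yaw_cam, yaw_lidar = 0.10, 0.17                 # radians (made up)
observed = relative_yaw(yaw_cam, yaw_lidar)     # 0.07
correction = calibration_factor(observed, expected_rel=0.05)
calibrated_lidar_yaw = yaw_lidar + correction   # apply the factor
```

The real claims cover full 3D orientations and multiple calibration factors; the 1D yaw case is only the simplest instance of the comparison described.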
-
Publication number: 20240140331
Abstract: In one embodiment, a method includes detecting multiple reference markers on a vehicle and multiple reference markers on a sensor array attached to the vehicle. The method includes determining a pose of the vehicle and a pose of the sensor array based on the detected reference markers on the vehicle and sensor array and a model of the vehicle and sensor array that includes expected locations of the detected reference markers on the vehicle and sensor array. The method includes computing an observed relative orientation between the sensor array and the vehicle based on a comparison of the determined pose of the sensor array and vehicle. The method includes determining a calibration factor for a sensor of the sensor array based on a comparison of the observed relative orientation between the sensor array and the vehicle to an expected relative orientation between the sensor array and the vehicle.
Type: Application
Filed: January 5, 2024
Publication date: May 2, 2024
Inventor: Forrest Samuel Briggs
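The pose-from-markers step described above can be sketched as a least-squares rigid alignment between modeled and detected marker locations. The sketch below is a 2D simplification (the patent covers full 3D poses), and all marker coordinates are hypothetical:

```python
import math

def fit_pose_2d(model_pts, observed_pts):
    """Least-squares rigid 2D pose (theta, tx, ty) mapping modeled marker
    locations onto their detected locations (Kabsch-style fit)."""
    n = len(model_pts)
    mcx = sum(x for x, _ in model_pts) / n
    mcy = sum(y for _, y in model_pts) / n
    ocx = sum(x for x, _ in observed_pts) / n
    ocy = sum(y for _, y in observed_pts) / n
    s_cos = s_sin = 0.0
    for (mx, my), (ox, oy) in zip(model_pts, observed_pts):
        mx, my, ox, oy = mx - mcx, my - mcy, ox - ocx, oy - ocy
        s_cos += mx * ox + my * oy
        s_sin += mx * oy - my * ox
    theta = math.atan2(s_sin, s_cos)
    tx = ocx - (mcx * math.cos(theta) - mcy * math.sin(theta))
    ty = ocy - (mcx * math.sin(theta) + mcy * math.cos(theta))
    return theta, tx, ty

# Markers on the sensor array: modeled locations vs. detected locations
# (here the detected array is rotated 90 degrees and shifted by (2, 0)).
model_pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
detected  = [(2, 0), (2, 1), (1, 1), (1, 0)]
theta, tx, ty = fit_pose_2d(model_pts, detected)
```

In the method as described, this fit would be run once for the vehicle's markers and once for the sensor array's markers; the observed relative orientation is the difference between the two fitted rotations.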
-
Patent number: 11878632
Abstract: In one embodiment, a method includes receiving a first set of positional parameters for movement of a vehicle through an environment, captured by a first sensor associated with the vehicle. The method includes receiving a second set of positional parameters for movement of the vehicle through the environment, captured by a second sensor associated with the vehicle. The method includes calculating an angular offset between the first set of positional parameters and the second set of positional parameters based on comparing the first set of positional parameters to the second set of positional parameters. The method includes determining a calibration factor based on the calculated angular offset. The method includes calibrating at least one of the first sensor or the second sensor by using the calibration factor.
Type: Grant
Filed: July 27, 2020
Date of Patent: January 23, 2024
Assignee: Lyft, Inc.
Inventor: Forrest Samuel Briggs
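One way to picture the angular-offset comparison above: each sensor yields a track of positions as the vehicle moves, and the offset is the difference between the mean headings of the two tracks. This sketch, with made-up track data, is an illustration of that idea rather than the patented procedure:

```python
import math

def mean_heading(positions):
    """Mean direction of travel (radians) over a sequence of (x, y) fixes."""
    dx = dy = 0.0
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        dx += x1 - x0
        dy += y1 - y0
    return math.atan2(dy, dx)

# Hypothetical tracks of the same drive from two sensors.
track_a = [(0, 0), (1, 0), (2, 0), (3, 0)]             # heading 0
track_b = [(0, 0), (1, 0.02), (2, 0.04), (3, 0.06)]    # slight angular bias

angular_offset = mean_heading(track_b) - mean_heading(track_a)
calibration = -angular_offset   # rotate sensor B's frame back by the offset
```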
-
Publication number: 20230068113
Abstract: In one embodiment, a facility for calibrating sensors of an autonomous vehicle (AV) includes a camera calibration target configured to be measured by and used for calibrating an optical camera of the AV; a light detection and ranging (LiDAR) calibration target configured to be measured by and used for calibrating a LiDAR transceiver of the AV; and a platform configured to allow the AV to drive onto and park on the platform. The camera calibration target and the LiDAR calibration target are positioned to be detectable by the optical camera and the LiDAR transceiver while the AV is parked on the platform. The platform is further configured to modify a lateral position, height, or orientation of the optical camera and the LiDAR transceiver relative to the camera calibration target and the LiDAR calibration target while the AV is parked on the platform.
Type: Application
Filed: August 30, 2022
Publication date: March 2, 2023
Inventors: Farzad Cyrus Foroughi Abari, Forrest Samuel Briggs, Alexander Thomas Starns
-
Patent number: 11570416
Abstract: In one embodiment, a method includes accessing first image data generated by a first image sensor having a first filter array that has a first filter pattern. The first filter pattern includes a number of first filter types. The method also includes accessing second image data generated by a second image sensor having a second filter array that has a second filter pattern different from the first filter pattern. The second filter pattern includes a number of second filter types; the second filter types and the first filter types have at least one filter type in common. The method also includes determining a correspondence between one or more first pixels of the first image data and one or more second pixels of the second image data based on a portion of the first image data associated with the filter type in common.
Type: Grant
Filed: May 18, 2020
Date of Patent: January 31, 2023
Assignee: WOVEN PLANET NORTH AMERICA, INC.
Inventors: Forrest Samuel Briggs, Romain Clément, Yi Zhou
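A minimal sketch of the shared-filter-type idea: two different repeating 2x2 filter patterns that both contain an "R" filter, extraction of the pixels behind that shared filter from each image, and a crude sum-of-absolute-differences shift search as a stand-in for the correspondence step. The patterns and matching rule here are hypothetical examples, not the patterns or correspondence method claimed in the patent:

```python
# An RGGB pattern and a different pattern that shares the "R" filter type.
PATTERN_A = [["R", "G"], ["G", "B"]]
PATTERN_B = [["R", "C"], ["C", "B"]]

def pixels_of_type(image, pattern, ftype):
    """(row, col, value) of pixels whose filter in the repeating 2x2
    pattern matches ftype."""
    return [(r, c, image[r][c])
            for r in range(len(image))
            for c in range(len(image[0]))
            if pattern[r % 2][c % 2] == ftype]

def best_shift(ref_vals, tgt_vals, max_shift):
    """Integer shift of tgt_vals that minimizes the mean absolute
    difference against ref_vals -- a toy stand-in for correspondence."""
    costs = []
    for d in range(max_shift + 1):
        pairs = list(zip(ref_vals, tgt_vals[d:]))
        costs.append((sum(abs(a - b) for a, b in pairs) / len(pairs), d))
    return min(costs)[1]
```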
-
Patent number: 11514371
Abstract: In one embodiment, a computing system may receive an uncompressed image from a camera. The computing system may generate a compressed image by performing a compression process on the uncompressed image, wherein a decompressed image may be generated as a byproduct during the compression process. The computing system may send the decompressed image to a machine-learning model that was trained using decompressed images. The computing system may generate, by the machine-learning model, an output based on the decompressed image. The computing system may provide operational instructions to a vehicle based on the output.
Type: Grant
Filed: March 11, 2019
Date of Patent: November 29, 2022
Assignee: Woven Planet North America, Inc.
Inventors: Forrest Samuel Briggs, James Allen-White Hoffacker, Dhruv Lamba, Yi Lu, Phillip Sawbridge
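The "decompressed image as a byproduct" idea can be shown with a toy lossy codec: quantizing pixel values is the compression step, and reconstructing from the quantized values (which a real encoder must do anyway, e.g. for rate control) yields the decompressed image in the same pass. This quantizer is a deliberately simplified stand-in for whatever codec the patent contemplates:

```python
def compress(image, step=16):
    """Toy lossy compression: quantize pixel values by `step`.
    The decompressed image is produced as a byproduct of encoding."""
    quantized = [[v // step for v in row] for row in image]        # stored form
    decompressed = [[q * step for q in row] for row in quantized]  # byproduct
    return quantized, decompressed
```

The point of feeding the byproduct (rather than the raw camera image) to the model is that a model trained on decompressed images then sees the same distribution at inference time.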
-
Patent number: 11435456
Abstract: In one embodiment, a facility for calibrating sensors of an autonomous vehicle (AV) includes a camera calibration target configured to be measured by and used for calibrating an optical camera of the AV; a light detection and ranging (LiDAR) calibration target configured to be measured by and used for calibrating a LiDAR transceiver of the AV; and a platform configured to allow the AV to drive onto and park on the platform. The camera calibration target and the LiDAR calibration target are positioned to be detectable by the optical camera and the LiDAR transceiver while the AV is parked on the platform. The platform is further configured to modify a lateral position, height, or orientation of the optical camera and the LiDAR transceiver relative to the camera calibration target and the LiDAR calibration target while the AV is parked on the platform.
Type: Grant
Filed: December 28, 2017
Date of Patent: September 6, 2022
Assignee: Lyft, Inc.
Inventors: Farzad Cyrus Foroughi Abari, Forrest Samuel Briggs, Alexander Thomas Starns
-
Patent number: 11415683
Abstract: In one embodiment, a method includes receiving sensor data from one or more sensors of an autonomous vehicle (AV) and determining, based on the sensor data, that a first sensor of the one or more sensors needs recalibration; the first sensor is of a first sensor type. The method also includes sending a request to a remote management system indicating that one or more of the sensors of the AV need recalibration and indicating a location of the AV; determining the presence of a service vehicle having a calibration target configured to calibrate sensors of the first sensor type; and initiating a calibration routine using the calibration target.
Type: Grant
Filed: December 28, 2017
Date of Patent: August 16, 2022
Assignee: Lyft, Inc.
Inventors: Farzad Cyrus Foroughi Abari, Forrest Samuel Briggs, Alexander Thomas Starns
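The request/dispatch flow above can be sketched as three small steps: flag the drifting sensor, build a request carrying the sensor type and AV location, and filter the service fleet for vehicles carrying a matching calibration target. Every name, field, and threshold below is a hypothetical illustration:

```python
def needs_recalibration(expected, measured, tolerance=0.05):
    """Flag a sensor whose reading drifts past a tolerance."""
    return abs(expected - measured) > tolerance

def make_request(av_id, sensor_type, location):
    """Recalibration request sent to the remote management system."""
    return {"av": av_id, "sensor_type": sensor_type, "location": location}

def matching_service_vehicles(vehicles, request):
    """Service vehicles carrying a calibration target for the failing
    sensor's type."""
    return [v for v in vehicles if request["sensor_type"] in v["targets"]]
```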
-
Patent number: 11240404
Abstract: In one embodiment, a method includes receiving, by a computing device of a first sensor, synchronization information from a controller. The synchronization information is generated based on a clock of the controller. The method also includes determining, based on the synchronization information, a first offset between a first clock of the first sensor and the clock of the controller; storing the first offset; and synchronizing, based on the stored first offset and the first clock of the first sensor, a first data capture by the first sensor with a second data capture by a second sensor. The first data capture and the second data capture are requested by the controller.
Type: Grant
Filed: March 11, 2021
Date of Patent: February 1, 2022
Assignee: Woven Planet North America, Inc.
Inventors: Corey Frederick Bangs, Forrest Samuel Briggs, George James Hansel, James Allen-White Hoffacker, Dhruv Lamba, Chi Hoon Lee, Yi Lu, Brian Thomas McGinn, Phillip Sawbridge
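The offset-based synchronization in this abstract can be sketched numerically: each sensor estimates its local clock's offset from the controller clock, stores it, and applies it to translate a controller-requested capture time into its own clock domain, so both sensors fire together. The one-way, zero-transit-delay estimate and all timestamps below are simplifying assumptions for illustration:

```python
def clock_offset(controller_stamp, local_receive_time):
    """Sensor-clock minus controller-clock offset, estimated from a
    controller timestamp observed at the sensor (one-way sync; transit
    delay assumed negligible for this sketch)."""
    return local_receive_time - controller_stamp

def local_fire_time(controller_capture_time, offset):
    """When the sensor's own clock should trigger so its capture lands
    at the requested controller time."""
    return controller_capture_time + offset

# Controller stamps 1000.0 s; sensor A sees it at local 1003.2 s,
# sensor B at local 998.5 s.
off_a = clock_offset(1000.0, 1003.2)   # sensor A runs 3.2 s ahead
off_b = clock_offset(1000.0, 998.5)    # sensor B runs 1.5 s behind
# A capture requested at controller time 1010.0 fires simultaneously:
fire_a = local_fire_time(1010.0, off_a)
fire_b = local_fire_time(1010.0, off_b)
```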
-
Patent number: 11115646
Abstract: In particular embodiments, a computing system may detect a number of objects captured within an overlapping region between a first field of view associated with a first camera and a second field of view associated with a second camera. The system may determine a respective priority ranking for each of the objects. The system may select an object from the objects based on the respective priority ranking for the object. The system may determine, for the first camera, a first lighting condition associated with the first field of view. The system may determine, for the second camera, a second lighting condition associated with the second field of view. The system may determine a shared exposure time for the selected object based on the first lighting condition and the second lighting condition. The system may cause at least one image of the selected object to be captured using the shared exposure time.
Type: Grant
Filed: October 5, 2020
Date of Patent: September 7, 2021
Assignee: Woven Planet North America, Inc.
Inventors: Forrest Samuel Briggs, James Allen-White Hoffacker, Dhruv Lamba, Phillip Sawbridge
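The selection-then-compromise flow above can be sketched as: rank the objects in the overlapping region, pick the top one, and derive one exposure both cameras can use from the two lighting conditions. The inverse-luminance exposure model and the geometric-mean compromise rule are illustrative choices, not the rule the patent claims:

```python
def pick_object(objects):
    """Highest-priority object among those in the overlapping region."""
    return max(objects, key=lambda o: o["priority"])

def exposure_for(luminance, k=1000.0):
    """Toy model: exposure time inversely proportional to luminance."""
    return k / luminance

def shared_exposure(lum_a, lum_b):
    """A single exposure usable by both cameras: the geometric mean of
    the two per-camera exposures (an illustrative compromise)."""
    return (exposure_for(lum_a) * exposure_for(lum_b)) ** 0.5
```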
-
Publication number: 20210274064
Abstract: In one embodiment, a method includes receiving, by a computing device of a first sensor, synchronization information from a controller. The synchronization information is generated based on a clock of the controller. The method also includes determining, based on the synchronization information, a first offset between a first clock of the first sensor and the clock of the controller; storing the first offset; and synchronizing, based on the stored first offset and the first clock of the first sensor, a first data capture by the first sensor with a second data capture by a second sensor. The first data capture and the second data capture are requested by the controller.
Type: Application
Filed: March 11, 2021
Publication date: September 2, 2021
Inventors: Corey Frederick Bangs, Forrest Samuel Briggs, George James Hansel, James Allen-White Hoffacker, Dhruv Lamba, Chi Hoon Lee, Yi Lu, Brian Thomas McGinn, Phillip Sawbridge
-
Patent number: 11105905
Abstract: A method includes capturing, by a plurality of image sensors on an automotive vehicle, image data associated with one or more calibration objects in an environment, and capturing, by a LiDAR sensor, a three-dimensional LiDAR point cloud based on LiDAR data. The method further comprises generating a three-dimensional image point cloud based on the image data and the three-dimensional LiDAR point cloud, mapping a first alignment plane of the three-dimensional image point cloud relative to a second alignment plane of the three-dimensional LiDAR point cloud for each of the calibration objects to determine an angle between the first alignment plane and second alignment plane, and calibrating the LiDAR sensor relative to the image sensors by determining a degree of rotation of the LiDAR sensor to minimize the angle between the first alignment plane and second alignment plane.
Type: Grant
Filed: November 30, 2018
Date of Patent: August 31, 2021
Assignee: Lyft, Inc.
Inventors: Forrest Samuel Briggs, Lei Zhang
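The angle-minimization step described above can be sketched with plane normals: measure the angle between the camera-derived plane normal and the LiDAR-derived plane normal, then search for the rotation that minimizes it. The sketch restricts the search to yaw and uses a brute-force grid, which is a simplification of whatever optimization the patent uses; all vectors are hypothetical:

```python
import math

def angle_between(n1, n2):
    """Angle (radians) between two plane normal vectors in 3D."""
    dot = sum(a * b for a, b in zip(n1, n2))
    mag = math.sqrt(sum(a * a for a in n1)) * math.sqrt(sum(b * b for b in n2))
    return math.acos(max(-1.0, min(1.0, dot / mag)))

def rot_z(v, t):
    """Rotate a 3D vector about the z axis by t radians."""
    x, y, z = v
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t), z)

def best_yaw(camera_normal, lidar_normal, steps=3600):
    """Grid-search the yaw that minimizes the angle between the camera
    plane normal and the rotated LiDAR plane normal."""
    i_best = min(range(steps),
                 key=lambda i: angle_between(
                     camera_normal, rot_z(lidar_normal, 2 * math.pi * i / steps)))
    return 2 * math.pi * i_best / steps
```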
-
Patent number: 10972637
Abstract: In one embodiment, a method includes receiving, by a computing device of a first sensor, synchronization information from a controller. The synchronization information is generated based on a clock of the controller. The method also includes determining, based on the synchronization information, a first offset between a first clock of the first sensor and the clock of the controller; storing the first offset; and synchronizing, based on the stored first offset and the first clock of the first sensor, a first data capture by the first sensor with a second data capture by a second sensor. The first data capture and the second data capture are requested by the controller.
Type: Grant
Filed: March 11, 2019
Date of Patent: April 6, 2021
Assignee: Lyft, Inc.
Inventors: Corey Frederick Bangs, Forrest Samuel Briggs, George James Hansel, James Allen-White Hoffacker, Dhruv Lamba, Chi Hoon Lee, Yi Lu, Brian Thomas McGinn, Phillip Sawbridge
-
Publication number: 20210092349
Abstract: In particular embodiments, a computing system may detect a number of objects captured within an overlapping region between a first field of view associated with a first camera and a second field of view associated with a second camera. The system may determine a respective priority ranking for each of the objects. The system may select an object from the objects based on the respective priority ranking for the object. The system may determine, for the first camera, a first lighting condition associated with the first field of view. The system may determine, for the second camera, a second lighting condition associated with the second field of view. The system may determine a shared exposure time for the selected object based on the first lighting condition and the second lighting condition. The system may cause at least one image of the selected object to be captured using the shared exposure time.
Type: Application
Filed: October 5, 2020
Publication date: March 25, 2021
Inventors: Forrest Samuel Briggs, James Allen-White Hoffacker, Dhruv Lamba, Phillip Sawbridge
-
Publication number: 20200404224
Abstract: In one embodiment, a method includes accessing first image data generated by a first image sensor having a first filter array that has a first filter pattern. The first filter pattern includes a first filter type corresponding to a spectrum of interest and a second filter type. The method also includes accessing second image data generated by a second image sensor having a second filter array that has a second filter pattern different from the first filter pattern. The second filter pattern includes a number of second filter types; the second filter types and the first filter types have at least one filter type in common. The method also includes determining a correspondence between one or more first pixels of the first image data and one or more second pixels of the second image data.
Type: Application
Filed: July 6, 2020
Publication date: December 24, 2020
Inventors: Forrest Samuel Briggs, Romain Clément, Yi Zhou
-
Publication number: 20200353878
Abstract: In one embodiment, a method includes receiving a first set of positional parameters for movement of a vehicle through an environment, captured by a first sensor associated with the vehicle. The method includes receiving a second set of positional parameters for movement of the vehicle through the environment, captured by a second sensor associated with the vehicle. The method includes calculating an angular offset between the first set of positional parameters and the second set of positional parameters based on comparing the first set of positional parameters to the second set of positional parameters. The method includes determining a calibration factor based on the calculated angular offset. The method includes calibrating at least one of the first sensor or the second sensor by using the calibration factor.
Type: Application
Filed: July 27, 2020
Publication date: November 12, 2020
Inventor: Forrest Samuel Briggs
-
Patent number: 10798368
Abstract: In one embodiment, a computing system may determine a first target region within a first field of view of a first camera and a second target region within a second field of view of a second camera. The first field of view and the second field of view may be partially overlapping. The system may determine first lighting conditions of the first target region. The system may determine a first exposure time for at least the first camera and the second camera based at least in part on the determined first lighting conditions. The system may instruct the first camera and the second camera to take pictures using the determined first exposure time.
Type: Grant
Filed: March 11, 2019
Date of Patent: October 6, 2020
Assignee: Lyft, Inc.
Inventors: Forrest Samuel Briggs, James Allen-White Hoffacker, Dhruv Lamba, Phillip Sawbridge
-
Publication number: 20200280707
Abstract: In one embodiment, a method includes accessing first image data generated by a first image sensor having a first filter array that has a first filter pattern. The first filter pattern includes a number of first filter types. The method also includes accessing second image data generated by a second image sensor having a second filter array that has a second filter pattern different from the first filter pattern. The second filter pattern includes a number of second filter types; the second filter types and the first filter types have at least one filter type in common. The method also includes determining a correspondence between one or more first pixels of the first image data and one or more second pixels of the second image data based on a portion of the first image data associated with the filter type in common.
Type: Application
Filed: May 18, 2020
Publication date: September 3, 2020
Inventors: Forrest Samuel Briggs, Romain Clément, Yi Zhou
-
Patent number: 10733786
Abstract: A user device can receive and display 360 panoramic content in a 360 depth format. 360 depth content can comprise 360 panoramic image data and corresponding depth information. To display 360 depth content, the user device can generate a 3D environment based on the 360 depth content and the current user viewpoint. A content display module on the user device can render 360 depth content using a standard 3D rendering pipeline modified to render 360 depth content. The content display module can use a vertex shader or fragment shader of the 3D rendering pipeline to interpret the depth information of the 360 depth content into the 3D environment as it is rendered.
Type: Grant
Filed: July 20, 2018
Date of Patent: August 4, 2020
Assignee: Facebook, Inc.
Inventor: Forrest Samuel Briggs
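The shader step described above (interpreting per-pixel depth into 3D geometry) can be sketched in plain Python: map an equirectangular pixel coordinate plus its depth sample to a 3D point around the viewer, which is the per-vertex computation a vertex or fragment shader would perform. The equirectangular layout and axis convention here are common choices, assumed for illustration rather than taken from the patent:

```python
import math

def equirect_to_3d(u, v, depth, width, height):
    """Map an equirectangular pixel (u, v) and its depth sample to a 3D
    point around the viewer (y up, z forward)."""
    lon = (u / width) * 2.0 * math.pi - math.pi     # -pi .. pi
    lat = math.pi / 2.0 - (v / height) * math.pi    # +pi/2 .. -pi/2
    x = depth * math.cos(lat) * math.sin(lon)
    y = depth * math.sin(lat)
    z = depth * math.cos(lat) * math.cos(lon)
    return x, y, z
```

Running this for every vertex of a sphere mesh, with depth sampled from the 360 depth channel, displaces the sphere into the 3D environment the abstract describes.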
-
Patent number: 10723281
Abstract: In one embodiment, a method includes detecting multiple reference markers on a vehicle and multiple reference markers on a sensor array attached to the vehicle. The method includes determining a pose of the vehicle and a pose of the sensor array based on the detected reference markers on the vehicle and sensor array and a model of the vehicle and sensor array that includes expected locations of the detected reference markers on the vehicle and sensor array. The method includes computing an observed relative orientation between the sensor array and the vehicle based on a comparison of the determined pose of the sensor array and vehicle. The method includes determining a calibration factor for a sensor of the sensor array based on a comparison of the observed relative orientation between the sensor array and the vehicle to an expected relative orientation between the sensor array and the vehicle.
Type: Grant
Filed: March 21, 2019
Date of Patent: July 28, 2020
Assignee: Lyft, Inc.
Inventor: Forrest Samuel Briggs