Patents by Inventor Sihao Ding
Sihao Ding has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11948327
Abstract: A method and system by which a bounding box disposed around a segmented object in a camera (or other perception sensor) 2D image can be used to produce an estimate for both the location of the object—its position relative to the position of the camera that obtained the image (i.e., translation)—and the angle of rotation of the surface that the object is located on. The method and system may be used by an advanced driver assistance system (ADAS), an autonomous driving (AD) system, or the like. The input includes a simple camera (or other perception sensor) 2D image, with the ego vehicle generating 2D or 3D bounding boxes for objects detected at the scene. The output includes, for each object, its estimated distance from the ego vehicle camera/perception sensor and the angle of rotation of the surface underneath the object relative to the surface underneath the ego vehicle.
Type: Grant
Filed: June 7, 2021
Date of Patent: April 2, 2024
Assignee: Volvo Car Corporation
Inventors: Roman Akulshin, Sihao Ding
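As an illustration of the geometry behind a bounding-box distance estimate, the sketch below uses a simple pinhole-camera similar-triangles relation. This is an assumption-laden toy, not the patented estimation method; the function name and parameters are hypothetical.

```python
def estimate_distance(focal_px: float, object_height_m: float, bbox_height_px: float) -> float:
    """Approximate object distance from a 2D bounding box height.

    Pinhole model: an object of known real-world height projects to a
    bounding box whose pixel height shrinks in inverse proportion to
    distance, so distance = focal_length * real_height / pixel_height.
    """
    if bbox_height_px <= 0:
        raise ValueError("bounding box height must be positive")
    return focal_px * object_height_m / bbox_height_px

# A 1.5 m tall object spanning 150 px under a 1000 px focal length sits about 10 m away.
```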
-
Patent number: 11921212
Abstract: A LIDAR-based method of determining an absolute speed of an object at a relatively longer distance from an ego vehicle, including: estimating a self speed of the ego vehicle using a first frame t-1 and a second frame t obtained from a LIDAR sensor by estimating an intervening rotation θ about a z axis and translation in orthogonal x and y directions using a deep learning algorithm over a relatively closer distance range; dividing each of the first frame t-1 and the second frame t into multiple adjacent input ranges and estimating a relative speed of the object at the relatively longer distance by subsequently processing each frame using a network, with each input range processed using a corresponding convolutional neural network; and combining the self speed estimate with the relative speed estimate to obtain an estimate of the absolute speed.
Type: Grant
Filed: November 17, 2022
Date of Patent: March 5, 2024
Assignee: Volvo Car Corporation
Inventors: Sihao Ding, Sohini Roy Chowdhury, Minming Zhao, Ying Li
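The final combination step, adding the ego-vehicle's own velocity to the object's velocity relative to the ego vehicle, can be sketched as a vector sum. This is a minimal sketch of that one step under the assumption of a shared planar frame, not the patented pipeline.

```python
def absolute_velocity(ego_velocity, relative_velocity):
    """Combine an ego-velocity estimate with an object's relative velocity.

    Both inputs are (vx, vy) tuples expressed in a common planar frame;
    the object's absolute velocity is the component-wise vector sum.
    """
    return tuple(e + r for e, r in zip(ego_velocity, relative_velocity))

# An ego vehicle moving at 10 m/s sees an object closing at 5 m/s ahead:
# the object's absolute forward speed is 15 m/s.
```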
-
Publication number: 20240025416
Abstract: A system and method, including: a data input device coupled to a vehicle, wherein the data input device is operable for gathering input data related to one or more of an environmental context surrounding or within the vehicle, an occupant state within the vehicle, a vehicle state, a location of the vehicle, and a noise condition inside or outside of the vehicle; an artificial intelligence system coupled to the data input device, wherein the artificial intelligence system is operable for generating a sound qualifier based on the gathered input data related to the one or more of the environmental context surrounding or within the vehicle, the occupant state within the vehicle, the vehicle state, and the noise condition inside or outside of the vehicle; and an audio generation module operable for receiving the sound qualifier from the artificial intelligence system and synthesizing a soundscape based on the sound qualifier.
Type: Application
Filed: July 21, 2022
Publication date: January 25, 2024
Inventors: Jon Seneger, Sihao Ding, Peter Winzell
-
Publication number: 20230400306
Abstract: One or more embodiments herein can provide a process to determine dead-reckoning localization of a vehicle. An exemplary system can comprise a memory that stores computer executable components, and a processor that executes the computer executable components stored in the memory, wherein the computer executable components can comprise an obtaining component that obtains plural sensor readings defining movement of a vehicle, wherein the plural sensor readings comprise an inertial sensor reading, a kinematics sensor reading, and an odometry sensor reading, and a generation component that generates, based on the plural sensor readings, a pose value defining a position of the vehicle relative to an environment in which the vehicle is disposed. A sensing sub-system of the exemplary system can comprise an inertial measurement unit sensor, a kinematics sensor, and an odometry sensor.
Type: Application
Filed: June 14, 2022
Publication date: December 14, 2023
Inventors: Xinkai Zhang, Sihao Ding, Harshavardhan Reddy Dasari
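Dead reckoning in the plane reduces to integrating speed and yaw rate into a pose update. The sketch below shows the textbook single-step update only; the described system fuses inertial, kinematics, and odometry readings, which this toy does not attempt.

```python
import math

def dead_reckon(pose, speed_mps, yaw_rate_rps, dt_s):
    """Advance a planar pose (x, y, heading) by one dead-reckoning step.

    speed and yaw rate would come from wheel odometry, vehicle kinematics,
    or an inertial measurement unit; here they are taken as given.
    """
    x, y, heading = pose
    # Integrate position along the current heading, then update the heading.
    x += speed_mps * math.cos(heading) * dt_s
    y += speed_mps * math.sin(heading) * dt_s
    return (x, y, heading + yaw_rate_rps * dt_s)
```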
-
Publication number: 20230394805
Abstract: One or more embodiments herein can enable identification of an obstacle free area about an object. An exemplary system can comprise a memory that stores computer executable components, and a processor that executes the computer executable components stored in the memory, wherein the computer executable components can comprise an obtaining component that obtains raw data defining a physical state of an environment around an object from a vantage of the object, and a generation component that, based on the raw data, generates a dimension of a portion or more of a virtual polygon representing a boundary about the object, wherein the boundary bounds free space about the object. A sensing sub-system can comprise both an ultrasonic sensor and a camera that can separately sense the environment about the object from the vantage of the object to thereby generate separate polygon measurement sets.
Type: Application
Filed: June 7, 2022
Publication date: December 7, 2023
Inventors: Xinkai Zhang, Sihao Ding, Harshavardhan Reddy Dasari
-
Patent number: 11816179
Abstract: A method, software tool, and system for generating realistic synthetic ride-requests associated with a mobility or transportation service, including: utilizing a generative adversarial network, learning the spatial-temporal distribution of a plurality of real ride-requests; and, utilizing the generative adversarial network and based on the learning step, generating one or more synthetic source and destination ride-request geolocations that retain a statistical distribution of the plurality of real ride-requests. The generative adversarial network is a Wasserstein generative adversarial network.
Type: Grant
Filed: March 25, 2020
Date of Patent: November 14, 2023
Assignee: Volvo Car Corporation
Inventors: Usha Nookala, Ebrahim Alareqi, Sihao Ding, Shanmukesh Vankayala
-
Publication number: 20230360279
Abstract: Techniques are described for generating bird's eye view (BEV) images and segmentation maps. According to one or more embodiments, a system is provided comprising a processor that executes computer executable components stored in at least one memory, comprising a machine learning component that generates a synthesized bird's eye view image from a stitched image based on removing artifacts from the stitched image present from a transformation process. The system further comprises a generator that produces the synthesized bird's eye view image and a segmentation map, and a discriminator that predicts whether the synthesized bird's eye view image and the segmentation map are real or generated.
Type: Application
Filed: May 9, 2022
Publication date: November 9, 2023
Inventors: Sihao Ding, Ekta U. Samani
-
Patent number: 11812153
Abstract: Systems and methods for fisheye camera calibration and BEV image generation in a simulation environment. This fisheye camera calibration enables the extrinsic and intrinsic parameters of the fisheye camera to be computed in the simulation environment, where data is readily available, collectible, and manipulatable. Given a surround vision system, with multiple fisheye cameras disposed around a vehicle, and these extrinsic and intrinsic parameters, undistorted and BEV images of the surroundings of the vehicle can be generated in the simulated environment, for simulated fisheye camera testing and validation, which may then be extrapolated to real-world fisheye camera testing and validation, as appropriate. Because the simulation tool can be used to create and readily manipulate the simulated fisheye camera, the vehicle, its surroundings, obstacles, targets, markers, and the like, the entire calibration and image generation process is streamlined and may be automated.
Type: Grant
Filed: March 7, 2022
Date of Patent: November 7, 2023
Assignee: Volvo Car Corporation
Inventors: Zhipeng Liu, Chihiro Suga, Sihao Ding
-
Patent number: 11787334
Abstract: The present invention relates to a driver assistance (DA) system and method for vehicle flank safety. This system and method also have utility in autonomous driving (AD) applications. The system and method optionally pair one or more cameras with one or more proximity sensors to provide free space awareness and contact avoidance related to the flank of a vehicle, providing a virtual distance grid overlay on one or more camera views available to an operator, predefined safety boundary intrusion detection under low-speed maneuvering conditions, door-open obstruction and danger detection, and a flank illumination system.
Type: Grant
Filed: September 4, 2019
Date of Patent: October 17, 2023
Assignee: Volvo Car Corporation
Inventors: Sihao Ding, Andreas Wallin, Megha Maheshwari
-
Patent number: 11792382
Abstract: A stereo pair camera system for depth estimation, including: a first camera disposed in a first position along a longitudinal axis, a lateral axis, and a vertical axis and having a first field of view; and a second camera disposed in a second position along the longitudinal axis, the lateral axis, and the vertical axis and having a second field of view; wherein the first camera is of a first type and the second camera is of a second type that is different from the first type; and wherein the first field of view overlaps with the second field of view. Optionally, the first position is spaced apart from the second position along one or more of the longitudinal axis and the vertical axis. The depth estimation is used by one or more of a driver assist system and an autonomous driving system of a vehicle.
Type: Grant
Filed: July 6, 2021
Date of Patent: October 17, 2023
Assignee: Volvo Car Corporation
Inventor: Sihao Ding
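The core relation behind stereo depth estimation is classic triangulation: depth is inversely proportional to the disparity between the two overlapping views. The sketch below shows that textbook relation only, under the assumption of rectified images and a known baseline; it is not the patented heterogeneous-camera system.

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulate depth from a rectified stereo pair.

    depth = focal_length (px) * baseline (m) / disparity (px).
    Larger disparity means the point is closer to the cameras.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With a 700 px focal length and a 12 cm baseline, an 8.4 px disparity
# corresponds to a point roughly 10 m away.
```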
-
Publication number: 20230294671
Abstract: Systems, devices, computer-implemented methods, and/or computer program products that can facilitate free space detection using fast sensor fusion are provided. In one example, a system can comprise a processor that executes computer executable components stored in memory. The computer executable components can comprise a threshold component, a pixel fusion component, and a frame control component. The threshold component can update the count of a plurality of pixel counters based on confidence scores of at least one camera and at least one sensor in a single pixel. The pixel fusion component can perform sensor fusion on the confidence scores based on a determination that the count of the plurality of pixel counters is less than a defined threshold. The frame control component can bypass the sensor fusion on the confidence scores based on a determination that the count of the plurality of pixel counters is greater than the defined threshold.
Type: Application
Filed: March 21, 2022
Publication date: September 21, 2023
Inventors: Ying Li, Sihao Ding
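The gating idea, fuse while a pixel's counter is below a threshold and bypass the fusion once it exceeds it, can be sketched per pixel. The averaging rule and counter update below are illustrative assumptions, not the claimed fusion method.

```python
def fuse_pixel(camera_score, sensor_score, counter, threshold):
    """Gate per-pixel sensor fusion on a stability counter.

    Below the threshold, fuse the camera and sensor confidence scores
    (here, by a simple average) and advance the counter; at or above it,
    skip the fusion and reuse the camera score directly.
    """
    if counter >= threshold:
        return camera_score, counter  # bypass: pixel considered settled
    fused = (camera_score + sensor_score) / 2.0
    return fused, counter + 1
```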
-
Publication number: 20230290162
Abstract: Systems, devices, computer-implemented methods, and/or computer program products that can facilitate pedestrian detection via a boundary cylinder model are addressed. In one example, a system can comprise a processor that executes computer executable components stored in memory. The computer-executable components can comprise a bounding cylinder model component that determines numerical values for height and radius parameters of a three-dimensional bounding cylinder model representing a pedestrian, and that generates the three-dimensional bounding cylinder model representing the pedestrian based on the numerical values.
Type: Application
Filed: March 9, 2022
Publication date: September 14, 2023
Inventors: Sihao Ding, Jui-che Tsai
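A bounding cylinder is fully described by the two parameters the abstract names, height and radius. The minimal sketch below shows such a model with a point-containment test; the class and method names are hypothetical, not from the application.

```python
import math
from dataclasses import dataclass

@dataclass
class BoundingCylinder:
    """Upright bounding cylinder for a pedestrian (height and radius in meters)."""
    height: float
    radius: float

    def contains(self, dx: float, dy: float, dz: float) -> bool:
        # Point given relative to the cylinder's base center: inside if its
        # horizontal distance is within the radius and its height within range.
        return math.hypot(dx, dy) <= self.radius and 0.0 <= dz <= self.height
```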
-
Publication number: 20230283906
Abstract: Systems and methods for fisheye camera calibration and BEV image generation in a simulation environment. This fisheye camera calibration enables the extrinsic and intrinsic parameters of the fisheye camera to be computed in the simulation environment, where data is readily available, collectible, and manipulatable. Given a surround vision system, with multiple fisheye cameras disposed around a vehicle, and these extrinsic and intrinsic parameters, undistorted and BEV images of the surroundings of the vehicle can be generated in the simulated environment, for simulated fisheye camera testing and validation, which may then be extrapolated to real-world fisheye camera testing and validation, as appropriate. Because the simulation tool can be used to create and readily manipulate the simulated fisheye camera, the vehicle, its surroundings, obstacles, targets, markers, and the like, the entire calibration and image generation process is streamlined and may be automated.
Type: Application
Filed: March 7, 2022
Publication date: September 7, 2023
Inventors: Zhipeng Liu, Chihiro Suga, Sihao Ding
-
Publication number: 20230281852
Abstract: Methods and systems for unsupervised depth estimation for fisheye cameras using spatial-temporal (and, optionally, modal) consistency. This unsupervised depth estimation works directly on raw, distorted stereo fisheye images, such as those obtained from the four fisheye cameras disposed around a vehicle in rigid alignment. Temporal consistency involves training a depth estimation model using a sequence of frames as input, while spatial consistency involves training the depth estimation model using overlapping images from synchronized stereo camera pairs. Images from different stereo camera pairs can also be used at different times. Modal consistency, when applied, dictates that different sensor types (e.g., camera, lidar, etc.) must also agree. The methods and systems of the present disclosure utilize a fisheye camera projection model that projects a disparity map into a point cloud map, which aids in the rectification of stereo pairs.
Type: Application
Filed: March 2, 2022
Publication date: September 7, 2023
Inventors: Sihao Ding, Jianhe Yuan
-
Patent number: 11719796
Abstract: The present disclosure provides a system and method for removing noise from an ultrasonic signal using a generative adversarial network (GAN). The present disclosure provides three input formats for the neural network (NN) in order to feed one-dimensional (1D) input data to the network. The system is generalizable to multiple noise sources, as it learns from different motion functions and noise types. The end-to-end system of the present disclosure is trained on raw ultrasonic signals with very little pre-processing or feature extraction.
Type: Grant
Filed: December 31, 2020
Date of Patent: August 8, 2023
Assignee: Volvo Car Corporation
Inventors: Ying Li, Usha Nookala, Sihao Ding
-
Publication number: 20230152446
Abstract: The present disclosure adds automotive millimeter-wave (mmWave) radars to the current perception system and makes them an effective augmentation for ultrasonic sensors (USSs). Relying on their superior range and Doppler resolution, mmWave radars generate denser point clouds than the intersection detections of USSs, making it possible to form a more robust and accurate radar occupancy grid. The radar occupancy grid can be formulated as a polygon with multiple nodes. Thus, the memory-consuming occupancy grid is simplified as a polygon consisting of a set of points, which can be used in downstream applications to relieve computational burden. Radars measure and estimate the Doppler velocity of detected targets such that one can assign a moving velocity to each node of the radar polygon. This makes it possible to predict the shape of a future radar polygon and feed the predicted radar polygon to downstream applications.
Type: Application
Filed: November 4, 2022
Publication date: May 18, 2023
Inventors: Sihao Ding, Xiangyu Gao, Harshavardhan Reddy Dasari
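Once each polygon node carries a Doppler-derived velocity, predicting a future polygon amounts to advancing every node along its velocity. The sketch below shows that constant-velocity propagation step; the node layout (x, y, vx, vy) is an assumption for illustration.

```python
def predict_polygon(nodes, dt):
    """Advance a radar polygon by dt seconds.

    Each node is (x, y, vx, vy): a position on the occupancy boundary plus
    the velocity assigned to that node. Positions move by velocity * dt;
    velocities are assumed constant over the prediction horizon.
    """
    return [(x + vx * dt, y + vy * dt, vx, vy) for x, y, vx, vy in nodes]
```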
-
Publication number: 20230141590
Abstract: A system and method for USS reading enhancement using a lidar point cloud. This provides noise reduction and enables the generation of a 2D environmental map. More specifically, the present disclosure provides a system and method for generating an enhanced environmental map using USSs, and the map is enhanced using a lidar point cloud. Using the lidar point cloud has advantages because the lidar point cloud is accurate and thus can provide accurate labels for training and the like.
Type: Application
Filed: October 25, 2022
Publication date: May 11, 2023
Inventors: Ying Li, Zhipeng Liu, Sihao Ding
-
Publication number: 20230131553
Abstract: A system and method of planning a path for an autonomous vehicle from the vehicle's initial configuration to the goal configuration which is collision-free, kinematically-feasible, and near-minimal in length. The vehicle is equipped with a plurality of perception sensors such as lidar, camera, etc., and is configured to operate in an autonomous mode. An onboard computing device is configured to process the sensor data and provide a dynamic occupancy grid map of the surrounding environment in real-time. Based on the occupancy grid map, the path planner can quickly calculate a collision-free and dynamically feasible path towards the goal configuration for the vehicle to follow.
Type: Application
Filed: October 27, 2021
Publication date: April 27, 2023
Inventors: Xinkai Zhang, Sihao Ding
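As a minimal illustration of planning over an occupancy grid, the sketch below runs a breadth-first search for a collision-free cell path. A real planner of the kind described must also respect vehicle kinematics and path length, which plain BFS ignores; this is a toy, not the patented planner.

```python
from collections import deque

def grid_path(grid, start, goal):
    """Shortest 4-connected path over a 2D occupancy grid (0 = free, 1 = occupied).

    Returns a list of (row, col) cells from start to goal, or None if the
    goal is unreachable. BFS guarantees a minimum-cell-count path.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}  # visited set doubling as back-pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk back-pointers to reconstruct the path
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```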
-
Publication number: 20230071940
Abstract: A LIDAR-based method of determining an absolute speed of an object at a relatively longer distance from an ego vehicle, including: estimating a self speed of the ego vehicle using a first frame t-1 and a second frame t obtained from a LIDAR sensor by estimating an intervening rotation θ about a z axis and translation in orthogonal x and y directions using a deep learning algorithm over a relatively closer distance range; dividing each of the first frame t-1 and the second frame t into multiple adjacent input ranges and estimating a relative speed of the object at the relatively longer distance by subsequently processing each frame using a network, with each input range processed using a corresponding convolutional neural network; and combining the self speed estimate with the relative speed estimate to obtain an estimate of the absolute speed.
Type: Application
Filed: November 17, 2022
Publication date: March 9, 2023
Inventors: Sihao Ding, Sohini Roy Chowdhury, Minming Zhao, Ying Li
-
Publication number: 20230008027
Abstract: A stereo pair camera system for depth estimation, including: a first camera disposed in a first position along a longitudinal axis, a lateral axis, and a vertical axis and having a first field of view; and a second camera disposed in a second position along the longitudinal axis, the lateral axis, and the vertical axis and having a second field of view; wherein the first camera is of a first type and the second camera is of a second type that is different from the first type; and wherein the first field of view overlaps with the second field of view. Optionally, the first position is spaced apart from the second position along one or more of the longitudinal axis and the vertical axis. The depth estimation is used by one or more of a driver assist system and an autonomous driving system of a vehicle.
Type: Application
Filed: July 6, 2021
Publication date: January 12, 2023
Inventor: Sihao Ding