Patents by Inventor Gang Pan
Gang Pan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Patent number: 12380601
Abstract: In various examples, sensor configuration for autonomous or semi-autonomous systems and applications is described. Systems and methods are disclosed that may use image feature correspondences between camera images along with an assumption that image features are locally planar to determine parameters for calibrating an image sensor with a LiDAR sensor and/or another image sensor. In some examples, an optimization problem is constructed that attempts to minimize a geometric loss function, where the geometric loss function encodes the notion that corresponding image features are views of a same point on a locally planar surface (e.g., a surfel or mesh) that is constructed from LiDAR data generated using a LiDAR sensor. In some examples, performing such processes to determine the calibration parameters may remove structure estimation from the optimization problem.
Type: Grant
Filed: February 8, 2023
Date of Patent: August 5, 2025
Assignee: NVIDIA Corporation
Inventors: Ayon Sen, Gang Pan, Cheng-Chieh Yang, Yue Wu
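To make the geometric-loss idea concrete, the sketch below back-projects a feature from one camera onto a locally planar surfel recovered from LiDAR and reprojects it into the second camera, so no per-point depth needs to be estimated. This is a minimal illustration under assumed conventions (4x4 LiDAR-to-camera extrinsics, shared intrinsics); the names and formulas are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def planar_geometric_loss(uv_a, uv_b, surfel_points, surfel_normals, K, T_a, T_b):
    """Hypothetical geometric loss for camera/LiDAR calibration.

    uv_a, uv_b     : (N, 2) corresponding pixel features in cameras A and B
    surfel_points  : (N, 3) a point on each locally planar surfel (LiDAR frame)
    surfel_normals : (N, 3) unit normals of those surfels (LiDAR frame)
    K              : (3, 3) camera intrinsics (assumed shared for simplicity)
    T_a, T_b       : (4, 4) candidate LiDAR-to-camera extrinsics to optimize
    """
    K_inv = np.linalg.inv(K)
    losses = []
    for (ua, va), (ub, vb), p0, n in zip(uv_a, uv_b, surfel_points, surfel_normals):
        # Back-project the feature in camera A as a ray expressed in the LiDAR frame.
        ray_cam = K_inv @ np.array([ua, va, 1.0])
        R_a, t_a = T_a[:3, :3], T_a[:3, 3]
        origin = -R_a.T @ t_a                 # camera A center in the LiDAR frame
        direction = R_a.T @ ray_cam
        # Intersect the ray with the locally planar surfel: no depth unknowns.
        t_hit = np.dot(n, p0 - origin) / np.dot(n, direction)
        X = origin + t_hit * direction        # 3D point on the surfel
        # Project that 3D point into camera B and compare with the matched feature.
        X_b = T_b[:3, :3] @ X + T_b[:3, 3]
        uvw = K @ X_b
        uv_proj = uvw[:2] / uvw[2]
        losses.append(np.sum((uv_proj - np.array([ub, vb])) ** 2))
    return float(np.mean(losses))
```

An optimizer would then search over T_a and T_b (or their relative transform) to minimize this loss.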
Publication number: 20250213889
Abstract: A particle accelerator includes an acceleration cavity chain and at least one energy switch side cavity. The energy switch side cavity is in communication with two adjacent main cavities in the acceleration cavity chain through respective coupling openings. The energy switch side cavity includes a pair of asymmetric side cavity noses arranged therein and an energy switch probe arranged at a top position thereof. The energy switch probe is configured to extend from the top position of the energy switch side cavity to a space between the pair of side cavity noses.
Type: Application
Filed: December 30, 2024
Publication date: July 3, 2025
Inventors: Zhaolong Zhang, Peng Wang, Gang Pan, Zhiguo Ma, Zexu Du
Patent number: 12344270
Abstract: In various examples, a hazard detection system plots hazard indicators from multiple detection sensors to grid cells of an occupancy grid corresponding to a driving environment. For example, as the ego-machine travels along a roadway, one or more sensors of the ego-machine may capture sensor data representing the driving environment. A system of the ego-machine may then analyze the sensor data to determine the existence and/or location of the one or more hazards within an occupancy grid, and thus within the environment. When a hazard is detected using a respective sensor, the system may plot an indicator of the hazard to one or more grid cells that correspond to the detected location of the hazard. Based, at least in part, on a fused or combined confidence of the hazard indicators for each grid cell, the system may predict whether the corresponding grid cell is occupied by a hazard.
Type: Grant
Filed: March 15, 2022
Date of Patent: July 1, 2025
Assignee: NVIDIA Corporation
Inventors: Sangmin Oh, Baris Evrim Demiroz, Gang Pan, Dong Zhang, Joachim Pehserl, Samuel Rupp Ogden, Tae Eun Choe
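The per-cell fusion step can be sketched as follows, assuming each hazard indicator is treated as independent evidence for its grid cell; the grid dimensions, cell size, threshold, and fusion rule are illustrative assumptions rather than the patented method.

```python
import numpy as np

def fuse_hazard_indicators(detections, grid_shape=(200, 200), cell_size=0.5,
                           occupancy_threshold=0.7):
    """Hypothetical fusion of per-sensor hazard indicators on an occupancy grid.

    detections: iterable of (x, y, confidence) tuples in ego coordinates (meters),
                possibly produced by different sensors at different times.
    Returns a boolean grid marking cells predicted to contain a hazard.
    """
    # Probability that each cell is free of hazards, updated multiplicatively.
    p_free = np.ones(grid_shape)
    for x, y, conf in detections:
        row = int(y / cell_size) + grid_shape[0] // 2
        col = int(x / cell_size) + grid_shape[1] // 2
        if 0 <= row < grid_shape[0] and 0 <= col < grid_shape[1]:
            # Treat each indicator as independent evidence for that cell.
            p_free[row, col] *= (1.0 - conf)
    fused_confidence = 1.0 - p_free
    return fused_confidence >= occupancy_threshold

# Example: a radar hit and a camera hit near the same location reinforce each other.
hazard_grid = fuse_hazard_indicators([(3.2, 10.1, 0.5), (3.4, 10.0, 0.6)])
```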
Publication number: 20250200805
Abstract: In various examples, sensor configuration for autonomous or semi-autonomous systems and applications is described. Systems and methods are disclosed that may use image feature correspondences between camera images along with an assumption that image features are locally planar to determine parameters for calibrating an image sensor with a LiDAR sensor and/or another image sensor. In some examples, an optimization problem is constructed that attempts to minimize a geometric loss function, where the geometric loss function encodes the notion that corresponding image features are views of a same point on a locally planar surface (e.g., a surfel or mesh) that is constructed from LiDAR data generated using a LiDAR sensor. In some examples, performing such processes to determine the calibration parameters may remove structure estimation from the optimization problem.
Type: Application
Filed: March 4, 2025
Publication date: June 19, 2025
Inventors: Ayon Sen, Gang Pan, Cheng-Chieh Yang, Yue Wu
Patent number: 12328249
Abstract: A method and a device for intra-chip routing of neural tasks in an operating system of a brain-inspired computer are provided. The method includes determining an area defined by target cores and identifying the target cores in the row furthest from an edge routing area; determining whether the target cores need to be configured with relay routing; searching for the nearest edge routing cores in the edge routing area for all target cores in the defined area; configuring the target cores according to a far-to-near principle; and searching for relay routing cores and the nearest edge routing cores using a shortest-path scheme constrained by a maximum step length for a single routing hop.
Type: Grant
Filed: February 14, 2023
Date of Patent: June 10, 2025
Assignees: ZHEJIANG LAB, ZHEJIANG UNIVERSITY
Inventors: Fengjuan Wang, Pan Lv, Min Kang, Shuiguang Deng, Ying Li, Gang Pan
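A greatly simplified sketch of the far-to-near configuration and relay insertion described above, on an assumed 2D core grid with a made-up maximum hop length; it is not the patented routing algorithm.

```python
from itertools import product

MAX_STEP = 4  # assumed maximum step length of a single routing hop

def nearest_edge_core(target, edge_cores):
    """Pick the edge routing core closest (Manhattan distance) to a target core."""
    return min(edge_cores, key=lambda e: abs(e[0] - target[0]) + abs(e[1] - target[1]))

def plan_routes(target_cores, edge_cores):
    """Hypothetical far-to-near routing plan for target cores on a 2D core grid.

    Cores farther from the edge routing area are configured first; a relay hop is
    inserted whenever the distance to the chosen edge core exceeds MAX_STEP.
    """
    def distance_to_edge(core):
        e = nearest_edge_core(core, edge_cores)
        return abs(e[0] - core[0]) + abs(e[1] - core[1])

    plan = []
    # Far-to-near principle: sort targets by descending distance to their edge core.
    for core in sorted(target_cores, key=distance_to_edge, reverse=True):
        edge = nearest_edge_core(core, edge_cores)
        hops = [core]
        if distance_to_edge(core) > MAX_STEP:
            # Insert a relay core roughly halfway along the shortest (Manhattan) path.
            hops.append(((core[0] + edge[0]) // 2, (core[1] + edge[1]) // 2))
        hops.append(edge)
        plan.append(hops)
    return plan

# Example: a 3x3 block of target cores routed to edge cores in column 0.
targets = list(product(range(4, 7), range(4, 7)))
edges = [(r, 0) for r in range(8)]
print(plan_routes(targets, edges))
```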
Publication number: 20250182435
Abstract: In various examples, detecting occluded objects within images or other sensor data representations for autonomous or semi-autonomous systems and applications is described herein. Systems and methods described herein may determine when objects are occluded in portions of images using various techniques. For example, an image may be processed to determine classifications associated with objects depicted in the image; the classifications, along with labels projected onto the image using a map, may then be used to determine whether one or more of the objects are occluded in the image. As another example, a map may be used to determine first distances to points within an environment, and a point cloud may be used to determine second distances to the same points. The distances may then be compared to determine whether one or more objects are occluded within the image.
Type: Application
Filed: November 30, 2023
Publication date: June 5, 2025
Inventors: Anurag Singh, Sakib Ar Rahman, Tao Fu, Yifang Xu, Gang Pan
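The second technique, comparing map distances against point-cloud distances, can be illustrated with a short sketch; the margin and the handling of missing returns are assumptions for illustration only.

```python
def occluded_by_depth(map_distances, cloud_distances, margin=0.5):
    """Hypothetical occlusion test for map-labeled points (e.g., sign or signal positions).

    map_distances   : expected distance to each map point from the ego-machine (meters)
    cloud_distances : measured distance along the same ray from a LiDAR point cloud,
                      or None when there is no return for that ray
    A point is flagged as occluded when the measured return is substantially closer
    than the map says the object should be, implying something blocks the view.
    """
    occluded = []
    for expected, measured in zip(map_distances, cloud_distances):
        occluded.append(measured is not None and measured + margin < expected)
    return occluded

# Example: the second map point sits behind a closer obstruction.
print(occluded_by_depth([25.0, 40.0], [24.8, 12.3]))  # [False, True]
```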
Publication number: 20250180736
Abstract: In various examples, a hazard detection system fuses outputs from multiple sensors over time to determine a probability that a stationary object or hazard exists at a location. The system may then use sensor data to calculate a detection bounding shape for detected objects and, using the bounding shape, may generate a set of particles, each including a confidence value that an object exists at a corresponding location. The system may then capture additional sensor data by one or more sensors of the ego-machine that are different from those used to capture the first sensor data. To improve the accuracy of the confidences of the particles, the system may determine a correspondence between the first sensor data and the additional sensor data (e.g., depth sensor data), which may be used to filter out a portion of the particles and improve the depth predictions corresponding to the object.
Type: Application
Filed: January 27, 2025
Publication date: June 5, 2025
Inventors: Gang Pan, Joachim Pehserl, Dong Zhang, Baris Evrim Demiroz, Samuel Rupp Ogden, Tae Eun Choe, Sangmin Oh
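A minimal sketch of the particle idea described above: spawn particles inside a detection bounding shape, then use later depth measurements to prune and reweight them. The particle count, support radius, and confidence update are illustrative assumptions, not the patented procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def spawn_particles(bbox, n=500, confidence=0.3):
    """Scatter hypothesis particles inside a detection bounding shape.

    bbox: (x_min, x_max, y_min, y_max) in ego coordinates (meters).
    Each particle carries a confidence that an object exists at its location.
    """
    xs = rng.uniform(bbox[0], bbox[1], n)
    ys = rng.uniform(bbox[2], bbox[3], n)
    return np.stack([xs, ys, np.full(n, confidence)], axis=1)

def filter_with_depth(particles, depth_points, radius=0.5, boost=0.2):
    """Keep particles supported by later depth measurements and raise their confidence."""
    kept = []
    for px, py, conf in particles:
        d = np.hypot(depth_points[:, 0] - px, depth_points[:, 1] - py)
        if (d < radius).any():
            kept.append([px, py, min(1.0, conf + boost)])
    return np.array(kept)

# Example: a camera detection spawns particles; sparse depth returns prune them.
particles = spawn_particles((9.0, 11.0, -1.0, 1.0))
depth_hits = np.array([[10.1, 0.2], [10.3, -0.1]])
print(len(filter_with_depth(particles, depth_hits)))
```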
Publication number: 20250176094
Abstract: The present disclosure relates to an accelerating apparatus for a radiation device. The accelerating apparatus may include a plurality of acceleration cavity units including a plurality of acceleration cavities. Each of the plurality of acceleration cavity units may be configured to accelerate a radiation beam passing through an acceleration cavity. The accelerating apparatus may further include a plurality of coupling cavity units, each of which may include a coupling cavity. Two adjacent acceleration cavities may be electromagnetically coupled via the coupling cavity. The plurality of acceleration cavity units may have a plurality of holes, each of which may be configured to be in fluidic communication with the corresponding coupling cavity. An edge region of each of at least a portion of the plurality of holes may include continuously varying curvatures.
Type: Application
Filed: January 18, 2025
Publication date: May 29, 2025
Applicant: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
Inventors: Gang PAN, Shoubo HE, Ruiying SONG
Patent number: 12315187
Abstract: A system and method for performing visual odometry are disclosed. In aspects, the system implements methods to generate an image pyramid based on a received input image. A refined pose prior representing the location and orientation of the autonomous vehicle can be generated based on one or more images of the image pyramid. One or more seed points can be selected from the one or more images of the image pyramid. One or more refined seed points, representing the one or more seed points with added depth values, can be generated. One or more scene points can be generated based on the one or more refined seed points. A point cloud can be generated based on the one or more scene points.
Type: Grant
Filed: September 24, 2021
Date of Patent: May 27, 2025
Assignee: VOLKSWAGEN GROUP OF AMERICA INVESTMENTS, LLC
Inventors: Weizhao Shao, Ankit Kumar Jain, Xxx Xinjilefu, Gang Pan, Brendan Christopher Byrne
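The pyramid construction and seed-point selection steps can be sketched roughly as below; the average-pooling pyramid and gradient-based selection are stand-ins chosen for illustration and are not taken from the patent.

```python
import numpy as np

def build_pyramid(image, levels=4):
    """Build a simple image pyramid by 2x2 average pooling (a stand-in for the
    pyramid construction the abstract refers to)."""
    pyramid = [image.astype(np.float32)]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = (prev.shape[0] // 2) * 2, (prev.shape[1] // 2) * 2
        pooled = prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(pooled)
    return pyramid

def select_seed_points(image, max_points=500, min_gradient=10.0):
    """Pick seed pixels with strong gradients, which are reasonable candidates for
    depth refinement in a direct visual-odometry pipeline."""
    gy, gx = np.gradient(image.astype(np.float32))
    magnitude = np.hypot(gx, gy)
    rows, cols = np.nonzero(magnitude > min_gradient)
    order = np.argsort(magnitude[rows, cols])[::-1][:max_points]
    return np.stack([rows[order], cols[order]], axis=1)

# Example with a random test image.
img = np.random.default_rng(1).random((240, 320)) * 255
pyr = build_pyramid(img)
seeds = select_seed_points(pyr[1])
```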
Publication number: 20250165760
Abstract: A neural network on-chip mapping method and apparatus based on a tabu search algorithm are provided. The method includes constructing a tabu search table and using a heuristic iterative search process to select local computing cores of a network-on-chip as candidates, establishing an integer programming model and solving it for an optimal solution, continuously reducing the objective cost function of the mapping solution through loop iteration, and finally obtaining an approximately optimal deployment scheme.
Type: Application
Filed: July 31, 2023
Publication date: May 22, 2025
Inventors: Yukun HE, De MA, Ying LI, Shichun SUN, Ming ZHANG, Xiaofei JIN, Guoquan ZHU, Fangchao YANG, Pan LV, Shuiguang DENG, Gang PAN
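A toy version of the tabu-search loop described above, using traffic-weighted Manhattan distance as the objective cost; the integer programming step is omitted, and every parameter and data structure here is an illustrative assumption.

```python
import random

def tabu_search_mapping(tasks, cores, traffic, iterations=200, tabu_size=20):
    """Hypothetical tabu search over a neural-task-to-core mapping.

    tasks   : list of task ids
    cores   : list of (row, col) core coordinates on the network-on-chip
    traffic : dict mapping (task_i, task_j) -> message volume between tasks
    The objective is the total traffic-weighted Manhattan distance between cores.
    """
    rng = random.Random(0)
    mapping = dict(zip(tasks, rng.sample(cores, len(tasks))))

    def cost(m):
        return sum(v * (abs(m[i][0] - m[j][0]) + abs(m[i][1] - m[j][1]))
                   for (i, j), v in traffic.items())

    best, best_cost, tabu = dict(mapping), cost(mapping), []
    for _ in range(iterations):
        i, j = rng.sample(tasks, 2)
        if (i, j) in tabu:
            continue                                       # skip recently tried swaps
        mapping[i], mapping[j] = mapping[j], mapping[i]    # candidate swap move
        if cost(mapping) < best_cost:
            best, best_cost = dict(mapping), cost(mapping)
        else:
            mapping[i], mapping[j] = mapping[j], mapping[i]  # revert a worse move
        tabu = (tabu + [(i, j)])[-tabu_size:]              # bounded tabu list
    return best, best_cost

# Example: map 4 tasks onto a 2x2 core array.
tasks = [0, 1, 2, 3]
cores = [(0, 0), (0, 1), (1, 0), (1, 1)]
traffic = {(0, 1): 5, (1, 2): 3, (2, 3): 4}
print(tabu_search_mapping(tasks, cores, traffic))
```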
Patent number: 12288363
Abstract: In various examples, sensor configuration for autonomous or semi-autonomous systems and applications is described. Systems and methods are disclosed that may use image feature correspondences between camera images along with an assumption that image features are locally planar to determine parameters for calibrating an image sensor with a LiDAR sensor and/or another image sensor. In some examples, an optimization problem is constructed that attempts to minimize a geometric loss function, where the geometric loss function encodes the notion that corresponding image features are views of a same point on a locally planar surface (e.g., a surfel or mesh) that is constructed from LiDAR data generated using a LiDAR sensor. In some examples, performing such processes to determine the calibration parameters may remove structure estimation from the optimization problem.
Type: Grant
Filed: February 8, 2023
Date of Patent: April 29, 2025
Assignee: NVIDIA Corporation
Inventors: Ayon Sen, Gang Pan, Cheng-Chieh Yang, Yue Wu
Publication number: 20250091607
Abstract: In various examples, a 3D surface structure such as the 3D surface structure of a road (3D road surface) may be observed and estimated to generate a 3D point cloud or other representation of the 3D surface structure. Since the estimated representation may be sparse, a deep neural network (DNN) may be used to predict values for a dense representation of the 3D surface structure from the sparse representation. For example, a sparse 3D point cloud may be projected to form a sparse projection image (e.g., a sparse 2D height map), which may be fed into the DNN to predict a dense projection image (e.g., a dense 2D height map). The predicted dense representation of the 3D surface structure may be provided to an autonomous vehicle drive stack to enable safe and comfortable planning and control of the autonomous vehicle.
Type: Application
Filed: December 6, 2024
Publication date: March 20, 2025
Inventors: Kang Wang, Yue Wu, Minwoo Park, Gang Pan
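The sparse-to-dense pipeline can be sketched as below: project a sparse point cloud into a 2D height map plus an observation mask, then feed both into a small convolutional network standing in for the DNN mentioned in the abstract. The network here is untrained and purely illustrative, and the sketch assumes PyTorch is available; none of the architecture choices come from the patent.

```python
import numpy as np
import torch
import torch.nn as nn

def project_to_height_map(points, grid=(128, 128), extent=50.0):
    """Project a sparse 3D point cloud (x, y, z) into a sparse 2D height map.

    Cells with no points stay at zero; a separate mask records observed cells.
    """
    height = np.zeros(grid, dtype=np.float32)
    mask = np.zeros(grid, dtype=np.float32)
    for x, y, z in points:
        r = int((y + extent) / (2 * extent) * (grid[0] - 1))
        c = int((x + extent) / (2 * extent) * (grid[1] - 1))
        if 0 <= r < grid[0] and 0 <= c < grid[1]:
            height[r, c] = z
            mask[r, c] = 1.0
    return height, mask

# A stand-in for the DNN described in the abstract: a small fully convolutional
# network mapping the sparse height map (plus its observation mask) to a dense map.
densifier = nn.Sequential(
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

points = np.random.default_rng(0).uniform(-40, 40, size=(2000, 3))
sparse, mask = project_to_height_map(points)
inp = torch.from_numpy(np.stack([sparse, mask]))[None]  # shape (1, 2, H, W)
dense = densifier(inp)                                   # untrained: illustrative only
```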
Patent number: 12235353
Abstract: In various examples, a hazard detection system fuses outputs from multiple sensors over time to determine a probability that a stationary object or hazard exists at a location. The system may then use sensor data to calculate a detection bounding shape for detected objects and, using the bounding shape, may generate a set of particles, each including a confidence value that an object exists at a corresponding location. The system may then capture additional sensor data by one or more sensors of the ego-machine that are different from those used to capture the first sensor data. To improve the accuracy of the confidences of the particles, the system may determine a correspondence between the first sensor data and the additional sensor data (e.g., depth sensor data), which may be used to filter out a portion of the particles and improve the depth predictions corresponding to the object.
Type: Grant
Filed: November 10, 2021
Date of Patent: February 25, 2025
Assignee: NVIDIA Corporation
Inventors: Gang Pan, Joachim Pehserl, Dong Zhang, Baris Evrim Demiroz, Samuel Rupp Ogden, Tae Eun Choe, Sangmin Oh
Patent number: 12217477
Abstract: In an object recognition method, an object recognition device obtains AER data of a to-be-recognized object, wherein the AER data includes a plurality of AER events of the to-be-recognized object, each AER event comprising a timestamp and address information. The object recognition device extracts a plurality of feature maps from the AER data. Each feature map includes partial spatial information and partial temporal information of the to-be-recognized object, where the partial spatial information and the partial temporal information are obtained based on the timestamp and the address information of each AER event. The object recognition device then recognizes the to-be-recognized object based on the plurality of feature maps of the AER data.
Type: Grant
Filed: February 25, 2022
Date of Patent: February 4, 2025
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Gang Pan, Qianhui Liu, Lei Jiang, Jie Cheng, Haibo Ruan, Dong Xing, Huajin Tang, De Ma
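A rough sketch of turning AER events (timestamp plus address) into a stack of spatio-temporal feature maps: each map covers one time bin, so the stack carries partial temporal information (which bin) and partial spatial information (where events fell). The binning scheme and polarity handling are assumptions for illustration, not the patented feature extraction.

```python
import numpy as np

def aer_feature_maps(events, sensor_shape=(128, 128), num_bins=4):
    """Accumulate AER events into a stack of spatio-temporal feature maps.

    events: iterable of (timestamp_us, x, y, polarity) tuples, one per AER event.
    """
    events = np.asarray(list(events), dtype=np.float64)
    t = events[:, 0]
    span = t.max() - t.min() + 1e-9
    bins = np.minimum(((t - t.min()) / span * num_bins).astype(int), num_bins - 1)
    maps = np.zeros((num_bins, *sensor_shape), dtype=np.float32)
    for (ts, x, y, p), b in zip(events, bins):
        # Signed accumulation: ON events add, OFF events subtract.
        maps[b, int(y), int(x)] += 1.0 if p > 0 else -1.0
    return maps

# Example: three synthetic events spread over time.
feats = aer_feature_maps([(0, 10, 20, 1), (5000, 10, 21, -1), (9000, 64, 64, 1)])
print(feats.shape)  # (4, 128, 128)
```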
Patent number: 12190541
Abstract: Disclosed herein are system, method, and computer program product embodiments for automated autonomous vehicle pose validation. An embodiment operates by generating a range image from a point cloud solution comprising a pose estimate for an autonomous vehicle. The embodiment queries the range image for predicted ranges and predicted class labels corresponding to lidar beams projected into the range image. The embodiment generates a vector of features from the range image. The embodiment compares a plurality of values to the vector of features using a binary classifier. The embodiment validates the autonomous vehicle pose based on the comparison of the plurality of values to the vector of features using the binary classifier.
Type: Grant
Filed: November 17, 2023
Date of Patent: January 7, 2025
Assignee: Volkswagen Group of America Investments, LLC
Inventors: Philippe Babin, Kunal Anil Desai, Tao V. Fu, Gang Pan, Xxx Xinjilefu
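A minimal sketch of the validation idea: summarize the agreement between ranges predicted from the pose estimate and the measured lidar ranges as a small feature vector, then pass it through a stand-in binary classifier (logistic regression here). The specific features, weights, and threshold are illustrative assumptions, not the patented classifier.

```python
import numpy as np

def range_residual_features(predicted_ranges, measured_ranges, predicted_labels):
    """Build a small feature vector summarizing agreement between a range image
    rendered from the pose estimate and the live lidar returns."""
    residual = np.abs(predicted_ranges - measured_ranges)
    return np.array([
        residual.mean(),                 # average range error
        np.percentile(residual, 90),     # tail of the error distribution
        (residual < 0.5).mean(),         # fraction of beams that agree closely
        (predicted_labels == 0).mean(),  # fraction of beams hitting an "unknown" class
    ])

def validate_pose(features, weights, bias):
    """A stand-in binary classifier (logistic regression) over the feature vector."""
    score = 1.0 / (1.0 + np.exp(-(features @ weights + bias)))
    return score > 0.5

rng = np.random.default_rng(0)
pred = rng.uniform(5, 50, 1000)
meas = pred + rng.normal(0, 0.2, 1000)   # a well-aligned pose yields small residuals
labels = rng.integers(1, 5, 1000)
feats = range_residual_features(pred, meas, labels)
print(validate_pose(feats, weights=np.array([-1.0, -0.5, 2.0, -1.0]), bias=0.0))
```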
Patent number: 12190448
Abstract: In various examples, to support training a deep neural network (DNN) to predict a dense representation of a 3D surface structure of interest, a training dataset is generated using a simulated environment. For example, a simulation may be run to simulate a virtual world or environment, render frames of virtual sensor data (e.g., images), and generate corresponding depth maps and segmentation masks (identifying a component of the simulated environment such as a road). To generate input training data, 3D structure estimation may be performed on a rendered frame to generate a representation of a 3D surface structure of the road. To generate corresponding ground truth training data, a corresponding depth map and segmentation mask may be used to generate a dense representation of the 3D surface structure.
Type: Grant
Filed: October 28, 2021
Date of Patent: January 7, 2025
Assignee: NVIDIA Corporation
Inventors: Kang Wang, Yue Wu, Minwoo Park, Gang Pan
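The ground-truth generation step can be sketched by back-projecting road-labeled pixels from a simulated depth map into 3D points; the class id, intrinsics, and synthetic renderer outputs below are illustrative assumptions rather than anything specified in the patent.

```python
import numpy as np

def road_surface_ground_truth(depth, seg_mask, K, road_class=7):
    """Back-project road pixels from a simulated depth map into 3D points,
    producing a dense ground-truth representation of the road surface.

    depth    : (H, W) depth in meters rendered by the simulator
    seg_mask : (H, W) integer class ids rendered by the simulator
    K        : (3, 3) intrinsics of the virtual camera
    """
    vs, us = np.nonzero(seg_mask == road_class)   # pixels labeled "road"
    zs = depth[vs, us]
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    xs = (us - cx) * zs / fx
    ys = (vs - cy) * zs / fy
    return np.stack([xs, ys, zs], axis=1)         # (N, 3) dense road points

# Example with synthetic renderer outputs.
depth = np.full((120, 160), 20.0)
seg = np.zeros((120, 160), dtype=int)
seg[80:, :] = 7                                   # bottom of the image is road
K = np.array([[100.0, 0, 80.0], [0, 100.0, 60.0], [0, 0, 1.0]])
print(road_surface_ground_truth(depth, seg, K).shape)
```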
Publication number: 20250001214
Abstract: An apparatus for photon flash treatment, a method for photon beam formation, and a computer readable storage medium are provided. The apparatus includes an electron beam emitter, a deflection member, a deflection mechanism, and a radiation delivery device. The electron beam emitter is configured to emit an accelerated electron beam. The deflection member can perform first deflection processing on the accelerated electron beam. The deflection mechanism is configured to receive the electron beam after the first deflection processing, and perform second deflection processing on the electron beam after the first deflection processing. The radiation delivery device is configured to form a photon beam by receiving the electron beam after the first deflection processing or after the second deflection processing.
Type: Application
Filed: July 1, 2024
Publication date: January 2, 2025
Inventors: Liuyuan ZHOU, Shoubo HE, Peng WANG, Gang PAN, Cheng NI
Publication number: 20250001212
Abstract: A radiotherapy apparatus includes a shielding cabin and a radiation delivery device. The shielding cabin is configured to shield radiation and has a treatment site disposed therein. The radiation delivery device is movably housed within the shielding cabin, and configured to generate treatment radiation and direct the treatment radiation to a target position at the treatment site.
Type: Application
Filed: July 1, 2024
Publication date: January 2, 2025
Inventors: Liuyuan Zhou, Shoubo He, Peng Wang, Gang Pan, Cheng Ni
Publication number: 20240428514
Abstract: In various examples, to support training a deep neural network (DNN) to predict a dense representation of a 3D surface structure of interest, a training dataset is generated using parametric mathematical modeling. A variety of synthetic 3D road surfaces may be generated by modeling a 3D road surface with varied parameters to simulate changes in road direction and lateral surface slope. In an example embodiment, a synthetic 3D road surface may be created by modeling a longitudinal 3D curve and expanding the longitudinal 3D curve into a 3D surface, and the resulting synthetic 3D surface may be sampled to form a synthetic ground truth projection image (e.g., a 2D height map). To generate corresponding input training data, a known pattern that represents which pixels may remain unobserved during 3D structure estimation may be generated and applied to a ground truth projection image to simulate a corresponding sparse projection image.
Type: Application
Filed: January 17, 2024
Publication date: December 26, 2024
Inventors: Kang WANG, Yue WU, Minwoo PARK, Gang PAN
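A small sketch of the parametric generation described above: a longitudinal curve (a sine profile here) expanded into a surface by a lateral slope term, then masked to imitate the sparsity pattern of real 3D structure estimation. The specific curve family, slope model, and keep fraction are assumptions for illustration only.

```python
import numpy as np

def synthetic_road_height_map(grid=(128, 128), length=100.0, width=12.0,
                              curve_amp=2.0, slope=0.02, seed=0):
    """Generate one synthetic ground-truth height map from a parametric road model.

    A longitudinal curve sets the vertical profile along the driving direction;
    a lateral slope term expands the curve into a surface.
    """
    rng = np.random.default_rng(seed)
    s = np.linspace(0.0, length, grid[0])             # longitudinal positions
    t = np.linspace(-width / 2, width / 2, grid[1])   # lateral positions
    phase = rng.uniform(0, 2 * np.pi)
    profile = curve_amp * np.sin(2 * np.pi * s / length + phase)  # longitudinal curve
    return profile[:, None] + slope * t[None, :]      # expand the curve to a surface

def apply_sparsity(dense, keep_fraction=0.05, seed=0):
    """Mask most cells to imitate which pixels remain unobserved during real
    3D structure estimation, yielding the sparse training input."""
    rng = np.random.default_rng(seed)
    mask = rng.random(dense.shape) < keep_fraction
    return np.where(mask, dense, 0.0), mask

gt = synthetic_road_height_map()
sparse_input, observed = apply_sparsity(gt)
```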
Patent number: 12172667
Abstract: In various examples, a 3D surface structure such as the 3D surface structure of a road (3D road surface) may be observed and estimated to generate a 3D point cloud or other representation of the 3D surface structure. Since the estimated representation may be sparse, a deep neural network (DNN) may be used to predict values for a dense representation of the 3D surface structure from the sparse representation. For example, a sparse 3D point cloud may be projected to form a sparse projection image (e.g., a sparse 2D height map), which may be fed into the DNN to predict a dense projection image (e.g., a dense 2D height map). The predicted dense representation of the 3D surface structure may be provided to an autonomous vehicle drive stack to enable safe and comfortable planning and control of the autonomous vehicle.
Type: Grant
Filed: October 28, 2021
Date of Patent: December 24, 2024
Assignee: NVIDIA Corporation
Inventors: Kang Wang, Yue Wu, Minwoo Park, Gang Pan