Having Image Processing Patents (Class 701/28)
-
Patent number: 12291202
Abstract: A lane keeping controller, a vehicle system including the same, and a method thereof are provided. The lane keeping controller includes a processor that monitors a risk level of a vehicle in real time, upon a lane keeping control, calculates a target lateral movement distance based on a line component, integrates an offset from a predetermined offset threshold to the vehicle when an offset between a target route and the vehicle departs from the predetermined offset threshold, and corrects the target lateral movement distance based on the integrated value to calculate a final target lateral movement distance, and a storage storing data and an algorithm run by the processor.
Type: Grant
Filed: December 8, 2021
Date of Patent: May 6, 2025
Assignees: HYUNDAI MOTOR COMPANY, KIA CORPORATION
Inventor: Un Tae Baek
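The controller described here corrects its lateral target using only the part of the offset that has crossed the threshold, accumulated over time. A minimal Python sketch of that idea follows; the class name, gains, threshold and time step are illustrative assumptions, not values taken from the patent.

```python
# Sketch of the offset-integration idea from the abstract of US 12,291,202.
# All names and numeric values are illustrative assumptions.

class OffsetIntegrator:
    def __init__(self, offset_threshold_m: float, integral_gain: float, dt: float):
        self.offset_threshold_m = offset_threshold_m
        self.integral_gain = integral_gain
        self.dt = dt
        self.integral = 0.0

    def update(self, target_lateral_move_m: float, lateral_offset_m: float) -> float:
        """Return a corrected target lateral movement distance.

        While the offset between the target route and the vehicle stays inside
        the threshold the integral is untouched; once it departs from the
        threshold, the excess is accumulated and used as a correction term.
        """
        excess = abs(lateral_offset_m) - self.offset_threshold_m
        if excess > 0.0:
            # Integrate only the part of the offset beyond the threshold, signed.
            self.integral += excess * self.dt * (1.0 if lateral_offset_m > 0 else -1.0)
        correction = self.integral_gain * self.integral
        return target_lateral_move_m + correction


if __name__ == "__main__":
    ctrl = OffsetIntegrator(offset_threshold_m=0.2, integral_gain=0.5, dt=0.05)
    for offset in (0.1, 0.25, 0.3, 0.35):   # vehicle drifting away from the target route
        final = ctrl.update(target_lateral_move_m=0.4, lateral_offset_m=offset)
        print(f"offset={offset:.2f} m -> final target lateral move={final:.3f} m")
```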
-
Patent number: 12293543
Abstract: Techniques for performing computer vision operations for a vehicle using image data of an environment of the vehicle are described herein. In some cases, a vehicle (e.g., an autonomous vehicle) can determine a predicted position (including a predicted depth) and/or a predicted trajectory for an object in the vehicle environment based on data generated by projecting a map object described by the map data for a vehicle environment to image data of the vehicle environment.
Type: Grant
Filed: December 19, 2022
Date of Patent: May 6, 2025
Assignee: Zoox, Inc.
Inventors: Xin Wang, Xinyu Xu
-
Patent number: 12281916
Abstract: A method is provided for automatically creating map geometry from data from various sources gathered within a geographical area, using data aggregation and conflation with statistical analysis. Methods may include: receiving observation data associated with a geographic area; rasterizing objects within the observation data onto corresponding channels in a raster image corresponding to the geographic area having a given resolution; determining a distribution of locations of the objects parameterized by values in different channels in the raster image from the observation data; extracting analytic geometries from the raster image; generating map geometry based on the extracted analytic geometries; and updating a map database with the generated map geometry.
Type: Grant
Filed: May 5, 2022
Date of Patent: April 22, 2025
Assignee: HERE GLOBAL B.V.
Inventor: Fei Tang
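The rasterization step, mapping each observed object into the channel of a grid cell, can be sketched in a few lines of NumPy. The channel layout, class list and resolution below are assumptions for illustration; the statistical conflation and geometry-extraction steps are not shown.

```python
# Sketch of "rasterize observations onto per-class channels" (US 12,281,916 abstract).
# Channel order, classes and resolution are assumed for illustration only.
import numpy as np

CLASSES = ("lane_marking", "sign", "barrier")   # assumed channel order

def rasterize(observations, bbox, resolution_m, n_classes=len(CLASSES)):
    """observations: iterable of (x_m, y_m, class_index) in map coordinates.

    Returns a (H, W, n_classes) raster counting observations per cell/channel.
    """
    x_min, y_min, x_max, y_max = bbox
    width = int(np.ceil((x_max - x_min) / resolution_m))
    height = int(np.ceil((y_max - y_min) / resolution_m))
    raster = np.zeros((height, width, n_classes), dtype=np.float32)
    for x, y, cls in observations:
        col = int((x - x_min) / resolution_m)
        row = int((y - y_min) / resolution_m)
        if 0 <= row < height and 0 <= col < width:
            raster[row, col, cls] += 1.0
    return raster

obs = [(1.0, 2.0, 0), (1.1, 2.1, 0), (5.0, 7.5, 1)]
grid = rasterize(obs, bbox=(0.0, 0.0, 10.0, 10.0), resolution_m=0.5)
print(grid.shape, grid.sum(axis=(0, 1)))   # (20, 20, 3), per-channel observation counts
```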
-
Patent number: 12280501
Abstract: The robot teaching method includes a pre-registration step, a robot operation step, and a teaching step. The pre-registration step is for specifying a relative self-position of a measuring device with respect to the surrounding environment by measuring the surrounding environment using the measuring device, and registering an environment teaching point that is a teaching point of the robot specified using the relative self-position. The robot operation step is for automatically operating the robot so that the relative self-position of the robot with respect to the surrounding environment becomes equal to the environment teaching point in a state where the measuring device is attached to the robot. The teaching step is for registering a detection value of a position and a posture of the robot measured by an internal sensor as teaching information in a state where the relative self-position of the robot with respect to the surrounding environment is equal to the environment teaching point.
Type: Grant
Filed: June 21, 2021
Date of Patent: April 22, 2025
Assignee: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Kazuki Kurashima, Hitoshi Hasunuma, Takeshi Yamamoto, Masaomi Iida, Tomomi Sano
-
Patent number: 12282982
Abstract: Some embodiments of a method disclosed herein may include: receiving a predicted driving route, sensor ranges of sensors on an autonomous vehicle (AV), and sensor field-of-view (FOV) data; determining whether minimum sensor visibility requirements are met along the predicted driving route; predicting blind areas along the predicted driving route, wherein the predicted blind areas are determined to have potentially diminished sensor visibility; and displaying an augmented reality (AR) visualization of the blind areas using an AR display device.
Type: Grant
Filed: December 1, 2023
Date of Patent: April 22, 2025
Assignee: DRNC HOLDINGS, INC.
Inventors: Jani Mantyjarvi, Jussi Ronkainen, Mikko Tarkiainen
-
Patent number: 12272154
Abstract: Methods, systems, and apparatuses related to autonomous vehicle object detection are described. An autonomous vehicle can capture an image corresponding to an unknown object disposed within a sight line of the autonomous vehicle. Processing resources available to a plurality of memory devices associated with the autonomous vehicle can be reallocated in response to capturing the image and an operation involving the image corresponding to the unknown object to classify the unknown object can be performed using the reallocated processing resources.
Type: Grant
Filed: October 5, 2023
Date of Patent: April 8, 2025
Assignee: Micron Technology, Inc.
Inventor: Reshmi Basu
-
Patent number: 12272141
Abstract: According to the present invention, disclosed are a device and a method for generating an object image, recognizing an object, and learning the environment of a mobile robot. The device and method run a deep learning algorithm that allows the robot to create a map and to load environment information acquired during autonomous movement while the autonomous mobile robot is being charged, and may be used for an application that finds a location by recognizing objects such as furniture and checking the locations of the recognized objects to mark them on the map.
Type: Grant
Filed: January 17, 2022
Date of Patent: April 8, 2025
Assignees: YUJIN ROBOT CO., LTD., Miele & Cie. KG
Inventors: Seong Ju Park, Gi Yeon Park, Kyu Beom Lee
-
Patent number: 12269475
Abstract: Examples of the present disclosure provide a computer-implemented system, comprising instructions for performing operations including: estimating locations in a region of a map where features associated with water, snow or ice are likely to be formed under certain adverse weather conditions; simulating one of the adverse weather conditions; generating the features associated with the simulated one of the adverse weather conditions at the locations; determining a response of a perception stack of an autonomous vehicle (AV) to the adverse weather conditions observed at the locations; determining a reaction of the AV to the features generated in the region, the reaction of the AV being a function of a configuration of the AV; in response to the reaction, updating the configuration; repeating the determining the reaction and the updating the configuration until a desired reaction is obtained; and exporting a final configuration corresponding to the desired reaction to a physical AV.
Type: Grant
Filed: November 4, 2022
Date of Patent: April 8, 2025
Assignee: GM Cruise Holdings LLC
Inventor: Noel Villegas
-
Patent number: 12266226
Abstract: A vehicle onboard apparatus mounted in a vehicle includes an in-vehicle processing unit that performs image recognition on an input image from a camera mounted in the vehicle. The in-vehicle processing unit performs the image recognition using an image recognition model corresponding to position information of the vehicle and additional reference information different from the position information.
Type: Grant
Filed: March 9, 2022
Date of Patent: April 1, 2025
Assignee: DENSO TEN Limited
Inventors: Aoi Ogishima, Yasutaka Okada, Ryusuke Seki, Yuki Katayama, Rei Hiromi
-
Patent number: 12263857
Abstract: A method for assisting safer driving, applied in a vehicle-mounted electronic device, obtains RGB images of a scene in front of a vehicle when the engine is running, processes the RGB images by a trained depth estimation model, obtains depth images, and converts the depth images to three-dimensional (3D) point cloud maps. A curvature of the driving path of the vehicle is calculated, and 3D regions of interest of the vehicle are extracted from the 3D point cloud maps according to a size of the vehicle and the curvature or deviation from straight ahead. The 3D regions of interest are analyzed for presence of obstacles. When no obstacles are present, the vehicle is controlled to continue driving; when presence of at least one obstacle is determined, an alarm is issued.
Type: Grant
Filed: January 12, 2023
Date of Patent: April 1, 2025
Assignee: HON HAI PRECISION INDUSTRY CO., LTD.
Inventors: Chih-Te Lu, Chieh Lee, Chin-Pin Kuo
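The depth-image-to-point-cloud conversion mentioned in this abstract is a standard pinhole back-projection. A minimal NumPy sketch follows; the intrinsics (fx, fy, cx, cy) and the toy depth map are assumptions, since the abstract does not specify a camera model.

```python
# Pinhole back-projection of a depth image into a 3D point cloud, as used in the
# abstract of US 12,263,857. Intrinsics and the toy depth map are assumed values.
import numpy as np

def depth_to_point_cloud(depth_m: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """depth_m: (H, W) array of metric depths. Returns an (N, 3) array of XYZ points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx          # back-project pixel columns
    y = (v - cy) * z / fy          # back-project pixel rows
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no valid depth

depth = np.full((4, 6), 10.0)          # toy 4x6 depth map, 10 m everywhere
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=3.0, cy=2.0)
print(cloud.shape)                     # (24, 3)
```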
-
Patent number: 12264902
Abstract: The present disclosure is directed to mobile correctional facility robots and systems and methods for coordinating mobile correctional facility robots to perform various tasks in a correctional facility. The mobile correctional facility robots can be used to perform many of the tasks traditionally assigned to correctional facility guards to help reduce the number of guards needed in any given correctional facility. When cooperation is employed among multiple mobile correctional facility robots to execute tasks, a central controller can be used to coordinate the efforts of the multiple robots to improve the performance of the overall system of robots as compared to the performance of the robots when working in uncoordinated effort to execute the tasks.
Type: Grant
Filed: March 19, 2024
Date of Patent: April 1, 2025
Assignee: Global Tel*Link Corporation
Inventor: Stephen Hodge
-
Patent number: 12263597
Abstract: A method of localizing a robot includes receiving odometry information plotting locations of the robot and sensor data of the environment about the robot. The method also includes obtaining a series of odometry information members, each including a respective odometry measurement at a respective time. The method also includes obtaining a series of sensor data members, each including a respective sensor measurement at the respective time. The method also includes, for each sensor data member of the series of sensor data members, (i) determining a localization of the robot at the respective time based on the respective sensor data, and (ii) determining an offset of the localization relative to the odometry measurement at the respective time. The method also includes determining whether a variance of the offsets determined for the localizations exceeds a threshold variance. When the variance among the offsets exceeds the threshold variance, a signal is generated.
Type: Grant
Filed: May 12, 2023
Date of Patent: April 1, 2025
Assignee: Boston Dynamics, Inc.
Inventor: Matthew Jacob Klingensmith
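The core check, comparing localization results against odometry at matching times and raising a signal when the offsets vary too much, can be sketched directly. The 2D position representation, the use of offset magnitudes, and the threshold below are illustrative assumptions.

```python
# Sketch of the offset-variance check from the abstract of US 12,263,597.
# Pose representation (2D positions) and the threshold are assumed for illustration.
import numpy as np

def offset_variance_signal(odometry_xy: np.ndarray,
                           localization_xy: np.ndarray,
                           threshold_variance: float) -> bool:
    """Both inputs are (N, 2) positions sampled at the same respective times."""
    offsets = localization_xy - odometry_xy              # per-time offset vectors
    variance = np.var(np.linalg.norm(offsets, axis=1))   # variance of offset magnitudes
    return bool(variance > threshold_variance)           # True -> generate a signal

odo = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], dtype=float)
loc = np.array([[0.1, 0], [1.0, 0.4], [2.3, -0.5], [3.0, 0.9]], dtype=float)
print(offset_variance_signal(odo, loc, threshold_variance=0.05))   # True -> drift is erratic
```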
-
Patent number: 12260719
Abstract: Provided is a monitoring system configured such that, even when there is a deviation of a shooting region of a camera, if the deviation is within an acceptable level, an accurate vehicle detection using a recognition model can be performed without readjusting parameters or re-training the model. Based on an image of a no-entry zone and a platform zone, a processor performs a first operation for detecting a person in the no-entry zone and a second operation for detecting a vehicle in the no-entry zone; and based on detection results, determines whether an alert needs to be issued. When there is a deviation of the shooting region of a camera, the processor performs a conversion operation for converting an image of a current shooting region to an image that would be captured by the camera with an original shooting region and uses the converted image for the two detection operations.
Type: Grant
Filed: March 25, 2022
Date of Patent: March 25, 2025
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Shin Yamada, Kazuhiko Iwai, Takeshi Watanabe
-
Patent number: 12253368
Abstract: The present invention is a method of characterizing a route (T) travelled by a user. The method comprises a step of locating and timestamping measurements during the ride (MES), then determining a modified discrete Fréchet distance (DFM) between the measured route and previous routes. Finally, the modified discrete Fréchet distance (DFM) is used to characterize the route to be characterized (T).
Type: Grant
Filed: July 6, 2021
Date of Patent: March 18, 2025
Assignee: IFP ENERGIES NOUVELLES
Inventors: Francisco Jose Gonzalez De Cossio Echeverria, Guillaume Sabiron, Laurent Thibault
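The abstract's central quantity is a modified discrete Fréchet distance (DFM). The modification itself is not described in the abstract, so the sketch below implements only the standard discrete Fréchet (coupling) distance between two point sequences, as a reference for the quantity being modified.

```python
# Standard discrete Fréchet distance between two routes (sequences of (x, y) points).
# The patent's *modification* (DFM) is not specified in the abstract and is not shown.
import math
from functools import lru_cache

def discrete_frechet(route_a, route_b):
    def d(i, j):
        ax, ay = route_a[i]
        bx, by = route_b[j]
        return math.hypot(ax - bx, ay - by)

    @lru_cache(maxsize=None)
    def c(i, j):
        # Classic coupling-distance recursion (Eiter & Mannila).
        if i == 0 and j == 0:
            return d(0, 0)
        if i == 0:
            return max(c(0, j - 1), d(0, j))
        if j == 0:
            return max(c(i - 1, 0), d(i, 0))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(i, j))

    return c(len(route_a) - 1, len(route_b) - 1)

measured = [(0, 0), (1, 0.2), (2, 0.1), (3, 0)]
previous = [(0, 0), (1, 0), (2, 0), (3, 0)]
print(round(discrete_frechet(measured, previous), 3))   # small value -> similar routes
```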
-
Patent number: 12246782
Abstract: A container transportation system using autonomous driving includes: a container to be transported; an autonomous vehicle docked with or undocked from the container and autonomously travelling to transport the container to a transport destination; and a management server controlling travelling of the autonomous vehicle, wherein the autonomous vehicle comprises a container coupling part coupled to the container, and the container comprises: a container body; a plurality of height adjustment pillars coupled to respective corners of a lower surface of the container body and adjustable in length to lift or lower the container body from or to the ground; and a vehicle coupling part formed on the lower surface of the container body and coupled to the container coupling part.
Type: Grant
Filed: January 31, 2024
Date of Patent: March 11, 2025
Inventor: Henry Choi
-
Patent number: 12248058
Abstract: The techniques and systems herein enable track association based on azimuth extension and compactness errors. Specifically, first and second tracks comprising respective locations and footprints of respective objects are received. An azimuth distance is determined based on an azimuth extension error that corresponds to azimuth spread between the first and second tracks with respect to a host vehicle. A position distance is also determined based on a compactness error that corresponds to footprint difference between the first and second tracks. Based on the azimuth and position distances, it is established whether the first object and the second object are a common object. By doing so, the system can better determine if the tracks are of the common object when the tracks are extended (e.g., not point targets) and/or partially observed (e.g., the track is not of an entire object).
Type: Grant
Filed: September 2, 2022
Date of Patent: March 11, 2025
Assignee: Aptiv Technologies AG
Inventors: Syed Asif Imran, Zixin Liu
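One simple reading of the two distances is sketched below: the azimuth distance compares the angular extents of the two footprints as seen from the host vehicle, and the position distance compares footprint extents. The exact error definitions, weights and gates are not given in the abstract, so everything here is an illustrative assumption.

```python
# Sketch of azimuth-extension and compactness gating for track association
# (US 12,248,058 abstract). Distance definitions and gates are assumptions.
import math

def azimuth_span(footprint_xy, host_xy=(0.0, 0.0)):
    """(min, max) azimuth of a track footprint as seen from the host vehicle."""
    az = [math.atan2(y - host_xy[1], x - host_xy[0]) for x, y in footprint_xy]
    return min(az), max(az)

def footprint_extent(footprint_xy):
    """Rough footprint size: width + length of the axis-aligned bounding box."""
    xs = [p[0] for p in footprint_xy]
    ys = [p[1] for p in footprint_xy]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def is_common_object(track_a, track_b, azimuth_gate_rad=0.1, position_gate_m=1.0):
    a_lo, a_hi = azimuth_span(track_a)
    b_lo, b_hi = azimuth_span(track_b)
    azimuth_distance = abs(a_lo - b_lo) + abs(a_hi - b_hi)          # angular-extent mismatch
    position_distance = abs(footprint_extent(track_a) - footprint_extent(track_b))  # footprint mismatch
    return azimuth_distance < azimuth_gate_rad and position_distance < position_gate_m

truck_radar = [(20.0, 2.0), (24.0, 2.0), (24.0, 4.5), (20.0, 4.5)]    # footprint corners (m)
truck_camera = [(20.3, 2.1), (24.1, 2.1), (24.1, 4.4), (20.3, 4.4)]
print(is_common_object(truck_radar, truck_camera))   # True -> likely the same object
```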
-
Patent number: 12249161
Abstract: A method for controlling an ego vehicle in an environment includes associating, by a velocity model, one or more objects within the environment with a respective velocity instance label. The method also includes selectively focusing, by a recurrent network of the taillight recognition system, on a selected region of the sequence of images according to a spatial attention model for a vehicle taillight recognition task. The method further includes concatenating the selected region with the respective velocity instance label of each object of the one or more objects within the environment to generate a concatenated region label. The method still further includes planning a trajectory of the ego vehicle based on inferring, at a classifier of the taillight recognition system, an intent of each object of the one or more objects according to a respective taillight state of each object, as determined based on the concatenated region label.
Type: Grant
Filed: April 28, 2022
Date of Patent: March 11, 2025
Assignees: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Kuan-Hui Lee, Charles Christopher Ochoa, Arjun Bhargava, Chao Fang
-
Patent number: 12246753
Abstract: Various embodiments of systems and methods for cooperative driving of connected autonomous vehicles (CAVs) using responsibility-sensitive safety (RSS) rules are disclosed herein. The CAV system integrates proposed RSS rules with the CAV's motion planning algorithm to enable cooperative driving of CAVs. The CAV system further integrates a deadlock detection and resolution system for resolving traffic deadlocks between CAVs. The CAV system reduces redundant calculation of dependency graphs.
Type: Grant
Filed: March 16, 2022
Date of Patent: March 11, 2025
Assignees: Arizona Board of Regents on Behalf of Arizona State University, National Taiwan University
Inventors: Mohammad Khayatian, Mohammadreza Mehrabian, Harshith Allamsetti, Kai-Wei Liu, Po-Yu Huang, Chung-Wei Lin, Aviral Shrivastava
-
Patent number: 12242274
Abstract: A system and method for real world autonomous vehicle trajectory simulation may include: receiving training data from a data collection system; obtaining ground truth data corresponding to the training data; performing a training phase to train a plurality of trajectory prediction models; and performing a simulation or operational phase to generate a vicinal scenario for each simulated vehicle in an iteration of a simulation. Vicinal scenarios may correspond to different locations, traffic patterns, or environmental conditions being simulated. Vehicle intention data may correspond to a data representation of various types of simulated vehicle or driver intentions.
Type: Grant
Filed: December 12, 2023
Date of Patent: March 4, 2025
Assignee: TUSIMPLE, INC.
Inventors: Xing Sun, Wutu Lin, Liu Liu, Kai-Chieh Ma, Zijie Xuan, Yufei Zhao
-
Patent number: 12240470
Abstract: Provided is a method for driving behavior modeling based on spatio-temporal information fusion, relating to the field of driving behavior simulations. The method includes: constructing a driving behavior model, where the driving behavior model includes a spatial information encoding network, a temporal information encoding network, a feature fusion network, and a feature decoding network, with the feature fusion network being connected to both the spatial information encoding network and the temporal information encoding network, and the feature decoding network being connected to the feature fusion network; determining a future trajectory sequence of a target main vehicle at future time points based on the trained driving behavior model according to spatial information and temporal information of the target main vehicle, where the target main vehicle is controlled to travel according to the future trajectory sequence at the future time points.
Type: Grant
Filed: October 2, 2024
Date of Patent: March 4, 2025
Assignee: Jilin University
Inventors: Hong Chen, Huihua Gao, Ting Qu, Yunfeng Hu, Xun Gong
-
Patent number: 12242286
Abstract: Provided are a method, system and device for global path planning for an unmanned vehicle in an off-road environment. The method includes: obtaining satellite elevation data and a satellite remote sensing image of a current off-road environment; constructing a digital elevation model (DEM); determining slope and land surface relief of each grid in the current off-road environment; performing gray processing on the satellite remote sensing image to obtain grayscale values of the grids; determining traversal costs of the grids corresponding to different ground types; constructing a global grid map based on the slope and the land surface relief of each grid, as well as the traversal costs corresponding to the different ground types; determining a rugged terrain potential field and path costs; and searching for paths using a Bresenham's line algorithm and Theta* algorithm based on the rugged terrain potential field and the path costs, to generate a global path.
Type: Grant
Filed: August 19, 2024
Date of Patent: March 4, 2025
Assignee: Beijing Institute of Technology
Inventors: Shida Nie, Yujia Xie, Zhihao Liao, Hui Liu, Lijin Han, Congshuai Guo
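Bresenham's line algorithm, used here alongside Theta* for line-of-sight and path-cost checks, enumerates the grid cells crossed by a straight segment. The sketch below sums the traversal cost of those cells over an assumed cost grid; the grid values are illustrative, standing in for the slope, land-surface-relief and ground-type costs the method combines.

```python
# Bresenham grid traversal with cost accumulation, the line-of-sight building block
# used by Theta* in US 12,242,286. Cost grid values are assumed for illustration.
def bresenham(r0, c0, r1, c1):
    """Yield the grid cells on the straight line from (r0, c0) to (r1, c1)."""
    dr, dc = abs(r1 - r0), -abs(c1 - c0)
    sr = 1 if r0 < r1 else -1
    sc = 1 if c0 < c1 else -1
    err = dr + dc
    while True:
        yield r0, c0
        if r0 == r1 and c0 == c1:
            return
        e2 = 2 * err
        if e2 >= dc:          # step along rows
            err += dc
            r0 += sr
        if e2 <= dr:          # step along columns
            err += dr
            c0 += sc

def segment_cost(cost_grid, start, end):
    """Accumulate traversal cost along the Bresenham line between two cells."""
    return sum(cost_grid[r][c] for r, c in bresenham(*start, *end))

# Toy cost grid: 1 = easy ground, 3/5 = steeper or rougher cells.
cost_grid = [
    [1, 1, 1, 5],
    [1, 3, 1, 5],
    [1, 1, 1, 1],
]
print(segment_cost(cost_grid, (0, 0), (2, 3)))   # cells (0,0),(1,1),(1,2),(2,3) -> 6
```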
-
Patent number: 12235656
Abstract: A method for autonomously operating a following vehicle in a vehicle train together with a vehicle driving ahead of the following vehicle with respect to a direction of travel includes acquiring first route information with a front-mounted sensor. The method also includes operating the following vehicle on the basis of the first route information and acquiring second route information with a rear-mounted sensor. The method further includes transmitting the second route information to the following vehicle and, when a substitute criterion is present, operating the following vehicle on the basis of the second route information.
Type: Grant
Filed: November 15, 2020
Date of Patent: February 25, 2025
Assignee: Conti Temic microelectronic GmbH
Inventor: Günter Anton Fendt
-
Patent number: 12235093
Abstract: Disclosed is a method for designing a packaging plant, wherein a measuring vehicle is moved within an area in which the production plant is to be erected or modified, and a position of the measuring vehicle relative to the area is detected. The measuring vehicle is positioned at a plurality of positions within this area and at each position records respective images and/or data of the area with a first image capturing device, wherein at least one geometric property of the area and/or the packaging plant is detected.
Type: Grant
Filed: August 30, 2018
Date of Patent: February 25, 2025
Assignee: KRONES AG
Inventors: Georg Gertlowski, Tobias Schweiger
-
Patent number: 12223410
Abstract: To select a lane in a multi-lane road segment for a vehicle travelling on the road segment, a system identifies, in multiple lanes and in a region ahead of the vehicle, another vehicle defining a target; the system applies an optical flow technique to track the target during a period of time, to generate an estimate of how fast traffic moves; and the system applies the estimate to a machine learning (ML) model to generate a recommendation of which one of the plurality of lanes the vehicle is to choose.
Type: Grant
Filed: February 27, 2024
Date of Patent: February 11, 2025
Assignee: GOOGLE LLC
Inventors: Thomas Deselaers, Victor Carbune
-
Patent number: 12217437
Abstract: An obstacle detecting device includes: an image converting portion for converting, into a circular cylindrical image, an image captured by a camera installed on a vehicle; a detection subject candidate image detecting portion for detecting a detection subject candidate image through pattern matching; an optical flow calculating portion for calculating an optical flow; an outlier removing portion for removing an optical flow that is not a detection subject; a TTC calculating portion for calculating a TTC (TTCX, TTCY); a tracking portion for generating a region of the detection subject on the circular cylindrical image by tracking the detection subject candidate; and a collision evaluating portion for evaluating whether or not there is the risk of a collision, wherein the optical flow calculating portion calculates the optical flow based on the detection subject candidate image and the region.
Type: Grant
Filed: August 17, 2020
Date of Patent: February 4, 2025
Assignee: FAURECIA CLARION ELECTRONICS CO., LTD.
Inventors: Akira Ohashi, Daisuke Fukuda
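Time-to-collision (TTC) can be estimated from how quickly a tracked object grows in the image between frames. The sketch below shows that scale-expansion relation as an illustration only; the patent derives TTCX and TTCY from optical flow on the cylindrical image, which is not reproduced here, and the frame interval and pixel heights are assumed values.

```python
# Scale-expansion TTC sketch, illustrating the kind of TTC computed in US 12,217,437.
# Frame interval and object sizes are assumed; the patent's cylindrical-image optical
# flow pipeline is not reproduced.
def ttc_from_scale(height_prev_px: float, height_curr_px: float, dt_s: float) -> float:
    """TTC ~ dt / (s - 1), where s is the frame-to-frame image-size ratio of the object."""
    scale = height_curr_px / height_prev_px
    if scale <= 1.0:
        return float("inf")       # object not growing in the image -> not closing in
    return dt_s / (scale - 1.0)

# Object grows from 100 px to 104 px over one 33 ms frame -> roughly 0.8 s to contact.
print(ttc_from_scale(100.0, 104.0, dt_s=0.033))
```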
-
Patent number: 12210947
Abstract: The technology relates to using on-board sensor data, off-board information and a deep learning model to classify road wetness and/or to perform a regression analysis on road wetness based on a set of input information. Such information includes on-board and/or off-board signals obtained from one or more sources including on-board perception sensors, other on-board modules, external weather measurement, external weather services, etc. The ground truth includes measurements of water film thickness and/or ice coverage on road surfaces. The ground truth, on-board and off-board signals are used to build the model. The constructed model can be deployed in autonomous vehicles for classifying/regressing the road wetness with on-board and/or off-board signals as the input, without referring to the ground truth.
Type: Grant
Filed: August 28, 2023
Date of Patent: January 28, 2025
Assignee: Waymo LLC
Inventors: Xin Zhou, Roshni Cooper, Michael James
-
Patent number: 12211288
Abstract: Provided are methods for managing traffic light detections, which can include: deriving a first state of a traffic light at an intersection a vehicle is approaching, according to first detection data acquired by a first traffic light detection (TLD) system; deriving a second state of the traffic light at the intersection, according to second detection data acquired by a second TLD system that is independent from the first TLD system; determining traffic light information at the intersection based on at least one of (i) the first state or (ii) a result of checking whether the first state is the same as the second state; and causing the vehicle to operate in accordance with the determined traffic light information at the intersection. Systems and computer program products are also provided.
Type: Grant
Filed: July 28, 2022
Date of Patent: January 28, 2025
Assignee: Motional AD LLC
Inventor: Chong Meng Wong
-
Patent number: 12208794
Abstract: A vehicle control apparatus determines whether or not a confirmed abnormal state, in which it can be confirmed that a driver of a vehicle has fallen into an abnormal state where he/she is unable to drive the vehicle, is occurring. When it is determined that the confirmed abnormal state is occurring, the apparatus decelerates the vehicle at a normal deceleration DGnor. However, when it is predicted that a control limit of a lane keeping control will come before the vehicle stops in the case where the vehicle is decelerated at the normal deceleration DGnor from when the confirmed abnormal state is determined to be occurring, the apparatus decelerates the vehicle at a maximum deceleration DGmax greater than the normal deceleration.
Type: Grant
Filed: February 24, 2023
Date of Patent: January 28, 2025
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventor: Yusuke Tanaka
-
Patent number: 12206949
Abstract: The system described herein implements synchronized playback of data related to an event. The system receives information that defines a type of event, as well as a time at which the event occurs. Moreover, the information defines an environment in which the event occurs. The system maps the type of event to data sources associated with the environment. Furthermore, the system maps the time at which the event occurs to a predefined timeframe that precedes and/or overlaps with the time at which the event occurs. The system retrieves respective datasets from the data sources and generates respective visualizations for the datasets. The system displays the visualizations in a layout and provides user controls that enable a user to implement synchronized playback of the event from the perspective of the data sources. For example, the synchronized playback can cycle through a sequence of data display states for the datasets.
Type: Grant
Filed: April 3, 2023
Date of Patent: January 21, 2025
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Anja Liliane Ziegler, Kristopher Colvin Borchers, Umesh Kumar, Samuel Ketsela Zeleke, Manuel Rodriguez Vazquez
-
Patent number: 12205340
Abstract: A neural network configured for classifying whether an image from an optical sensor characterizes an obstruction of the optical sensor or not. The classification is characterized by an output of the neural network for an input of the neural network, wherein the input is based on the image. The neural network comprises a first convolutional layer that characterizes a 1D-convolution along a vertical axis of a convolution output of a preceding convolutional layer and a second convolutional layer that characterizes a 1D-convolution along a horizontal axis of the convolution output. The output of the neural network is based on a first convolution output of the first convolutional layer and based on a second convolution output of the second convolutional layer.
Type: Grant
Filed: July 7, 2022
Date of Patent: January 21, 2025
Assignee: ROBERT BOSCH GMBH
Inventors: Marcus Schmitt, Magnus Ernst Daum, Sebastian Johann Hermann Konopka, Thomas Hellmuth, Ulrich Stopper
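The two 1D-convolution branches described here, one along the vertical axis of a preceding layer's output and one along the horizontal axis, can be expressed as Conv2d layers with (k, 1) and (1, k) kernels. The PyTorch sketch below is an assumed arrangement: channel counts, kernel size, pooling and the way the two branches feed the output are not specified in the abstract.

```python
# Assumed arrangement of the vertical/horizontal 1D-convolution branches from the
# abstract of US 12,205,340, implemented with PyTorch Conv2d kernels (k,1) and (1,k).
import torch
import torch.nn as nn

class AxisConvHead(nn.Module):
    def __init__(self, in_ch: int = 16, k: int = 5):
        super().__init__()
        self.vertical = nn.Conv2d(in_ch, in_ch, kernel_size=(k, 1), padding=(k // 2, 0))
        self.horizontal = nn.Conv2d(in_ch, in_ch, kernel_size=(1, k), padding=(0, k // 2))
        self.classifier = nn.Linear(2 * in_ch, 1)        # "sensor obstructed" score

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        v = self.vertical(features).mean(dim=(2, 3))     # 1D conv along the vertical axis
        h = self.horizontal(features).mean(dim=(2, 3))   # 1D conv along the horizontal axis
        return torch.sigmoid(self.classifier(torch.cat([v, h], dim=1)))

features = torch.randn(1, 16, 32, 64)    # output of a preceding convolutional layer
print(AxisConvHead()(features).shape)    # torch.Size([1, 1])
```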
-
Patent number: 12205381
Abstract: A vehicular control system includes a camera, a radar sensor and a processor operable to process image data captured by the camera and radar data captured by the radar sensor. The system determines that it is not safe to proceed with an intended turn at an intersection responsive to determination that the intended turn at the intersection will not be completed before an estimated time to arrival of an approaching vehicle at the intersection elapses and/or determination that a pedestrian is present at the intersection where the equipped vehicle will turn. The system determines that it is safe to proceed with the intended turn at the intersection responsive at least in part to determination that the intended turn at the intersection will be completed before the estimated time to arrival elapses and determination that no pedestrian is present at the intersection where the equipped vehicle will turn.
Type: Grant
Filed: April 29, 2024
Date of Patent: January 21, 2025
Assignee: MAGNA ELECTRONICS INC.
Inventors: Rohan J. Divekar, Paul A. VanOphem
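The turn decision in this abstract reduces to a pair of conditions. A minimal sketch follows, assuming a fixed safety margin and externally supplied timing estimates, neither of which appears in the abstract.

```python
# Sketch of the turn-safety decision from the abstract of US 12,205,381.
# The margin and timing inputs are assumed for illustration.
def safe_to_turn(turn_completion_time_s: float,
                 approaching_vehicle_eta_s: float,
                 pedestrian_present: bool,
                 margin_s: float = 1.0) -> bool:
    if pedestrian_present:
        return False                       # never turn across a pedestrian
    # Turn must finish (with margin) before the approaching vehicle arrives.
    return turn_completion_time_s + margin_s < approaching_vehicle_eta_s

print(safe_to_turn(4.0, 7.5, pedestrian_present=False))   # True
print(safe_to_turn(4.0, 4.5, pedestrian_present=False))   # False
```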
-
Patent number: 12194988
Abstract: Systems and methods of controlling an active safety feature of a vehicle are provided. The systems and methods receive radar data from a radar device of the vehicle and image data from a camera of the vehicle. Object detection and tracking processes are performed on the radar data and the image data to identify and track objects in an environment of the vehicle. Conditions are assessed with respect to identified objects to ascertain whether a radar track is erroneously reported as a separate object to a camera track. When the conditions are assessed to be true, an object corresponding to the camera track is used as an input for controlling an active safety feature of the vehicle and an object corresponding to the radar track is discounted for controlling the active safety feature of the vehicle.
Type: Grant
Filed: April 1, 2022
Date of Patent: January 14, 2025
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Amanpal S Grewal, Michael Trotter, Ibrahim Riba
-
Patent number: 12189039
Abstract: A system is capable of three-dimensional scanning in an environment containing a protrusion that extends substantially parallel to a floor. The system includes an autonomous vehicle, a planar distance sensor, and a computing device. The autonomous vehicle is configured to be located on the floor and to move across the floor. The planar distance sensor is fixedly mounted to the autonomous vehicle. A field of the planar distance sensor is at a non-parallel angle with respect to the floor when the autonomous vehicle is on the floor. The field of the planar distance sensor impinges on a surface of the protrusion when the autonomous vehicle is on the floor. The computing device is located on the autonomous vehicle and configured to develop a three-dimensional scan of the protrusion as the autonomous vehicle moves across the floor.
Type: Grant
Filed: January 12, 2018
Date of Patent: January 7, 2025
Assignee: DIVERSEY, INC.
Inventor: Aurle Gagne
-
Patent number: 12190579
Abstract: Provided are an apparatus and a method that perform selection or display mode change of a virtual object to be displayed depending on the type of real object in a target area to be a display area for the virtual object. Included are an object identification unit that executes identification processing of a real object in the real world, and a content display control unit that generates an AR image in which a real object and a virtual object are superimposed and displayed. The object identification unit identifies a real object in the target area to be the display area for the virtual object, and the content display control unit performs processing of selecting the virtual object to be displayed or processing of changing the display mode depending on the object identification result.
Type: Grant
Filed: July 7, 2020
Date of Patent: January 7, 2025
Assignee: SONY GROUP CORPORATION
Inventors: Tomohiko Gotoh, Hidenori Aoki, Fujio Arai, Keijiroh Nagano, Ryo Fukazawa, Haruka Fujisawa
-
Patent number: 12181288
Abstract: A method for securing a geographic position of a vehicle includes scanning multiple information sources, which each provide items of information which indicate the position of the vehicle; determining individual positions on the basis of each of the items of information of one of the information sources and items of surroundings information in the region of a position hypothesis; validating items of information, the associated individual position of which deviates by not more than a predetermined amount from the position hypothesis; determining the position of the vehicle on the basis of the validated items of information, wherein a change of a quality of the items of surroundings information is determined; and, on the basis of the change, determining a probability at which the determined position deviates by more than the predetermined amount from an actual position.
Type: Grant
Filed: December 1, 2021
Date of Patent: December 31, 2024
Assignee: Bayerische Motoren Werke Aktiengesellschaft
Inventors: Alexander Lottes, Pascal Minnerup, Bernd Spanfelner
-
Patent number: 12172301
Abstract: There is provided an electronic product that performs obstacle avoidance, positioning and object recognition according to image frames captured by the same optical sensor. The electronic product includes an optical sensor, a light emitting diode, a laser diode and a processor. The processor identifies an obstacle and a distance thereof according to image frames captured by the optical sensor when the laser diode is emitting light. The processor further performs the positioning and object recognition according to image frames captured by the optical sensor when the light emitting diode is emitting light.
Type: Grant
Filed: May 17, 2023
Date of Patent: December 24, 2024
Inventors: Guo-Zhen Wang, Hui-Hsuan Chen
-
Patent number: 12175695
Abstract: Disclosed are methods, devices, and computer-readable media for detecting lanes and objects in image frames of a monocular camera. In one embodiment, a method is disclosed comprising: receiving a sample set of image frames; detecting a plurality of markers in the sample set of image frames using a convolutional neural network (CNN); fitting lines based on the plurality of markers; detecting a plurality of vanishing points based on the lines; identifying a best fitting horizon for the sample set of image frames via a RANSAC algorithm; computing an inverse perspective mapping (IPM) based on the best fitting horizon; and computing a lane width estimate based on the sample set of image frames using the IPM in a rectified view and the parallel line fitting.
Type: Grant
Filed: September 20, 2023
Date of Patent: December 24, 2024
Assignee: MOTIVE TECHNOLOGIES, INC.
Inventors: Aamer Zaheer, Ali Hassan, Ahmed Ali, Hussam Ullah Khan, Afsheen Rafaqat Ali, Syed Wajahat Ali Shah Kazmi
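The "best fitting horizon via RANSAC" step can be sketched as fitting a line through per-frame vanishing points and keeping the hypothesis with the most inliers. The iteration count, inlier tolerance and toy points below are assumptions; the subsequent IPM computation is not shown.

```python
# RANSAC line fit through vanishing points, sketching the horizon-estimation step
# of US 12,175,695. Tolerances, iteration count and sample points are assumed.
import math
import random

def ransac_horizon(vanishing_points, n_iters=200, inlier_tol_px=3.0, seed=0):
    rng = random.Random(seed)
    best_line, best_inliers = None, -1
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(vanishing_points, 2)
        if (x1, y1) == (x2, y2):
            continue
        # Line through the two samples in ax + by + c = 0 form, normalized.
        a, b = y2 - y1, x1 - x2
        c = -(a * x1 + b * y1)
        norm = math.hypot(a, b)
        inliers = sum(1 for (x, y) in vanishing_points
                      if abs(a * x + b * y + c) / norm < inlier_tol_px)
        if inliers > best_inliers:
            best_line, best_inliers = (a / norm, b / norm, c / norm), inliers
    return best_line, best_inliers

vps = [(320, 239), (322, 241), (318, 240), (321, 238), (400, 300)]  # last point is an outlier
line, support = ransac_horizon(vps)
print(support, "inliers, horizon line:", tuple(round(v, 2) for v in line))
```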
-
Patent number: 12169942
Abstract: A method for training an image depth estimation model. A sample environmental image, sample environmental point cloud data and sample edge information of the sample environmental image are input into a to-be-trained model; initial depth information of each of the pixel points in the sample environmental image and a feature relationship between each of the pixel points and a corresponding neighboring pixel point of each of the pixel points are determined through the to-be-trained model; the initial depth information of each of the pixel points is optimized according to the feature relationship to obtain optimized depth information of each of the pixel points; and a parameter of the to-be-trained model is adjusted according to the optimized depth information to obtain the image depth estimation model.
Type: Grant
Filed: May 19, 2021
Date of Patent: December 17, 2024
Inventors: Minyue Jiang, Xipeng Yang, Xiao Tan, Hao Sun
-
Patent number: 12168438
Abstract: Systems and methods for generating a trajectory of a vehicle are disclosed. The methods include generating a spatial domain speed profile for a path represented as a sequence of poses of a vehicle between a first point and a second point. Generation of the spatial domain speed profile uses a longitudinal problem while excluding a derate interval on the path when speed of the vehicle cannot exceed a threshold. The methods further include generating a derate profile using the spatial domain speed profile and the derate interval, and transforming the derate interval to a temporal derate interval. The temporal derate interval may include time steps at which the speed of the vehicle cannot exceed the threshold while traversing the path, and may be used as an input to the longitudinal problem for generating a trajectory of the vehicle for navigating between the first point and the second point.
Type: Grant
Filed: October 11, 2021
Date of Patent: December 17, 2024
Assignee: Ford Global Technologies, LLC
Inventors: Alice Kassar, Scott Julian Varnhagen, Ramadev Burigsay Hukkeri
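Transforming the spatial derate interval into a temporal one amounts to integrating travel time along the path under the spatial speed profile and collecting the time steps that fall inside the interval. The discretization, time step and toy profile in the sketch below are illustrative assumptions; the longitudinal problem itself is not shown.

```python
# Sketch of mapping a spatial derate interval to temporal time steps, as described
# in the abstract of US 12,168,438. Profile, spacing and dt are assumed values.
import numpy as np

def spatial_to_temporal_derate(s_m: np.ndarray, v_mps: np.ndarray,
                               derate_start_m: float, derate_end_m: float,
                               dt_s: float = 0.1):
    """Return indices of dt_s time steps spent inside [derate_start_m, derate_end_m]."""
    ds = np.diff(s_m)
    seg_v = 0.5 * (v_mps[:-1] + v_mps[1:])                     # mean speed per segment
    t_at_s = np.concatenate(([0.0], np.cumsum(ds / seg_v)))    # arrival time at each station
    t_enter = np.interp(derate_start_m, s_m, t_at_s)
    t_exit = np.interp(derate_end_m, s_m, t_at_s)
    return list(range(int(np.floor(t_enter / dt_s)), int(np.ceil(t_exit / dt_s)) + 1))

s = np.linspace(0.0, 100.0, 51)            # path stations every 2 m
v = np.full_like(s, 10.0)                  # 10 m/s along the path...
v[(s >= 40.0) & (s <= 60.0)] = 4.0         # ...except a 4 m/s derate zone at 40-60 m
steps = spatial_to_temporal_derate(s, v, 40.0, 60.0)
print(steps[0], steps[-1], len(steps))     # first/last time-step index inside the derate zone
```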
-
Patent number: 12169257
Abstract: A process for calibrating a distance and range measurement device coupled to an industrial vehicle comprises taking a first measurement of an emission from the device at a first yaw angle relative to a roll axis of the device. A second measurement of the emission at a second yaw angle relative to the roll axis is taken. The second yaw angle is within an angular tolerance of the first yaw angle but in an opposite direction. The device is calibrated relative to the roll axis when the first and second measurements are within a tolerance of each other.
Type: Grant
Filed: September 29, 2023
Date of Patent: December 17, 2024
Assignee: Crown Equipment Corporation
Inventor: Sebastian Theos
-
Patent number: 12164303
Abstract: An electronic device having a moving part is provided. The electronic device includes a moving part; a light emitting element; an optical sensor; a memory; a communication interface; and a controller configured to: based on the electronic device moving and being in a recording mode, output light from the light emitting element; receive the light reflected from a ground by the optical sensor; acquire information about a moving path of the electronic device based on the received light and store the information in the memory; and based on receiving a control signal for moving the electronic device along the moving path from a user terminal device through the communication interface, set an operation mode of the electronic device as a travel mode, and control the moving part so that the electronic device moves along the moving path.
Type: Grant
Filed: March 21, 2022
Date of Patent: December 10, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Heesae Lee, Seungyeon Choe
-
Patent number: 12157499
Abstract: An assistance method for safe driving, applied in a vehicle-mounted electronic device, obtains RGB images of a scene in front of a vehicle, processes the RGB images by a trained depth estimation model, obtains depth images and converts the depth images into three-dimensional (3D) point cloud maps, determines 3D regions of interest therein, and obtains position and size information of objects in the 3D regions of interest. When the position information satisfies a first preset condition and/or the size information satisfies a second preset condition, the presence of obstacles in the 3D regions of interest is determined and the vehicle is controlled to issue an alarm. When the position information does not satisfy the first preset condition and/or the size information does not satisfy the second preset condition, the 3D regions of interest are determined to be obstacle-free, and the vehicle is permitted to continue driving.
Type: Grant
Filed: August 26, 2022
Date of Patent: December 3, 2024
Assignee: HON HAI PRECISION INDUSTRY CO., LTD.
Inventors: Shih-Chao Chien, Chin-Pin Kuo, Chieh Lee
-
Patent number: 12154328
Abstract: The present invention relates to a system for steering or guiding an agriculture harvesting machine (agriculture harvester) with a high degree of precision, without the need for the agriculture harvesting machine to have a guide stick or shoe in physical or mechanical contact with the agriculture to be harvested or to be in connection with remote navigation systems, such as GPS. The system includes a sensor mounted at the front of the harvester and a processor for processing information from the sensor to determine a boundary line and for steering the sod harvester along the boundary line.
Type: Grant
Filed: June 20, 2022
Date of Patent: November 26, 2024
Inventor: David R. Wilmot
-
Patent number: 12154346
Abstract: Methods and systems are provided for detecting objects by utilizing uncertainties. In some aspects, a process can include steps for receiving, by an autonomous vehicle system, a frame of a scene with a detected object; estimating, by the autonomous vehicle system, an overall probability of the detected object in the frame; estimating, by the autonomous vehicle system, covariances for each state of at least one bounding box; and balancing, by the autonomous vehicle system, confidence values of the at least one bounding box based on the overall probability of the detected object and the covariances of each state of the at least one bounding box.
Type: Grant
Filed: December 17, 2021
Date of Patent: November 26, 2024
Assignee: GM CRUISE HOLDINGS LLC
Inventors: Pranay Agrawal, Yong Jae Lee, Chiyu Jiang
-
Patent number: 12153428
Abstract: An autonomous robotic golf caddy which is capable of following a portable receiver at a pre-determined distance, and which is capable of sensing a potential impending collision with an object in its path and stopping prior to said potential impending collision.
Type: Grant
Filed: August 3, 2023
Date of Patent: November 26, 2024
Assignee: Lemmings, LLC
Inventors: Dennis W. Doane, Rick M. Doane, Timothy L. Doane, Shea P. Doane, Robert T. Nicola, Kenneth M. Burns
-
Low resolution traffic light candidate identification followed by high resolution candidate analysis
Patent number: 12146757
Abstract: Systems and methods are provided for vehicle navigation. The systems and methods may detect traffic lights. For example, one or more traffic lights may be detected using detection-redundant camera detection paths, a fusion of information from a traffic light transmitter and one or more cameras, based on contrast enhancement for night images, and based on low resolution traffic light candidate identification followed by high resolution candidate analysis. Additionally, the systems and methods may navigate based on a worst time to red estimation.
Type: Grant
Filed: November 18, 2021
Date of Patent: November 19, 2024
Assignee: MOBILEYE VISION TECHNOLOGIES LTD.
Inventors: Yoav Taieb, Yuval Hochman, Chagay Ki-Tov
-
Patent number: 12136270
Abstract: A vehicular trailer assist system includes a rearward viewing camera disposed at a vehicle that views a trailer hitched at a fifth wheel hitch at a bed of the vehicle. With the trailer hitched to the fifth wheel hitch at the bed of the vehicle, the vehicular trailer assist system, via processing fisheye-view frames of image data captured by the camera, transforms fisheye-view frames of image data captured by the rearward viewing camera into bird's-eye view frames of image data. The vehicular trailer assist system determines a region of interest (ROI) in the transformed bird's-eye view frames of image data that includes a region where the fifth wheel hitch is present. The vehicular trailer assist system, via a Hough transform that transforms the determined ROI from a Cartesian coordinate system to a polar coordinate system, determines a trailer angle of the trailer relative to the vehicle.
Type: Grant
Filed: January 4, 2022
Date of Patent: November 5, 2024
Assignee: Magna Electronics Inc.
Inventors: Jyothi P. Gali, Harold E. Joseph, Gajanan Subhash Kuchgave, Alexander Velichko, Guruprasad Mani Iyer Shankaranarayanan
-
Patent number: 12134521
Abstract: An automated in-rack picking solution enables improved efficiency by permitting automatic reconfiguration of automated picking system (APS) deployments within an automated storage and retrieval system (ASRS). The storage volume of an ASRS can be more thoroughly utilized, even with a smaller number of APSs, when at least one APS is operable to autonomously relocate within the ASRS based at least on a stored item's location and/or property (e.g., suitability for handling by a particular end effector). An exemplary solution includes an APS positioned to reach stored items within a first subset of storage locations when affixed to a first attachment point; a transport component operable to relocate the APS to a second attachment point, wherein the APS is positioned to reach stored items within a second subset of the storage locations when affixed to the second attachment point; and a controller operable to instruct relocation of the APS.
Type: Grant
Filed: November 30, 2022
Date of Patent: November 5, 2024
Assignee: Walmart Apollo, LLC
Inventors: Brian C. Roth, Paul Durkee, Ben Edwards
-
Patent number: 12131440
Abstract: The present disclosure relates to a method and apparatus for generating training data of a deep learning model for lane classification. The method according to an embodiment of the present disclosure is performed by an electronic apparatus and generates training data of a deep learning model for lane classification by generating a composite image of the other color lane using images of a white lane and the other color lane. The method includes determining a ratio of the other two channels based on one channel (reference color channel) for the three color channels of red (R), green (G) and blue (B) of the other color lane in the image of the other color lane; and generating a composite image of the other color lane by scaling the image of the white lane by applying the determined ratio to the other two channels with respect to the reference color channel of the white lane.
Type: Grant
Filed: April 14, 2022
Date of Patent: October 29, 2024
Assignee: HL Klemove Corp.
Inventors: S Vinuchackravarthy, Shubham Jain, Arpit Awasthi, Jitesh Kumar Singh
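The channel-ratio idea is small enough to sketch directly: measure how the other two channels relate to a reference channel in a colored-lane image, then scale a white-lane image by those ratios to synthesize the colored lane. The reference-channel choice, the use of whole patches rather than lane masks, and the pixel values are assumptions for illustration.

```python
# Sketch of channel-ratio lane compositing from the abstract of US 12,131,440.
# Reference channel, patches and pixel values are assumed for illustration.
import numpy as np

def channel_ratios(color_lane_rgb: np.ndarray, ref_channel: int = 0) -> np.ndarray:
    """Mean ratio of each channel to the reference channel over the lane pixels."""
    means = color_lane_rgb.reshape(-1, 3).mean(axis=0)
    return means / means[ref_channel]

def synthesize_color_lane(white_lane_rgb: np.ndarray, ratios: np.ndarray) -> np.ndarray:
    out = white_lane_rgb.astype(np.float32) * ratios   # scale R, G, B by the learned ratios
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)

yellow_patch = np.tile(np.array([200, 180, 60], np.uint8), (8, 8, 1))   # sampled yellow lane pixels
white_patch = np.full((8, 8, 3), 230, np.uint8)                          # white lane pixels
composite = synthesize_color_lane(white_patch, channel_ratios(yellow_patch))
print(composite[0, 0])    # [230 207 69] -> yellow-tinted lane pixel
```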
-
Patent number: 12124269
Abstract: Systems and methods for the simultaneous localization and mapping of autonomous vehicle systems are provided. A method includes receiving a plurality of input image frames from a plurality of asynchronous image devices triggered at different times to capture the plurality of input image frames. The method includes identifying reference image frame(s) corresponding to a respective input image frame by matching the field of view of the respective input image frame to the fields of view of the reference image frame(s). The method includes determining association(s) between the respective input image frame and three-dimensional map point(s) based on a comparison of the respective input image frame to the one or more reference image frames. The method includes generating an estimated pose for the autonomous vehicle based on the one or more three-dimensional map points. The method includes updating a continuous-time motion model of the autonomous vehicle based on the estimated pose.
Type: Grant
Filed: November 1, 2021
Date of Patent: October 22, 2024
Assignee: AURORA OPERATIONS, INC.
Inventors: Anqi Joyce Yang, Can Cui, Ioan Andrei Bârsan, Shenlong Wang, Raquel Urtasun