Picture Signal Generator Patents (Class 348/46)
-
Patent number: 12257110
Abstract: A system to build a three-dimensional model of a user's teeth and adjoining tissues which comprises at least one intraoral device to be arranged in the oral cavity of the user; at least one set of cameras arranged in the intraoral device to capture at least one stereo image of the oral cavity of the at least one user; and at least one processing medium which receives the at least one stereo image; wherein the at least one processing medium comprises at least one trained neural network which analyzes the at least one stereo image to estimate at least one depth map; and wherein the at least one processing medium also comprises at least one block of location and mapping which sequentially integrates the at least one stereo image and the at least one depth map into the generated three-dimensional model.
Type: Grant
Filed: October 18, 2024
Date of Patent: March 25, 2025
Inventors: Gerard Andre Philip Liberman Paz, Javier Ignacio Liberman Salazar, Felipe Ignacio Pesce Bentjerodt, Carlos Julio Santander Guerra, Cristobal Gaspar Ignacio Pizarro Venegas, Diego Facundo Lazcano Arcos, Andres Garabed Baloian Gacitua, David Caro Benado
-
Patent number: 12256051
Abstract: The present technology relates to an image processing apparatus and method and a program that enable natural representation of light in an image in accordance with a viewpoint. The image processing apparatus calculates information indicating a change in a light source region between an input image and a viewpoint-converted image that is obtained by performing viewpoint conversion on the input image on the basis of a specified viewpoint, and causes a change in representation of light in the viewpoint-converted image on the basis of the calculated information indicating a change in the light source region. The present technology can be applied to an image display system that generates a pseudo stereoscopic image with motion parallax from one image.
Type: Grant
Filed: March 31, 2021
Date of Patent: March 18, 2025
Assignee: SONY GROUP CORPORATION
Inventor: Kenichiro Hosokawa
-
Patent number: 12236629
Abstract: A request signal that indicates a quality for a determination of an orientation of a road user is received. The orientation of the road user is determined based on a) image data when the request signal indicates the quality is below a predetermined quality for the determination of the orientation of the road user, or b) LIDAR data and image data when the request signal indicates the quality is the predetermined quality.
Type: Grant
Filed: September 29, 2020
Date of Patent: February 25, 2025
Assignee: Ford Global Technologies, LLC
Inventors: Turgay Isik Aslandere, Evangelos Bitsanis, Michael Marbaix, Alain Marie Roger Chevalier, Frederic Stefan
-
Patent number: 12229880
Abstract: The disclosure provides a method for generating a relightable 3D portrait using a deep neural network and a computing device implementing the method. The method makes it possible to obtain, in real time and on computing devices having limited processing resources, realistically relighted 3D portraits with quality higher than or at least comparable to that achieved by prior-art solutions, without utilizing complex and costly equipment.
Type: Grant
Filed: November 20, 2023
Date of Patent: February 18, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Artem Mikhailovich Sevastopolskiy, Victor Sergeevich Lempitsky
-
Patent number: 12229994
Abstract: Disclosed are various embodiments for evaluating performance metrics (e.g., accuracy, depth precision, curvature accuracy, coverage, data acquisition time, etc.) of sensors (e.g., cameras, depth cameras, color cameras, etc.) according to captured image data. One or more reference boards have different types of reference structures (e.g., three-dimensional shapes, materials, three-dimensional patterns (e.g., waves, steps, etc.), gaps, etc.) that are used to evaluate the performance properties of a sensor. A reference board is attached to a robotic arm and positioned in front of a sensor. The robotic arm positions the reference board in front of the sensor in different viewpoints while the sensor captures image data associated with the reference board. The captured image data is compared with ground truth data associated with the reference board to determine performance metrics of the sensor.
Type: Grant
Filed: August 23, 2021
Date of Patent: February 18, 2025
Assignee: Amazon Technologies, Inc.
Inventor: Johannes Kulick
-
Patent number: 12225066
Abstract: Method, device, and non-transitory storage medium for adaptive streaming of immersive media are provided. The method may include determining characteristics associated with a scene to be transmitted to the end client, adjusting at least a part of the scene to be transmitted to the end client based on the determined characteristics, and transmitting an adaptive stream of the lightfield or holographic immersive media comprising the adjusted scene based on the determined characteristics.
Type: Grant
Filed: October 21, 2022
Date of Patent: February 11, 2025
Assignee: TENCENT AMERICA LLC
Inventors: Rohit Abhishek, Arianne Hinds
-
Patent number: 12222423
Abstract: A method of measuring a distance between a vehicle and one or more objects includes generating a modulation signal; generating a modulated light emitting diode (LED) transmission signal, via a vehicle LED driver assembly; transmitting a plurality of light beams based at least in part on the generated modulated LED transmission signal; capturing a reflection of the plurality of light beams off the one or more objects, utilizing one or more lens assemblies and a camera, the camera including an array of pixel sensors and being positioned on the vehicle; communicating a series of measurements representing the captured plurality of light beam reflections; calculating, utilizing the time-of-flight sensor module, time-of-flight measurements between the vehicle LED light assembly and the one or more objects; and calculating distances, utilizing a depth processor module, between the vehicle LED light assembly and the one or more objects based on the time-of-flight measurements.
Type: Grant
Filed: August 19, 2024
Date of Patent: February 11, 2025
Assignee: Wireless Photonics, LLC
Inventors: Bahram Jalali, Alexandra Jalali, Mehdi Hatamian, Ahmadreza Rofougaran
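The distance calculation in this abstract rests on the standard round-trip relation d = c·t/2. The small Python sketch below illustrates that relation only, with an assumed function name and nanosecond-scale example values; it is not the patented pipeline.

```python
# Minimal sketch of the round-trip time-of-flight relation d = c * t / 2.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second


def tof_to_distance(round_trip_seconds):
    """Convert round-trip times (seconds) into distances (metres)."""
    return [SPEED_OF_LIGHT * t / 2.0 for t in round_trip_seconds]


# Example: round-trip times of 20 ns and 66.7 ns give roughly 3 m and 10 m.
print(tof_to_distance([20e-9, 66.7e-9]))
```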
-
Patent number: 12222455
Abstract: A time-of-flight apparatus has: a light source for emitting light to a scene; a light detector for detecting light from the scene; and a control, the control being configured to: acquire a frame of detected light from the light detector, wherein the frame corresponds to a predetermined time interval, and drive the light source for emitting light during the acquisition of the frame, wherein the light energy accumulated within the frame has a predetermined value, and wherein the frame is divided into active light time intervals during which light is emitted to the scene.
Type: Grant
Filed: September 17, 2019
Date of Patent: February 11, 2025
Assignee: Sony Semiconductor Solutions Corporation
Inventors: Manuel Amaya-Benitez, Ward Van Der Tempel, Kaji Nobuaki, Hiroaki Nishimori
-
Patent number: 12223602
Abstract: Augmented reality (AR) systems, devices, media, and methods are described for creating a handcrafted AR experience. The handcrafted AR experiences are created by capturing images of a scene, identifying an object receiving surface and corresponding surface coordinates, identifying a customizable AR primary object associated with at least one set of primary object coordinates, generating AR overlays including the customizable AR primary object for positioning adjacent the object receiving surface, presenting the AR overlays, receiving customization commands, generating handcrafted AR overlays including customizations associated with the customizable AR primary object responsive to the customization commands, presenting the handcrafted AR overlays, recording the handcrafted AR overlays, creating a handcrafted AR file including the recorded overlays, and transmitting the handcrafted AR file.
Type: Grant
Filed: August 15, 2022
Date of Patent: February 11, 2025
Assignee: Snap Inc.
Inventors: Tianying Chen, Timothy Chong, Sven Kratz, Fannie Liu, Andrés Monroy-Hernández, Olivia Seow, Yu Jiang Tham, Rajan Vaish, Lei Zhang
-
Patent number: 12213836
Abstract: Systems and methods for preprocessing three dimensional (3D) data prior to generating inverted renders are disclosed herein. The preprocessing may include segmenting the 3D data to remove portions of the data associated with noise such that those portions do not appear in the generated render. The segmentation may include applying a mask to the 3D data. The mask may be generated by sorting data points in the 3D data set into a first set or a second set. In some examples, the 3D data may be filtered prior to generating the mask. In some examples, the mask may be adjusted based on feature recognition. The preprocessing may allow the visualization of hypoechoic regions of interest in a volume.
Type: Grant
Filed: February 21, 2020
Date of Patent: February 4, 2025
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Paul Sheeran, Thanasis Loupas, Charles Tremblay-Darveau
-
Patent number: 12212732
Abstract: There is provided an image processing apparatus comprising an obtainment unit and a generation unit. The obtainment unit obtains a first circular fisheye image accompanied by a first missing region in which no pixel value is present. The generation unit generates a first equidistant cylindrical projection image by performing first equidistant cylindrical transformation processing based on the first circular fisheye image. The generation unit generates the first equidistant cylindrical projection image such that a first corresponding region corresponding to the first missing region has a pixel value in the first equidistant cylindrical projection image.
Type: Grant
Filed: May 27, 2022
Date of Patent: January 28, 2025
Assignee: Canon Kabushiki Kaisha
Inventors: Toru Kobayashi, Toshitaka Aiba, Yuki Tsuihiji
-
Patent number: 12210145
Abstract: A single-particle localization microscope, including an optical system configured to illuminate a sample region with a sequence of light patterns having spatially different distributions of illumination light adapted to cause a single particle located in the sample region to emit detection light, a detector configured to detect a sequence of intensities of the detection light emerging from the sample region in response to the sequence of illuminating light patterns, and a processor configured to determine, based on the sequence of intensities of the detection light, an arrangement of potential positions for locating the particle. The processor further illuminates the sample region with at least one subsequent light pattern, causes detection of at least one subsequent intensity, and decides, based on the at least one subsequent intensity of the detection light, which one of the multiple potential positions represents an actual position of the particle in the sample region.
Type: Grant
Filed: January 16, 2023
Date of Patent: January 28, 2025
Assignee: LEICA MICROSYSTEMS CMS GMBH
Inventor: Marcus Dyba
-
Patent number: 12212823
Abstract: Auto exposure processing for spherical images improves image quality by reducing visible exposure level variation along a stitch line within a spherical image. An average global luminance value is determined based on luminance values determined for first and second images, which are based on auto exposure configurations of first and second image sensors used to obtain those first and second images. Delta luminance values are determined for the first and second images using the average global luminance value. The first and second images are then updated using the delta luminance values, and the updated first and second images are used to produce a spherical image.
Type: Grant
Filed: November 9, 2023
Date of Patent: January 28, 2025
Assignee: GoPro, Inc.
Inventors: Guillaume Matthieu Guérin, Sylvain Leroy, Yoël Taïeb, Giuseppe Moschetti
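As a rough illustration of the luminance-balancing steps named above (average global luminance, per-image delta luminance, update), here is a minimal Python sketch assuming normalized luminance images and a simple additive correction; the correction actually applied in the patent is not specified here.

```python
import numpy as np


def balance_exposure(front, back):
    """Sketch of stitch-line exposure balancing for two hemispherical images.

    front, back: float arrays of luminance in [0, 1] from the two sensors.
    Returns corrected copies whose mean luminances meet at the average
    global luminance (illustrative additive correction only).
    """
    lum_front, lum_back = float(front.mean()), float(back.mean())
    global_lum = (lum_front + lum_back) / 2.0   # average global luminance
    delta_front = global_lum - lum_front        # delta luminance values
    delta_back = global_lum - lum_back
    return (np.clip(front + delta_front, 0.0, 1.0),
            np.clip(back + delta_back, 0.0, 1.0))
```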
-
Patent number: 12205296
Abstract: The implementation of the present disclosure provides a point cloud partitioning method and device, and a computer-readable storage medium, including: when performing stripe division along the longest side, adjusting an initial partitioning position or determining the stripe division length according to the size of a preset block, to obtain a stripe division position, wherein the length of the first n−1 stripes along the longest side is an integer multiple of the side length of the preset block, n is the number of divided stripes, and n is a positive integer greater than or equal to 2.
Type: Grant
Filed: November 3, 2023
Date of Patent: January 21, 2025
Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Inventors: Wei Zhang, Fuzheng Yang, Shuai Wan, Yanzhuo Ma, Junyan Huo, Sujun Zhang, Zexing Sun
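The key constraint in this abstract is that the first n−1 stripe lengths along the longest side are integer multiples of the preset block side. The sketch below, with assumed variable names and the assumption that the longest side is at least n blocks long, shows one way such lengths could be chosen.

```python
def stripe_lengths(longest_side, n, block_side):
    """Split longest_side into n stripes where the first n-1 lengths are
    integer multiples of block_side (illustrative sketch; assumes
    longest_side >= n * block_side)."""
    if n < 2:
        return [longest_side]
    # Round the even split down to a whole number of blocks, at least one block.
    base = max(block_side, (longest_side // n) // block_side * block_side)
    lengths = [base] * (n - 1)
    lengths.append(longest_side - base * (n - 1))  # last stripe takes the remainder
    return lengths


# Example: a side of 1000 units, 4 stripes, block side 64 -> [192, 192, 192, 424].
print(stripe_lengths(1000, 4, 64))
```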
-
Patent number: 12205219
Abstract: A system, method, or computer program product for generating stereoscopic images. One of the methods includes identifying, in a first three-dimensional coordinate system of a first three-dimensional virtual environment, a location and orientation of a first virtual object that is a virtual stereoscopic display object; identifying an eyepoint pair in the first virtual environment; identifying, in a second three-dimensional coordinate system of a second three-dimensional virtual environment, a location and orientation of a second virtual object that is in the second virtual environment; for each eyepoint of the eyepoint pair, rendering an inferior image of the second virtual object; for each eyepoint of the eyepoint pair, rendering a superior image of the first virtual environment, comprising rendering, in the superior image for each eyepoint, the corresponding inferior image onto the virtual stereoscopic display object; and displaying, on a physical stereoscopic display, the first virtual environment.
Type: Grant
Filed: May 12, 2023
Date of Patent: January 21, 2025
Assignee: Tanzle, Inc.
Inventor: Michael T. Mayers
-
Patent number: 12206965
Abstract: The present invention relates to enhancing image capturing devices and image producing devices in such a way that they can capture and produce 3D images without the need for any gear to be worn by the viewers. The proposed solution is inspired by ray optics, geometry, mirrors, diamond cuts, eyes, rods & cones of human eyes, and the retina design of human eyes. It doesn't "trick" the eyes and brain, and it doesn't manipulate the images as current technologies do to give the 3D effect. The invention describes how the light sensors and light emitters, and their layouts, are to be changed in imaging devices like cameras and TV screens to capture and produce 3D images, whether or not the directional information can be captured by image sensors and emitters.
Type: Grant
Filed: April 18, 2020
Date of Patent: January 21, 2025
Inventors: Vibha Sharma, Nishant Sharma
-
Patent number: 12205312
Abstract: Embodiments of this application provide a method for obtaining depth information and an electronic device, which is applied to the field of image processing technologies and can help an electronic device improve the accuracy of obtained depth information. The electronic device includes a first camera, a second camera, and an infrared projector. The electronic device receives an instruction to obtain depth information of a target object; transmits infrared light with a light spot by the infrared projector; collects first image information of the target object by using the first camera; collects second image information by using the second camera, where the first and second image information include a feature of the target object and a texture feature formed by the infrared light; and calculates depth information of the target object based on the first and second image information, the first length, and the lens focal lengths of the first camera and the second camera.
Type: Grant
Filed: October 21, 2019
Date of Patent: January 21, 2025
Assignee: HONOR DEVICE CO., LTD.
Inventors: Guoqiao Chen, Mengyou Yuan, Jiangfeng Yuan
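Depth calculation from two images separated by a known baseline (presumably the "first length") and the lens focal lengths typically follows the triangulation relation Z = f·B/d. The sketch below shows that textbook relation with assumed units; it is not the patent's exact algorithm.

```python
def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Textbook stereo triangulation Z = f * B / d (illustrative sketch).

    disparity_px: pixel disparity of a matched texture feature between images
    baseline_m:   distance between the two cameras
    focal_px:     lens focal length expressed in pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px


# Example: 12 px disparity, 2.5 cm baseline, 1400 px focal length -> about 2.9 m.
print(depth_from_disparity(12, 0.025, 1400))
```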
-
Patent number: 12196888
Abstract: A light shaping optic may include a substrate. The light shaping optic may include a structure disposed on the substrate, wherein the structure is configured to receive one or more input beams of light with a uniform intensity field and less than a threshold total intensity, and wherein the structure is configured to shape the one or more input beams of light to form one or more output beams of light with a non-uniform intensity field and less than the threshold total intensity.
Type: Grant
Filed: June 14, 2023
Date of Patent: January 14, 2025
Assignee: VIAVI Solutions Inc.
Inventors: Scott Rowlands, Markus Bilger, William D. Houck
-
Patent number: 12196558
Abstract: A sensor package and associated method provides roadside sensor-based data. The sensor package includes a controller, a network access device for remote communications, a GNSS receiver, an inertial measurement unit (IMU), and environment sensors. The controller operates under software control and communicates with the other sensor package devices to: collect environment sensor data using the sensors; receive GNSS signals indicating the current sensor package location; obtain IMU data; determine position/orientation data that includes position data and orientation data, where the orientation data is based on the IMU data; and send the environment sensor data and the position/orientation data of the sensor package to an external device using the network access device. The environment sensor data includes at least one sensor data frame, and the sensor data frame is matched with position/orientation data based on a time associated with collecting the sensor data frame and the position/orientation data.
Type: Grant
Filed: February 14, 2022
Date of Patent: January 14, 2025
Assignee: The Regents of the University of Michigan
Inventors: Tyler S. Worman, Gregory J. McGuire
-
Patent number: 12200356
Abstract: A virtual or augmented reality display system that controls power inputs to the display system as a function of image data. The image data itself is made of a plurality of image data frames, each with constituent color components of, and depth planes for displaying, rendered content. Light sources, or spatial light modulators that relay illumination from the light sources, may receive signals from a display controller to adjust a power setting of the light source or spatial light modulator, and/or control the depth of displayed image content, based on control information embedded in an image data frame.
Type: Grant
Filed: September 7, 2023
Date of Patent: January 14, 2025
Assignee: Magic Leap, Inc.
Inventors: Jose Felix Rodriguez, Ricardo Martinez Perez, Reza Nourai
-
Patent number: 12198263
Abstract: Determining a fit of a real-world electronic gaming machine (EGM) using a virtual representation of an EGM includes obtaining the virtual representation, obtaining sensor data for a real-world environment in which the virtual representation is to be presented, and detecting one or more surfaces in the real-world environment based on the sensor data. A determination is made that the virtual representation of the electronic gaming machine fits within the real-world environment in accordance with the real-world dimensions and the detected one or more surfaces. The virtual representation is presented in a mixed reality environment such that the virtual representation is blended into a view of a real-world environment along the one or more surfaces.
Type: Grant
Filed: September 30, 2022
Date of Patent: January 14, 2025
Assignee: Aristocrat Technologies Australia Pty Limited
Inventors: Upinder Dhaliwal, Eric Droukas, Patrick Petrella, III
-
Patent number: 12198376
Abstract: A method for creating a marker-based shared augmented reality (AR) session starts with initializing a shared AR session by a first device and by a second device. The first device displays a marker on a display. The second device detects the marker using a camera included in the second device and captures an image of the marker using the camera. The second device determines a transformation between the first device and the second device using the image of the marker. A common coordinate frame is then determined using the transformation, the shared AR session is generated using the common coordinate frame, and the shared AR session is caused to be displayed by the first device and by the second device. Other embodiments are described herein.
Type: Grant
Filed: May 6, 2023
Date of Patent: January 14, 2025
Assignee: Snap Inc.
Inventors: Piers Cowburn, David Li, Isac Andreas Müller Sandvik, Qi Pan, Matan Zohar
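One way to picture the transformation step is as a chain of marker poses: the first device knows where it drew the marker in its own frame, the second device estimates the marker pose from its camera image, and composing the two relates the devices and can anchor a common frame. The 4x4 homogeneous-matrix sketch below is an assumption-laden illustration, not the patented procedure.

```python
import numpy as np


def transform_b_from_a(T_a_from_marker, T_b_from_marker):
    """Relative pose of device A expressed in device B's frame (sketch).

    T_a_from_marker: 4x4 pose of the displayed marker in device A's frame.
    T_b_from_marker: 4x4 pose of the marker as estimated by device B's camera.
    The composition T_b_from_marker @ inv(T_a_from_marker) maps device A's
    frame into device B's frame and can serve as the common coordinate frame
    of the shared AR session (illustrative only).
    """
    return T_b_from_marker @ np.linalg.inv(T_a_from_marker)
```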
-
Patent number: 12188759
Abstract: The specification provides a depth imaging method and device and a computer-readable storage medium. The method includes: controlling an emission module comprising a light emitting device to emit at least two speckle patterns that change temporally to a target object; controlling an acquisition module comprising a light sensor to acquire reflected speckle patterns of the at least two speckle patterns reflected by the target object; and performing spatial-temporal stereo matching by using the reflected speckle patterns and the at least two reference speckle patterns, to calculate offsets of pixel points between speckles of the at least two reference speckle patterns and speckles of the reflected speckle patterns, and calculating depth values of the pixel points according to the offsets.
Type: Grant
Filed: June 1, 2022
Date of Patent: January 7, 2025
Assignee: Orbbec Inc.
Inventors: Yuhua Xu, Zhenzhong Xiao, Bin Xu, Yushan Yu
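The offset calculation can be pictured as matching a temporal stack of reflected speckle frames against the reference stack at candidate horizontal shifts and keeping the best shift per pixel. The brute-force sum-of-absolute-differences sketch below is illustrative only; converting an offset to a depth value would follow separately via triangulation.

```python
import numpy as np


def temporal_offsets(reflected, reference, max_offset):
    """Per-pixel horizontal offsets from spatio-temporal speckle matching.

    reflected, reference: arrays of shape (T, H, W) holding T speckle frames.
    Returns an (H, W) integer array of offsets chosen by the smallest sum of
    absolute differences over the temporal axis (brute-force sketch only).
    """
    T, H, W = reflected.shape
    offsets = np.zeros((H, W), dtype=np.int32)
    for y in range(H):
        for x in range(W):
            costs = []
            for d in range(max_offset + 1):
                if x - d < 0:
                    costs.append(np.inf)   # shift falls outside the image
                    continue
                diff = reflected[:, y, x].astype(float) - reference[:, y, x - d].astype(float)
                costs.append(float(np.abs(diff).sum()))
            offsets[y, x] = int(np.argmin(costs))
    return offsets
```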
-
Patent number: 12192654
Abstract: Indirect time-of-flight (i-ToF) image sensor pixels, i-ToF image sensors including such pixels, stereo cameras including such image sensors, and sensing methods to obtain i-ToF detection and phase detection information using such image sensors and stereo cameras. An i-ToF image sensor pixel may comprise a plurality of sub-pixels, each sub-pixel including a photodiode, a single microlens covering the plurality of sub-pixels, and a read-out circuit for extracting i-ToF phase signals of each sub-pixel individually.
Type: Grant
Filed: April 24, 2024
Date of Patent: January 7, 2025
Assignee: Corephotonics Ltd.
Inventors: Nadav Geva, Michael Scherer, Ephraim Goldenberg, Gal Shabtay
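Indirect ToF pixels commonly recover depth from four phase-shifted correlation samples. The sketch below uses one common 4-phase convention, given here as an assumption for illustration rather than the claimed pixel architecture, to turn those samples into a depth estimate.

```python
import math


def itof_depth(a0, a90, a180, a270, mod_freq_hz):
    """Depth from four phase-shifted i-ToF correlation samples (sketch).

    a0..a270: correlation samples at 0/90/180/270 degree demodulation offsets.
    Uses one common convention: phase = atan2(a270 - a90, a0 - a180).
    """
    c = 299_792_458.0
    phase = math.atan2(a270 - a90, a0 - a180)   # wrapped phase
    phase %= 2.0 * math.pi                      # map into [0, 2*pi)
    return c * phase / (4.0 * math.pi * mod_freq_hz)


# Example: samples at 20 MHz modulation -> roughly 5.9 m (unambiguous range 7.5 m).
print(itof_depth(0.6, 0.9, 0.4, 0.1, 20e6))
```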
-
Patent number: 12189037
Abstract: A three-dimensional point cloud generation method for generating a three-dimensional point cloud including one or more three-dimensional points includes: obtaining (i) a two-dimensional image obtained by imaging a three-dimensional object using a camera and (ii) a first three-dimensional point cloud obtained by sensing the three-dimensional object using a distance sensor; detecting, from the two-dimensional image, one or more attribute values of the two-dimensional image that are associated with a position in the two-dimensional image; and generating a second three-dimensional point cloud including one or more second three-dimensional points each having an attribute value, by performing, for each of the one or more attribute values detected, (i) identifying, from a plurality of three-dimensional points forming the first three-dimensional point cloud, one or more first three-dimensional points to which the position of the attribute value corresponds, and (ii) appending the attribute value to the one or more first three-dimensional points identified.
Type: Grant
Filed: May 19, 2020
Date of Patent: January 7, 2025
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors: Pongsak Lasang, Chi Wang, Zheng Wu, Sheng Mei Shen, Toshiyasu Sugio, Tatsuya Koyama
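One concrete way to picture the "identify corresponding points and append the attribute" step is to project each sensed 3D point into the 2D image with a pinhole model and copy the pixel value at that location onto the point. The intrinsics and naming below are assumptions for illustration, not the claimed method.

```python
import numpy as np


def append_image_attributes(points_xyz, image, fx, fy, cx, cy):
    """Append an image attribute value to each 3D point it projects onto.

    points_xyz: (N, 3) array of points in the camera frame, Z pointing forward.
    image:      (H, W) array of attribute values (e.g. intensity or one colour channel).
    Returns a list of (x, y, z, attribute) tuples for points that land inside
    the image (illustrative pinhole-projection sketch only).
    """
    h, w = image.shape
    out = []
    for x, y, z in points_xyz:
        if z <= 0:
            continue                      # behind the camera, no correspondence
        u = int(round(fx * x / z + cx))   # pinhole projection to pixel column
        v = int(round(fy * y / z + cy))   # and pixel row
        if 0 <= u < w and 0 <= v < h:
            out.append((float(x), float(y), float(z), image[v, u]))
    return out
```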
-
Patent number: 12189835
Abstract: A method for controlling a warning system including a user device. The method includes, in the warning system: receiving a wireless signal transmitted towards a surface on which a user is located, the range of which defines, on the surface, a plurality of zones including a no-warning first zone and a warning second zone, a characteristic relating to the transmitted signal being able to take a second value corresponding to the second zone; upon detecting the characteristic having the second value, generating a warning signal; and transmitting the warning signal to a control unit of the warning system in order to notify a user of the system.
Type: Grant
Filed: December 9, 2019
Date of Patent: January 7, 2025
Assignee: ORANGE
Inventors: Sylvain Leroux, Thierry Gaillet
-
Patent number: 12189106
Abstract: A microinstrument system comprises: a microinstrument having at least one integrated optical fiber which has a distal end facing the object to be observed; a recording apparatus, to which light from the object to be observed can be supplied for recording image data with the aid of the at least one optical fiber; and a determining device, which is designed to determine the positions of the distal end of the at least one optical fiber at the recording times of the particular image data; wherein a data-processing device is connected to the recording apparatus in order to receive the image data, is connected to the determining device in order to receive the position data, and is designed to compile the image data with the aid of the position data to form a two-dimensional or three-dimensional image.
Type: Grant
Filed: April 14, 2022
Date of Patent: January 7, 2025
Assignee: CARL ZEISS MEDITEC AG
Inventors: Delbert Peter Andrews, Christian Voigt, Carolin Klusmann, Matthias Hillenbrand, Max Riedel, Christian Marzi, Franziska Mathis-Ullrich, Fritz Hengerer
-
Patent number: 12184827
Abstract: Methods and arrangements for obtaining and associating 2D image data with 3D image data are provided, the image data being based on light triangulation performed by an imaging system, where a measure object is illuminated with first light and an image sensor senses reflected first light from the measure object during a first exposure period, resulting in a first image with first intensity peaks occurring at first sensor positions, SP1. The measure object is also illuminated with second light(s), and any reflected second light from the measure object is sensed by the image sensor during third exposure period(s), resulting in one or more third images. For each respective first sensor position, SP1, in the first image, a respective third sensor position, SP3, is selected in said one or more third images.
Type: Grant
Filed: January 19, 2023
Date of Patent: December 31, 2024
Assignee: SICK IVP AB
Inventors: Anders Murhed, Mattias Johannesson
-
Patent number: 12178685
Abstract: A dental scanning system comprises an intraoral scanner and one or more processors. The intraoral scanner comprises one or more light projectors configured to project a pattern (comprising a plurality of pattern features) on a dental object, and two or more cameras configured to acquire sets of images, each comprising at least one image from each camera. The processor(s) are configured to determine a correspondence between pattern features in the pattern of light and image features in each set of images by determining intersections of projector rays corresponding to one or more of the plurality of pattern features and camera rays corresponding to the one or more image features in three-dimensional (3D) space based on calibration data that associates the camera rays corresponding to pixels on the camera sensor of each of the two or more cameras to the projector rays.
Type: Grant
Filed: January 19, 2024
Date of Patent: December 31, 2024
Assignee: Align Technology, Inc.
Inventors: Ofer Saphier, Yossef Atiya, Arkady Rudnitsky, Nir Makmel, Sergei Ozerov, Tal Verker, Tal Levy
-
Patent number: 12179755
Abstract: A trajectory generation device 1 determines first to third predicted trajectories by using model formulas (1) to (20) that model a trajectory of a host vehicle 3 on the basis of peripheral information of an information detection device 4, calculates a risk potential R_p_mi_vj_v_r by using the first to third predicted trajectories, and determines moving average values P_c1 to P_c3 so that the risk potential R_p_mi_vj_v_r decreases. In a case where the moving average values P_c1 and P_c2 have the same sign, the first predicted trajectory is determined while reflecting the moving average value P_c2 for correcting the second predicted trajectory.
Type: Grant
Filed: March 26, 2022
Date of Patent: December 31, 2024
Assignee: HONDA MOTOR CO., LTD.
Inventor: Yuji Yasui
-
Patent number: 12183063
Abstract: In examples, image data representative of an image of a field of view of at least one sensor may be received. Source areas may be defined that correspond to a region of the image. Areas and/or dimensions of at least some of the source areas may decrease along at least one direction relative to a perspective of the at least one sensor. A downsampled version of the region (e.g., a downsampled image or feature map of a neural network) may be generated from the source areas based at least in part on mapping the source areas to cells of the downsampled version of the region. Resolutions of the region that are captured by the cells may correspond to the areas of the source areas, such that certain portions of the region (e.g., portions at a far distance from the sensor) retain higher resolution than others.
Type: Grant
Filed: June 30, 2020
Date of Patent: December 31, 2024
Assignee: NVIDIA Corporation
Inventors: Haiguang Wen, Bernhard Firner, Mariusz Bojarski, Zongyi Yang, Urs Muller
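A toy version of the source-area idea: pool input rows into bands whose heights grow toward the bottom of the image, so rows near the horizon (far from the sensor) are averaged over thin bands and keep more vertical resolution. The linear height schedule and row-only pooling below are assumptions for illustration, not the patented mapping.

```python
import numpy as np


def perspective_downsample_rows(image, out_rows, min_height=1.0):
    """Downsample image rows using source bands that grow toward the bottom.

    image: (H, W) array.  Returns an (out_rows, W) array in which each output
    row is the mean of one band of input rows; the top bands are thin (more
    retained resolution for distant content) and the bottom bands are thick.
    Illustrative sketch only.
    """
    H, _ = image.shape
    # Linearly increasing band heights whose exact sum is H by construction.
    raw = np.linspace(min_height, 2.0 * H / out_rows - min_height, out_rows)
    heights = np.maximum(1, np.round(raw).astype(int))
    heights[-1] += H - heights.sum()            # absorb any rounding error
    bands, start = [], 0
    for h in heights:
        bands.append(image[start:start + h].mean(axis=0))
        start += h
    return np.stack(bands)


# Example: 100 input rows pooled into 10 output rows with band heights 1, 3, ..., 19.
print(perspective_downsample_rows(np.arange(100.0).reshape(100, 1), 10).ravel())
```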
-
Patent number: 12174300
Abstract: A method of measuring a distance between a vehicle and one or more objects includes generating a modulation signal; generating a modulated light emitting diode (LED) transmission signal, via a vehicle LED driver assembly; transmitting a plurality of light beams based at least in part on the generated modulated LED transmission signal; capturing a reflection of the plurality of light beams off the one or more objects, utilizing one or more lens assemblies and a camera, the camera including an array of pixel sensors and being positioned on the vehicle; communicating a series of measurements representing the captured plurality of light beam reflections; calculating, utilizing the time-of-flight sensor module, time-of-flight measurements between the vehicle LED light assembly and the one or more objects; and calculating distances, utilizing a depth processor module, between the vehicle LED light assembly and the one or more objects based on the time-of-flight measurements.
Type: Grant
Filed: July 21, 2024
Date of Patent: December 24, 2024
Assignee: Wireless Photonics, LLC
Inventors: Bahram Jalali, Alexandra Jalali, Mehdi Hatamian, Ahmadreza Rofougaran
-
Patent number: 12169370
Abstract: The invention is directed to a device for exposure control in photolithographic direct exposure processes for two-dimensional structures in photosensitive coatings and to a method for converting registration data into direct exposure data. The object of the invention, to find an improved exposure control in direct exposure methods for two-dimensional structures in photosensitive layers which permits a registration of target marks independent from defined locations of the target marks, is met according to the invention in that a plurality of entocentric cameras are arranged in a registration unit (1) in linear alignment transverse to the one-dimensional movement of the substrate (2) to form a gapless linear scanning area (23) over a predetermined width of the substrate (2).
Type: Grant
Filed: September 15, 2021
Date of Patent: December 17, 2024
Assignee: Laser Imaging Systems GmbH
Inventors: Christian Schwarz, Jonas Burghoff, Stefan Heinemann, Holger Wagner, Steffen Rücker, Frank Jugel
-
Patent number: 12169954
Abstract: Method and computing device for inferring, via a neural network, environmental data of an area of a building based on visible and thermal images of the area. A predictive model generated by a neural network training engine is stored by the computing device. The computing device determines a visible image of an area based on data received from at least one visible imaging camera. The computing device determines a thermal image of the area based on data received from at least one thermal imaging device. The computing device executes a neural network inference engine, using the predictive model for inferring environmental data based on the visible image and the thermal image. The inferred environmental data comprise geometric characteristic(s) of the area, an occupancy of the area, a human activity in the area, temperature value(s) for the area, and luminosity value(s) for the area.
Type: Grant
Filed: April 6, 2018
Date of Patent: December 17, 2024
Assignee: DISTECH CONTROLS INC.
Inventor: Francois Gervais
-
Patent number: 12151385
Abstract: A measurement system includes a multi-axis robot, a measurement unit coupled to the multi-axis robot, and a data processing apparatus, wherein the measurement unit includes one or more imaging devices movable with respect to a reference position of the multi-axis robot, and a position specification device for specifying a position of one or more of the imaging devices with respect to the reference position, wherein the data processing apparatus includes an acquisition part for acquiring a plurality of pieces of captured image data generated by having one or more of the imaging devices capture images at two or more positions, and a measurement part for measuring a distance between the plurality of feature points in a workpiece on the basis of a position of the feature point of the workpiece included in the plurality of pieces of captured image data.
Type: Grant
Filed: August 23, 2022
Date of Patent: November 26, 2024
Assignee: MITUTOYO CORPORATION
Inventor: Naoki Mitsutani
-
Patent number: 12154528
Abstract: A display device comprising: a display panel to display a frame having a resolution up to a first frame rate; a plurality of image processing units; and a processor. The processor is configured to, based on a change from a first mode of displaying a content, among a plurality of modes of the display device, to a second mode of displaying plural contents, change frame rates of a plurality of contents output from the plurality of image processing units to a same frame rate, mix the plurality of contents with the frame rates changed to the same frame rate, and change a vertical resolution of the mixed content. The display panel is enabled to output the mixed content at a second frame rate greater than the first frame rate.
Type: Grant
Filed: March 16, 2023
Date of Patent: November 26, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Jihye Sim
-
Patent number: 12154222
Abstract: A system to build a three dimensional model of at least one user's denture which comprises at least one intraoral device to be arranged in the oral cavity of the at least one user; at least one set of cameras arranged in the at least one intraoral device to capture at least one stereo image of the oral cavity of the at least one user; and at least one processing medium which receives the at least one stereo image; wherein the at least one processing medium comprises at least one trained neural network which analyzes the at least one stereo image to estimate at least one depth map; and wherein the at least one processing medium also comprises at least one block of location and mapping which sequentially integrates the at least one stereo image and the at least one depth map into the generated three dimensional model.
Type: Grant
Filed: March 29, 2023
Date of Patent: November 26, 2024
Inventors: Gerard André Philip Liberman Paz, Javier Ignacio Liberman Salazar, Felipe Ignacio Pesce Bentjerodt, Carlos Julio Santander Guerra, Cristobal Gaspar Ignacio Pizarro Venegas, Diego Facundo Lazcano Arcos, Andrés Garabed Baloian Gacitúa, David Caro Benado
-
Patent number: 12153136
Abstract: In some embodiments, systems are provided to determine distances to one or more objects, and comprise an image capture system; an angular movement system; an illumination source providing illumination having a time varying amplitude; and a range measurement circuit configured to: obtain, based on at least one of measured and induced LOS angular displacement changes relative to the image capture system, a set of candidate point-spread-functions (PSFs) corresponding to different possible ranges; deconvolve, using the candidate PSFs, at least a region of interest (ROI) in an evaluation image to obtain at least a set of deconvolved ROIs of the evaluation image, each corresponding to one of the candidate PSFs; identify a first candidate PSF that produces a deconvolved ROI resulting in a determined artifact power that is lower than a corresponding artifact power determined from the other candidate PSFs; and determine a distance corresponding to the first candidate PSF.
Type: Grant
Filed: March 19, 2021
Date of Patent: November 26, 2024
Assignee: General Atomics Aeronautical Systems, Inc.
Inventors: Patrick R. Mickel, Drew F. DeJarnette, Matthew C. Cristina
-
Patent number: 12147608
Abstract: A computing system includes: a first input device for receiving first input data from a user, the first input device including a computing module for processing data inputted from the first input device and generating output data; and a head mounted display, telecommunicatively connected to the first input device by a communication module, for receiving the output data and displaying a binocular virtual image related to the output data; wherein the head mounted display has a first light direction adjuster and a second light direction adjuster for changing the direction of a first light signal and a second light signal emitted by a first emitter and a second emitter respectively, such that the first light signal and the second light signal are emitted at the first angle and the second angle relative to the user's frontal plane towards a first eye and a second eye respectively.
Type: Grant
Filed: July 7, 2023
Date of Patent: November 19, 2024
Assignee: HES IP HOLDINGS, LLC
Inventors: Sheng-Lan Tseng, Yi-An Chen, Yung-Chin Hsiao
-
Patent number: 12137855
Abstract: There is provided a mobile robot including a light source, an image sensor, and a processor. The image sensor respectively captures bright image frames and dark image frames corresponding to the light source being turned on and turned off. The processor identifies that the mobile robot faces a cliff when a gray level variation between the bright image frames and the dark image frames is very small, and controls the mobile robot to stop moving forward continuously.
Type: Grant
Filed: June 22, 2021
Date of Patent: November 12, 2024
Assignee: PIXART IMAGING INC.
Inventors: Keen-Hun Leong, Sai-Mun Lee, Ching-Geak Chan
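The cliff test can be pictured as comparing a frame taken with the light source on against one taken with it off: over a floor the emitted light raises the gray levels noticeably, while over a cliff almost nothing is reflected back, so the bright/dark variation collapses. The threshold and names below are assumptions for illustration, not the patented detector.

```python
import numpy as np


def facing_cliff(bright_frame, dark_frame, threshold=5.0):
    """Return True when the bright/dark gray-level variation is very small,
    which the abstract associates with the robot facing a cliff (sketch only).

    bright_frame, dark_frame: (H, W) arrays of gray levels (0-255).
    threshold: assumed minimum mean difference expected over a normal floor.
    """
    diff = np.abs(bright_frame.astype(float) - dark_frame.astype(float))
    return float(diff.mean()) < threshold


# Example: identical frames give zero variation, so the robot would stop moving forward.
frame = np.full((4, 4), 17.0)
print(facing_cliff(frame, frame))   # True
```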
-
Patent number: 12142004
Abstract: Embodiments of the present disclosure provide an image display method and apparatus, and an electronic device. The method includes: obtaining target position information of a user in a real environment when the user is in a virtual environment; obtaining first image data corresponding to the real environment and second image data corresponding to the virtual environment when the target position information meets a predetermined condition, both the first image data and the second image data being based on a same coordinate space; performing superposition and merging processing on the first image data and the second image data to generate third image data; and displaying the third image data, to enable the user to view environment data in the real environment when the user is in the virtual environment.
Type: Grant
Filed: December 29, 2023
Date of Patent: November 12, 2024
Assignee: QINGDAO PICO TECHNOLOGY CO., LTD.
Inventor: Tao Wu
-
Patent number: 12141367
Abstract: Example systems, devices, media, and methods are described for controlling one or more virtual elements on a display in response to hand gestures detected by an eyewear device that is capturing frames of video data with its camera system. An image processing system detects a hand and presents a menu icon on the display in accordance with a detected current hand location. The image processing system detects a series of hand shapes in the captured frames of video data and determines whether the detected hand shapes match any of a plurality of predefined hand gestures stored in a hand gesture library. In response to a match, the method includes executing an action in accordance with the matching hand gesture. In response to an opening gesture, an element animation system presents one or more graphical elements incrementally moving along a path extending away from the menu icon. A closing hand gesture causes the elements to retreat along the path toward the menu icon.
Type: Grant
Filed: November 3, 2023
Date of Patent: November 12, 2024
Assignee: Snap Inc.
Inventors: Viktoria Hwang, Karen Stolzenberg
-
Patent number: 12141904
Abstract: A method includes displaying, via a display device, a virtual agent moving according to a motion type. The virtual agent is defined by a plurality of virtual joints, and motions of the virtual agent are controllable by providing a corresponding plurality of torques to the plurality of virtual joints. The method includes, while the virtual agent is moving according to the motion type, registering an interaction event with the virtual agent. The interaction event initiates a change to the motion type. The method includes, in response to registering the interaction event, generating, using a motion controller, a plurality of torque values for the virtual agent based on a function of the motion type and the interaction event, and generating an animation for the virtual agent by providing the plurality of torque values to the plurality of virtual joints of the virtual agent.
Type: Grant
Filed: July 25, 2022
Date of Patent: November 12, 2024
Assignee: APPLE INC.
Inventors: Siva Chandra Mouli Sivapurapu, Edward S. Ahn, Mark Drummond, Aashi Manglik
-
Patent number: 12135371
Abstract: A method may include identifying features in images of a setting captured by cameras associated with a laser scanner at a first location. The features may correspond to elements in the setting. The method may include correlating the features with scan point data associated with the elements. The scan point data may be derived from a scan of the setting by a laser of the laser scanner at the first location. The method may include tracking the features within images captured by the cameras. The images may be captured during movement of the laser scanner away from the first location. The method may include estimating a position and/or an orientation of the laser scanner at a second location based on the tracking of the features and based on the correlating of the features with the scan point data.
Type: Grant
Filed: December 2, 2019
Date of Patent: November 5, 2024
Assignee: LEICA GEOSYSTEMS AG
Inventors: Gregory Walsh, Roman Parys, Alexander Velizhev, Bernhard Metzler
-
Patent number: 12136163
Abstract: Measures, including a method, system, and computer program, for performing in-mission photogrammetry while an imaging platform is surveying a target environment. A plurality of images of the target environment are captured by the imaging platform. At the imaging platform, a representative subset of images is identified from the captured plurality of images. The imaging platform transmits the identified representative subset to a processing node. At the processing node, the transmitted representative subset is processed to generate a virtual model of the target environment.
Type: Grant
Filed: May 18, 2021
Date of Patent: November 5, 2024
Assignee: Airbus Defence and Space Limited
Inventor: Stuart Taylor
-
Patent number: 12135493
Abstract: A camera module disclosed in the embodiment includes a light emitting portion and an image sensor, wherein the light emitting portion includes a light source, a first lens portion disposed on the light source, a reflective member reflecting light emitted from the light source, and a driving member for moving the first lens portion between the light source and the reflective member, wherein the reflective member includes a plurality of mirrors, and the plurality of mirrors may be arranged to be tiltable within a predetermined angle range from a reference angle.
Type: Grant
Filed: March 4, 2021
Date of Patent: November 5, 2024
Assignee: LG INNOTEK CO., LTD.
Inventors: In Jun Seo, Chul Kim
-
Patent number: 12132881
Abstract: A stereo camera apparatus includes a housing, a first camera unit attached to the housing, a second camera unit attached to the housing, a processing device that performs image processing by using captured images acquired by capturing of the first camera unit and the second camera unit, and a circuit board on which the processing device is mounted. In the housing, a base length direction of the first camera unit and the second camera unit is a longitudinal direction, the housing and the circuit board are bonded to each other by an adhesive, and in a region in the housing onto which the adhesive is applied, a length in the base length direction is shorter than a length in an orthogonal direction that is a direction perpendicular to the base length direction.
Type: Grant
Filed: September 17, 2020
Date of Patent: October 29, 2024
Assignee: HITACHI ASTEMO, LTD.
Inventors: Akihiro Yamaguchi, Hidenori Shinohara, Kenichi Takeuchi
-
Patent number: 12130448
Abstract: An embodiment comprises: a housing including a first corner portion, a second corner portion, a third corner portion, and a fourth corner portion; a bobbin disposed within the housing; a coil disposed on the bobbin; a magnet disposed in the housing while being arranged opposite to the coil; a circuit board disposed on one side of the housing and including a position sensor; and a sensing magnet disposed on the bobbin while being arranged opposite to the position sensor, wherein: the magnet comprises a first magnet disposed in the first corner portion of the housing, and a second magnet disposed in the second corner portion opposite to the first corner portion of the housing; the position sensor is disposed closer to the third corner portion than to the first corner portion; and no magnet is disposed in the third corner portion of the housing.
Type: Grant
Filed: December 26, 2018
Date of Patent: October 29, 2024
Assignee: LG INNOTEK CO., LTD.
Inventors: Hyun Soo Kim, Sung Guk Lee, Jung Seok Oh, Seung Hak Lee
-
Patent number: 12131494
Abstract: A tire groove depth display method includes a grounded part extracting step, a groove depth calculating step, an image generating step, and a displaying step. The grounded part extracting step extracts a grounded part on a surface of a tread part of a tire as a plurality of areas sandwiching a tire groove. The groove depth calculating step calculates a depth of the tire groove between respective areas of the grounded part. The image generating step generates an image that shows the areas extracted by the grounded part extracting step and information on the depth of the tire groove calculated by the groove depth calculating step. The displaying step displays the image generated by the image generating step.
Type: Grant
Filed: September 8, 2022
Date of Patent: October 29, 2024
Assignee: TOYO TIRE CORPORATION
Inventors: Takaya Maekawa, Siyi Ren, George Lashkhia
-
Patent number: 12125226
Abstract: An image processor includes a map generator that generates a map being data including pixels each associated with depth-related information using an image captured with patterned light projected onto a target object, an edge detector that detects an edge of the target object using an image captured without patterned light projected onto the target object, and a corrector that corrects the map based on the detected edge to cause a discontinuous depth position to match a position of the edge of the target object.
Type: Grant
Filed: September 10, 2019
Date of Patent: October 22, 2024
Assignee: OMRON CORPORATION
Inventor: Shinya Matsumoto
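A rough sketch of the correction step: scan each row of the depth map for discontinuities and, when a detected image edge lies within a small window, shift the discontinuity so it coincides with that edge. The window size, jump threshold, and row-wise treatment are assumptions for illustration, not the patented corrector.

```python
import numpy as np


def snap_depth_edges(depth, edge_mask, window=3, jump=0.05):
    """Move row-wise depth discontinuities onto nearby detected object edges.

    depth:     (H, W) float depth map from the patterned-light image.
    edge_mask: (H, W) boolean edge map from the image without patterned light.
    Any depth jump larger than `jump` is shifted to the nearest edge column
    within +/- `window` pixels (illustrative sketch only).
    """
    out = depth.copy()
    H, W = depth.shape
    for y in range(H):
        for x in range(1, W):
            if abs(depth[y, x] - depth[y, x - 1]) < jump:
                continue                              # not a discontinuity
            lo, hi = max(0, x - window), min(W, x + window + 1)
            cols = np.flatnonzero(edge_mask[y, lo:hi])
            if cols.size == 0:
                continue                              # no nearby edge to snap to
            e = lo + cols[np.argmin(np.abs(cols + lo - x))]
            if e > x:        # edge lies to the right: extend the left-side depth
                out[y, x:e] = depth[y, x - 1]
            elif e < x:      # edge lies to the left: extend the right-side depth
                out[y, e:x] = depth[y, x]
    return out
```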