Picture Signal Generator Patents (Class 348/46)
-
Patent number: 12368975
Abstract: Indirect time-of-flight (i-ToF) image sensor pixels, i-ToF image sensors including such pixels, stereo cameras including such image sensors, and sensing methods to obtain i-ToF detection and phase detection information using such image sensors and stereo cameras. An i-ToF image sensor pixel may comprise a plurality of sub-pixels, each sub-pixel including a photodiode, a single microlens covering the plurality of sub-pixels and a read-out circuit for extracting i-ToF phase signals of each sub-pixel individually.
Type: Grant
Filed: November 26, 2024
Date of Patent: July 22, 2025
Assignee: Corephotonics Ltd.
Inventors: Nadav Geva, Michael Scherer, Ephraim Goldenberg, Gal Shabtay
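The phase extraction this abstract refers to is commonly done with four-phase ("4-tap") demodulation. The sketch below shows that generic scheme; the sample names and the 20 MHz modulation frequency are illustrative assumptions, not details taken from the patent.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_depth(q0, q90, q180, q270, f_mod):
    """Estimate depth from four correlation samples taken at
    0/90/180/270-degree demodulation offsets (generic 4-phase i-ToF)."""
    phase = math.atan2(q270 - q90, q0 - q180)  # phase shift of returned light
    if phase < 0:                              # wrap into [0, 2*pi)
        phase += 2 * math.pi
    return C * phase / (4 * math.pi * f_mod)   # depth within unambiguous range

# Example: 20 MHz modulation gives an unambiguous range of c/(2f) ~ 7.49 m;
# a quarter-cycle phase shift (pi/2) corresponds to one eighth of a cycle.
d = itof_depth(q0=0.0, q90=-1.0, q180=0.0, q270=1.0, f_mod=20e6)
```

The depth wraps at the unambiguous range, which is why practical systems often combine two modulation frequencies to extend it.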
-
Patent number: 12368839
Abstract: An autonomous or semi-autonomous vehicle camera system including a camera having a field of view, wherein the camera is operable to receive optical information in the field of view. A display is located in the camera field of view. A controller is in electrical connection with the camera, wherein the controller is operable to conduct a Built-in-Test. The Built-in-Test is configured to present one or more images in the camera field of view via the display to determine functionality of the camera system.
Type: Grant
Filed: March 8, 2021
Date of Patent: July 22, 2025
Assignee: Moog Inc.
Inventors: Mark J. Underhill, Richard Fosdick, Michael G. Fattey
-
Patent number: 12352558
Abstract: An illumination device has a light source unit, a lens unit, and a filter unit. An imaging device receives object light, generated by the illumination light, from the measurement object at a predetermined observation solid angle, and pixels of the imaging device can each identify the different light wavelength ranges. A processing device includes an arithmetic unit configured to obtain a normal vector at each point of the measurement object corresponding to each pixel from the inclusion relation between the plurality of solid angle regions constituting the object light and the predetermined observation solid angle, and a shape reconstruction unit configured to reconstruct the shape of the measurement object.
Type: Grant
Filed: November 26, 2020
Date of Patent: July 8, 2025
Assignees: MACHINE VISION LIGHTING INC., MITUTOYO CORPORATION
Inventors: Shigeki Masumura, Yasuhiro Takahama, Jyota Miyakura, Masaoki Yamagata
-
Patent number: 12348908
Abstract: A method and system for enhancement of video systems using wireless device proximity detection. The enhanced video system consists of one or more video capture devices along with one or more sensors detecting the presence of devices with some form of wireless communications enabled. The proximity of a device communicating wirelessly is sensed and cross referenced with received video image information. Through time, movement of wirelessly communicating mobile devices through a venue or set of venues can be deduced and additionally cross referenced to and augmented over image data from the set of video capture devices.
Type: Grant
Filed: August 8, 2022
Date of Patent: July 1, 2025
Assignee: INPIXON
Inventors: James Francis Hallett, Kirk Arnold Moir
-
Patent number: 12330924
Abstract: An automatic truck unloading apparatus and method are provided. The automatic truck unloading apparatus generates sensing information regarding the presence or absence of a pallet on the truck by implementing sensors installed in a region of a truck and a region of a storage area, and sets optimal transport paths for multiple unmanned forklift vehicles based on the sensing information, and unloads a pallet from the truck and moves and stores the pallet in the storage area by implementing an unmanned forklift vehicle.
Type: Grant
Filed: April 18, 2022
Date of Patent: June 17, 2025
Assignee: Hyundai Mobis Co., Ltd.
Inventor: Young Min Kim
-
Patent number: 12315198
Abstract: For three-dimensional image capture with the aid of a stereo camera having two cameras, an image of a three-dimensional scene is first captured simultaneously with the two cameras. Characteristic signatures of scene objects within each captured image are determined and assigned to each other in pairs. Characteristic position deviations of the assigned signature pairs from each other are determined. The position deviations are filtered in order to select assigned signature pairs. Based on the selected signature pairs, a triangulation calculation is performed to determine depth data for the respective scene objects. A 3D data map of the captured scene objects within the captured image of the three-dimensional scene is then created. This results in a method for capturing three-dimensional images, which is well adapted for practical use, in particular, for capturing images to safeguard autonomous driving.
Type: Grant
Filed: September 28, 2021
Date of Patent: May 27, 2025
Assignee: Tripleye GmbH
Inventors: Jens Schick, Michael Scharrer
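For a rectified stereo pair, the triangulation step described above reduces to depth from disparity: the position deviation of a matched signature pair maps directly to depth. The focal length and baseline values below are illustrative, not the patent's.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a matched signature pair in a rectified stereo rig:
    Z = f * B / d, with f in pixels, baseline B in metres,
    disparity d (the horizontal position deviation) in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A feature shifted 60 px between cameras 12 cm apart, 1000 px focal length:
z = depth_from_disparity(60.0, 1000.0, 0.12)
```

Since depth is inversely proportional to disparity, small matching errors matter most for distant objects, which is one reason the abstract's filtering step is useful before triangulating.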
-
Patent number: 12300032
Abstract: The information processing device determines whether or not an inputted iris image is an iris image of a color contact lens. In the information processing device, the acquisition means (71) acquires the target iris image, which is the iris image of the processing target. The search means (72) searches for one or more similar registered iris images that are similar to the target iris image, from the registered iris images. The iris image determination means (73) determines that the target iris image and the similar registered iris image are iris images of the color contact lens when the person corresponding to the target iris image and the person corresponding to the similar registered iris image are different persons.
Type: Grant
Filed: July 16, 2020
Date of Patent: May 13, 2025
Assignee: NEC CORPORATION
Inventor: Toshiyuki Sashihara
-
Patent number: 12299822
Abstract: The present disclosure provides a virtual clothing changing method, apparatus, electronic device and readable medium. The method comprises: obtaining a source image and a target image, the source image comprising target clothing associated with a first human instance, the target image comprising a second human instance; obtaining first portrait information and first pose information of the source image and second pose information of the target image, respectively, by processing the source image and the target image; obtaining a clothing changed image by changing clothing of the second human instance in the target image to the target clothing in the source image according to the first portrait information, the first pose information and the second pose information.
Type: Grant
Filed: June 14, 2024
Date of Patent: May 13, 2025
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
Inventors: Xin Dong, Xijin Zhang
-
Patent number: 12293537
Abstract: A virtual reality experience safe area updating method and apparatus are provided, belonging to the virtual reality technical field. The virtual reality experience safe area updating apparatus includes: a safe area setting module configured to set an initial safe area; an obstacle detecting module configured to detect obstacles around the user; and a safe area update module configured to update the range of the safe area according to the detected obstacles.
Type: Grant
Filed: May 28, 2021
Date of Patent: May 6, 2025
Assignees: Beijing BOE Optoelectronics Technology Co., Ltd., BOE Technology Group Co., Ltd.
Inventors: Zhifu Li, Jinghua Miao, Wenyu Li, Mingyang Yan
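One simple way to realise the update rule described above, assuming a circular safe area centred on the user (an assumption for illustration, not the patent's geometry): shrink the radius so no detected obstacle lies inside it.

```python
import math

def update_safe_radius(initial_radius, user_xy, obstacles_xy):
    """Shrink a circular VR safe area so that no detected
    obstacle falls inside it; keep the initial radius otherwise."""
    nearest = min(
        (math.dist(user_xy, obs) for obs in obstacles_xy),
        default=float("inf"),
    )
    return min(initial_radius, nearest)

# Obstacles detected at 2.0 m and 1.2 m: the 1.5 m initial area shrinks to 1.2 m.
r = update_safe_radius(1.5, (0.0, 0.0), [(2.0, 0.0), (0.0, 1.2)])
```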
-
Patent number: 12293489
Abstract: A virtual image display device includes a first acquisition unit configured to acquire a first image from an information terminal, a second acquisition unit configured to acquire a second image different from the first image, a correction unit configured to distort the first image, a generation unit configured to generate a third image or a fourth image including the distorted first image and the second image, and a display element configured to display the third image or the fourth image.
Type: Grant
Filed: January 30, 2023
Date of Patent: May 6, 2025
Assignee: SEIKO EPSON CORPORATION
Inventor: Toshiyuki Noguchi
-
Patent number: 12293473
Abstract: A mobile device may provide an augmented reality display of a scene. The mobile device may include a camera for capturing a scene comprising one or more objects, a display for displaying an image or video of the scene, a processor, and a memory. The mobile device may display an image of the scene on the display. The mobile device may receive real-time data associated with a mathematical relationship between a plurality of parameters. The mobile device may overlay a three-dimensional plot and an information field over the displayed image. The mobile device may display curvature information on the three-dimensional plot. The mobile device may display a modified three-dimensional plot using updated curvature information and/or a modified input. In response to selection of a point on the three-dimensional plot, the mobile device may display a plurality of vectors extending from the point indicating rate of change information.
Type: Grant
Filed: August 31, 2022
Date of Patent: May 6, 2025
Inventor: Jiecong Lu
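The "rate of change" vectors at a selected point on such a plot can be derived from numeric partial derivatives of the plotted relationship z = f(x, y). The central-difference scheme below is a generic sketch of that idea, not the patent's method.

```python
def gradient_at(f, x, y, h=1e-6):
    """Central-difference estimate of (dz/dx, dz/dy) at a point
    on the surface z = f(x, y); these give the rate-of-change
    vectors along each parameter axis."""
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return dfdx, dfdy

# For z = x^2 + y^2 the gradient at (1, 2) is (2, 4).
gx, gy = gradient_at(lambda x, y: x * x + y * y, 1.0, 2.0)
```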
-
Patent number: 12287188
Abstract: Provided is an information processing apparatus that includes an estimation unit and a correction unit. The estimation unit estimates a motion vector of a distance measurement target based on distance information on a distance to the distance measurement target input from a distance measuring device that measures the distance to the distance measurement target and motion information of the distance measuring device input from a motion detection device that detects a motion of the distance measuring device. The correction unit corrects the distance information based on the motion vector of the distance measurement target estimated by the estimation unit.
Type: Grant
Filed: May 6, 2021
Date of Patent: April 29, 2025
Assignee: SONY GROUP CORPORATION
Inventors: Yuta Sakurai, Kazunori Kamio, Satoshi Kawata, Toshiyuki Sasaki
-
Patent number: 12272004
Abstract: In some implementations, a method includes obtaining an end state of a first content item spanning a first time duration. In some implementations, the end state of the first content item indicates a first state of a synthesized reality (SR) agent at the end of the first time duration. In some implementations, the method includes obtaining an initial state of a second content item spanning a second time duration subsequent to the first time duration. In some implementations, the initial state of the second content item indicates a second state of the SR agent at the beginning of the second time duration. In some implementations, the method includes synthesizing an intermediary emergent content item spanning over an intermediary time duration that is between the end of the first time duration and the beginning of the second time duration.
Type: Grant
Filed: October 26, 2023
Date of Patent: April 8, 2025
Assignee: APPLE INC.
Inventor: Ian M. Richter
-
Patent number: 12264909
Abstract: A material application and analysis device may comprise at least one analysis device for optically monitoring at least a first material application and a second material application, and a material application element for applying the second material application to a substrate provided with the first material application at least in sections. The material application element is arranged between a first radiation source and detection device assembly and a second radiation source and detection device assembly, wherein by the first radiation source and detection device assembly the first material application is detectable and wherein by the second radiation source and detection device assembly the second material application is detectable. Furthermore, first image data are processed and second image data are processed, and the processed first image data are evaluated with respect to a physical parameter and the processed second image data are evaluated with respect to a geometrical parameter.
Type: Grant
Filed: March 25, 2021
Date of Patent: April 1, 2025
Assignee: QUISS QUALITAETS-INSPEKTIONSSYSTEME UND SERVICE GMBH
Inventor: Bernhard Gruber
-
Patent number: 12257110
Abstract: A system to build a three-dimensional model of a user's teeth and adjoining tissues which comprises at least one intraoral device to be arranged in the oral cavity of the user; at least one set of cameras arranged in the intraoral device to capture at least one stereo image of the oral cavity of the at least one user; and at least one processing medium which receives the at least one stereo image; wherein the at least one processing medium comprises at least one trained neural network which analyzes the at least one stereo image to estimate at least one depth map; and wherein the at least one processing medium also comprises at least one block of location and mapping which sequentially integrates the at least one stereo image and the at least one depth map into the generated three-dimensional model.
Type: Grant
Filed: October 18, 2024
Date of Patent: March 25, 2025
Inventors: Gerard Andre Philip Liberman Paz, Javier Ignacio Liberman Salazar, Felipe Ignacio Pesce Bentjerodt, Carlos Julio Santander Guerra, Cristobal Gaspar Ignacio Pizarro Venegas, Diego Facundo Lazcano Arcos, Andres Garabed Baloian Gacitua, David Caro Benado
-
Patent number: 12256051
Abstract: The present technology relates to an image processing apparatus and method and a program that enable natural representation of light in an image in accordance with a viewpoint. The image processing apparatus calculates information indicating a change in a light source region between an input image and a viewpoint-converted image that is obtained by performing viewpoint conversion on the input image on the basis of a specified viewpoint, and causes a change in representation of light in the viewpoint-converted image on the basis of the calculated information indicating a change in the light source region. The present technology can be applied to an image display system that generates a pseudo stereoscopic image with motion parallax from one image.
Type: Grant
Filed: March 31, 2021
Date of Patent: March 18, 2025
Assignee: SONY GROUP CORPORATION
Inventor: Kenichiro Hosokawa
-
Patent number: 12236629
Abstract: A request signal that indicates a quality for a determination of an orientation of a road user is received. The orientation of the road user is determined based on a) image data when the request signal indicates the quality is below a predetermined quality for the determination of the orientation of the road user, or b) on LIDAR data and image data when the request signal indicates the quality is the predetermined quality.
Type: Grant
Filed: September 29, 2020
Date of Patent: February 25, 2025
Assignee: Ford Global Technologies, LLC
Inventors: Turgay Isik Aslandere, Evangelos Bitsanis, Michael Marbaix, Alain Marie Roger Chevalier, Frederic Stefan
-
Patent number: 12229880
Abstract: The disclosure provides a method for generating a relightable 3D portrait using a deep neural network and a computing device implementing the method. The method makes it possible to obtain, in real time and on computing devices having limited processing resources, realistically relighted 3D portraits with quality higher than, or at least comparable to, that achieved by prior-art solutions, without utilizing complex and costly equipment.
Type: Grant
Filed: November 20, 2023
Date of Patent: February 18, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Artem Mikhailovich Sevastopolskiy, Victor Sergeevich Lempitsky
-
Patent number: 12229994
Abstract: Disclosed are various embodiments for evaluating performance metrics (e.g., accuracy, depth precision, curvature accuracy, coverage, data acquisition time, etc.) of sensors (e.g., cameras, depth cameras, color cameras, etc.) according to captured image data. One or more reference boards have different types of reference structures (e.g., three-dimensional shapes, materials, three-dimensional patterns (e.g., waves, steps, etc.), gaps, etc.) that are used to evaluate the performance properties of a sensor. A reference board is attached to a robotic arm and positioned in front of a sensor. The robotic arm positions the reference board in front of the sensor in different viewpoints while the sensor captures image data associated with the reference board. The captured image data is compared with ground truth data associated with the reference board to determine performance metrics of the sensor.
Type: Grant
Filed: August 23, 2021
Date of Patent: February 18, 2025
Assignee: Amazon Technologies, Inc.
Inventor: Johannes Kulick
-
Patent number: 12223602
Abstract: Augmented reality (AR) systems, devices, media, and methods are described for creating a handcrafted AR experience. The handcrafted AR experiences are created by capturing images of a scene, identifying an object receiving surface and corresponding surface coordinates, identifying a customizable AR primary object associated with at least one set of primary object coordinates, generating AR overlays including the customizable AR primary object for positioning adjacent the object receiving surface, presenting the AR overlays, receiving customization commands, generating handcrafted AR overlays including customizations associated with the customizable AR primary object responsive to the customization commands, presenting the handcrafted AR overlays, recording the handcrafted AR overlays, creating a handcrafted AR file including the recorded overlays, and transmitting the handcrafted AR file.
Type: Grant
Filed: August 15, 2022
Date of Patent: February 11, 2025
Assignee: Snap Inc.
Inventors: Tianying Chen, Timothy Chong, Sven Kratz, Fannie Liu, Andrés Monroy-Hernández, Olivia Seow, Yu Jiang Tham, Rajan Vaish, Lei Zhang
-
Patent number: 12222423
Abstract: A method of measuring a distance between a vehicle and one or more objects includes generating a modulation signal; generating a modulated light emitting diode (LED) transmission signal, via a vehicle LED driver assembly; transmitting a plurality of light beams based at least in part on the generated modulated LED transmission signal; capturing a reflection of the plurality of light beams off the one or more objects, utilizing one or more lens assemblies and a camera, the camera including an array of pixel sensors and being positioned on the vehicle; communicating a series of measurements representing the captured plurality of light beam reflections; calculating, utilizing the time-of-flight sensor module, time-of-flight measurements between the vehicle LED light assembly and the one or more objects; and calculating distances, utilizing a depth processor module, between the vehicle LED light assembly and the one or more objects based on the time-of-flight measurements.
Type: Grant
Filed: August 19, 2024
Date of Patent: February 11, 2025
Assignee: Wireless Photonics, LLC
Inventors: Bahram Jalali, Alexandra Jalali, Mehdi Hatamian, Ahmadreza Rofougaran
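At its core, the final distance calculation in a direct time-of-flight system rests on the round-trip travel time of the reflected light. A minimal sketch, assuming an ideal time measurement:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_s):
    """Distance from a direct time-of-flight measurement: the light
    travels to the object and back, so halve the path length c * t."""
    return C * round_trip_s / 2.0

# A 100 ns round trip corresponds to roughly 15 m.
d = tof_distance(100e-9)
```

In practice the measured interval must be corrected for fixed electronic and optical delays in the driver, lens, and sensor chain before applying this formula.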
-
Patent number: 12222455
Abstract: A time-of-flight apparatus has: a light source for emitting light to a scene; a light detector for detecting light from the scene; and a control, the control being configured to: acquire a frame of detected light from the light detector, wherein the frame corresponds to a predetermined time interval, and drive the light source for emitting light during the acquisition of the frame, wherein the light energy accumulated within the frame has a predetermined value, and wherein the frame is divided into active light time intervals during which light is emitted to the scene.
Type: Grant
Filed: September 17, 2019
Date of Patent: February 11, 2025
Assignee: Sony Semiconductor Solutions Corporation
Inventors: Manuel Amaya-Benitez, Ward Van Der Tempel, Kaji Nobuaki, Hiroaki Nishimori
-
Patent number: 12225066
Abstract: Method, device, and non-transitory storage medium for adaptive streaming of immersive media are provided. The method may include determining characteristics associated with a scene to be transmitted to the end client, adjusting at least a part of the scene to be transmitted to the end client based on the determined characteristics, and transmitting an adaptive stream of the lightfield or holographic immersive media comprising the adjusted scene based on the determined characteristics.
Type: Grant
Filed: October 21, 2022
Date of Patent: February 11, 2025
Assignee: TENCENT AMERICA LLC
Inventors: Rohit Abhishek, Arianne Hinds
-
Patent number: 12213836
Abstract: Systems and methods for preprocessing three dimensional (3D) data prior to generating inverted renders are disclosed herein. The preprocessing may include segmenting the 3D data to remove portions of the data associated with noise such that those portions do not appear in the generated render. The segmentation may include applying a mask to the 3D data. The mask may be generated by sorting data points in the 3D data set into a first set or a second set. In some examples, the 3D data may be filtered prior to generating the mask. In some examples, the mask may be adjusted based on feature recognition. The preprocessing may allow the visualization of hypoechoic regions of interest in a volume.
Type: Grant
Filed: February 21, 2020
Date of Patent: February 4, 2025
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Paul Sheeran, Thanasis Loupas, Charles Tremblay-Darveau
-
Patent number: 12210145
Abstract: A single-particle localization microscope, including an optical system configured to illuminate a sample region with a sequence of light patterns having spatially different distributions of illumination light adapted to cause a single particle located in the sample region to emit detection light, a detector configured to detect a sequence of intensities of the detection light emerging from the sample region in response to the sequence of illuminating light patterns, and a processor configured to determine, based on the sequence of intensities of the detection light, an arrangement of potential positions for locating the particle. The processor further illuminates the sample region with at least one subsequent light pattern, causes detection of at least one subsequent intensity, and decides, based on the at least one subsequent intensity of the detection light, which one of the multiple potential positions represents an actual position of the particle in the sample region.
Type: Grant
Filed: January 16, 2023
Date of Patent: January 28, 2025
Assignee: LEICA MICROSYSTEMS CMS GMBH
Inventor: Marcus Dyba
-
Patent number: 12212732
Abstract: There is provided an image processing apparatus. An obtainment unit obtains a first circular fisheye image accompanied by a first missing region in which no pixel value is present. A generation unit generates a first equidistant cylindrical projection image by performing first equidistant cylindrical transformation processing based on the first circular fisheye image. The generation unit generates the first equidistant cylindrical projection image such that a first corresponding region corresponding to the first missing region has a pixel value in the first equidistant cylindrical projection image.
Type: Grant
Filed: May 27, 2022
Date of Patent: January 28, 2025
Assignee: Canon Kabushiki Kaisha
Inventors: Toru Kobayashi, Toshitaka Aiba, Yuki Tsuihiji
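The core of an equidistant cylindrical (equirectangular) transformation is a per-pixel mapping from an output direction to fisheye image coordinates. The sketch below uses a generic equidistant fisheye model with a 180-degree field of view; the geometry is a textbook assumption, not the patent's calibration, and directions behind the camera correspond to the "missing region".

```python
import math

def equirect_to_fisheye(lon, lat, cx, cy, radius):
    """Map an equidistant-cylindrical direction (lon, lat in radians)
    to pixel coordinates in a 180-degree equidistant circular fisheye
    image centred at (cx, cy) with the given image-circle radius.
    Returns None for directions behind the camera."""
    # Unit ray for the equirectangular sample (z is the optical axis).
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    theta = math.acos(max(-1.0, min(1.0, z)))  # angle off the optical axis
    if theta > math.pi / 2:                    # outside the 180-degree FOV
        return None
    r = radius * theta / (math.pi / 2)         # equidistant projection
    psi = math.atan2(y, x)
    return cx + r * math.cos(psi), cy + r * math.sin(psi)

# The forward direction lands exactly on the fisheye image centre.
u, v = equirect_to_fisheye(0.0, 0.0, 500.0, 500.0, 480.0)
```

A full transform iterates this mapping over every output pixel and samples the fisheye image (typically with bilinear interpolation) at the returned coordinates.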
-
Patent number: 12212823
Abstract: Auto exposure processing for spherical images improves image quality by reducing visible exposure level variation along a stitch line within a spherical image. An average global luminance value is determined based on luminance values determined for first and second images, which are based on auto exposure configurations of first and second image sensors used to obtain those first and second images. Delta luminance values are determined for the first and second images using the average global luminance value. The first and second images are then updated using the delta luminance values, and the updated first and second images are used to produce a spherical image.
Type: Grant
Filed: November 9, 2023
Date of Patent: January 28, 2025
Assignee: GoPro, Inc.
Inventors: Guillaume Matthieu Guérin, Sylvain Leroy, Yoël Taïeb, Giuseppe Moschetti
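The averaging-and-delta step can be sketched with simple per-image luminance values. Real pipelines derive the values from each sensor's auto-exposure configuration rather than raw pixel means, so treat this as an illustrative sketch of the arithmetic only.

```python
def exposure_deltas(lum_a, lum_b):
    """Given mean luminance values for the two hemisphere images,
    return the global average and the per-image delta luminance
    values that would equalise exposure at the stitch line."""
    global_lum = (lum_a + lum_b) / 2.0
    return global_lum, global_lum - lum_a, global_lum - lum_b

# Images metering at 100 and 140 meet at a shared target of 120:
# one is brightened by 20, the other darkened by 20.
g, da, db = exposure_deltas(100.0, 140.0)
```

Splitting the correction symmetrically between both images keeps each adjustment small, which reduces visible artifacts compared with forcing one image to match the other.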
-
Patent number: 12205312
Abstract: Embodiments of this application provide a method for obtaining depth information and an electronic device, which is applied to the field of image processing technologies and can help an electronic device improve accuracy of obtaining depth information. The electronic device includes a first camera, a second camera, and an infrared projector. The electronic device receives an instruction to obtain depth information of a target object; transmits infrared light with a light spot by an infrared projector; collects first image information of the target object by using the first camera; collects second image information by using the second camera, where the first and second image information include a feature of the target object and a texture feature formed by infrared light; and calculates depth information of the target object based on the first and second image information, the first length, and lens focal lengths of the first camera and the second camera.
Type: Grant
Filed: October 21, 2019
Date of Patent: January 21, 2025
Assignee: HONOR DEVICE CO., LTD.
Inventors: Guoqiao Chen, Mengyou Yuan, Jiangfeng Yuan
-
Patent number: 12205296
Abstract: The implementation of the present disclosure provides a point cloud partitioning method and device, and a computer-readable storage medium, including: when performing stripe division along the longest side, adjusting an initial partitioning position or determining the stripe division length according to the size of a preset block, to obtain a stripe division position, wherein the length of the first n−1 stripes along the longest side is an integer multiple of the side length of the preset block, n is the number of divided stripes, and n is a positive integer greater than or equal to 2.
Type: Grant
Filed: November 3, 2023
Date of Patent: January 21, 2025
Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Inventors: Wei Zhang, Fuzheng Yang, Shuai Wan, Yanzhuo Ma, Junyan Huo, Sujun Zhang, Zexing Sun
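The length rule described (first n−1 stripes an integer multiple of the block side, with the last stripe absorbing the remainder) can be sketched as below; rounding the stripe length down to the nearest block multiple is an assumption for illustration, not necessarily the patent's choice.

```python
def stripe_lengths(longest_side, block_side, n):
    """Split `longest_side` into n stripes where the first n-1 lengths
    are integer multiples of `block_side`; the last stripe takes the
    remainder. Assumes a simple round-down alignment."""
    if n < 2:
        raise ValueError("n must be >= 2")
    base = longest_side / n
    aligned = max(block_side, int(base // block_side) * block_side)
    lengths = [aligned] * (n - 1)
    lengths.append(longest_side - aligned * (n - 1))
    return lengths

# 100 units with 8-unit blocks and 3 stripes: base is ~33.3,
# which aligns down to 32, leaving 36 for the final stripe.
lens = stripe_lengths(100, 8, 3)
```

Aligning stripe boundaries to block multiples means block-based processing never straddles a stripe boundary, which is the point of the scheme.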
-
Patent number: 12206965
Abstract: The present invention relates to enhancing image capturing devices and image producing devices so that they can capture and produce 3D images without the need for any gear to be worn by viewers. The proposed solution is inspired by ray optics, geometry, mirrors, diamond cuts, eyes, rods and cones of human eyes, and the retina design of human eyes. It does not "trick" the eyes and brain, and it does not manipulate images as current technologies do to give the 3D effect. The invention describes how light sensors and light emitters, and their layouts, are to be changed in imaging devices such as cameras and TV screens to capture and produce 3D images, whether or not directional information can be captured by image sensors and emitters.
Type: Grant
Filed: April 18, 2020
Date of Patent: January 21, 2025
Inventors: Vibha Sharma, Nishant Sharma
-
Patent number: 12205219
Abstract: A system, method, or computer program product for generating stereoscopic images. One of the methods includes identifying, in a first three-dimensional coordinate system of a first three-dimensional virtual environment, a location and orientation of a first virtual object that is a virtual stereoscopic display object; identifying an eyepoint pair in the first virtual environment; identifying, in a second three-dimensional coordinate system of a second three-dimensional virtual environment, a location and orientation of a second virtual object that is in the second virtual environment; for each eyepoint of the eyepoint pair, rendering an inferior image of the second virtual object; for each eyepoint of the eyepoint pair, rendering a superior image of the first virtual environment, comprising rendering, in the superior image for each eyepoint, the corresponding inferior image onto the virtual stereoscopic display object; and displaying, on a physical stereoscopic display, the first virtual environment.
Type: Grant
Filed: May 12, 2023
Date of Patent: January 21, 2025
Assignee: Tanzle, Inc.
Inventor: Michael T. Mayers
-
Patent number: 12196558
Abstract: A sensor package and associated method provides roadside sensor-based data. The sensor package includes a controller, a network access device for remote communications, a GNSS receiver, an inertial measurement unit (IMU), and environment sensors. The controller operates under software control and communicates with the other sensor package devices to: collect environment sensor data using the sensors; receive GNSS signals indicating current sensor package location; obtain IMU data; determine position/orientation data that includes position data and orientation data, where the orientation data is based on the IMU data; and send the environment sensor data and the position/orientation data of the sensor package to an external device using the network access device. The environment sensor data includes at least one sensor data frame, and the sensor data frame is matched with position/orientation data based on a time associated with collecting the sensor data frame and the position/orientation data.
Type: Grant
Filed: February 14, 2022
Date of Patent: January 14, 2025
Assignee: The Regents of the University of Michigan
Inventors: Tyler S. Worman, Gregory J. McGuire
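Matching a sensor data frame to position/orientation data "based on a time" is typically a nearest-timestamp lookup against the pose log. The structure below is a generic sketch of that pattern, not the patent's implementation.

```python
import bisect

def match_pose(frame_time, pose_times, poses):
    """Pair a sensor data frame with the pose whose timestamp is
    closest to the frame's capture time. `pose_times` must be sorted
    ascending and `poses` is a parallel list."""
    i = bisect.bisect_left(pose_times, frame_time)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(poses)]
    best = min(candidates, key=lambda j: abs(pose_times[j] - frame_time))
    return poses[best]

# A frame captured at t=0.45 s matches the pose logged at t=0.4 s.
p = match_pose(0.45, [0.0, 0.4, 0.6], ["pose0", "pose1", "pose2"])
```

Higher-fidelity systems interpolate between the two bracketing poses instead of snapping to the nearest one, at the cost of a slightly more involved lookup.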
-
Patent number: 12198376
Abstract: A method for creating a marker-based shared augmented reality (AR) session starts with initializing a shared AR session by a first device and by a second device. The first device displays a marker on a display. The second device detects the marker using a camera included in the second device and captures an image of the marker using the camera. The second device determines a transformation between the first device and the second device using the image of the marker. A common coordinate frame is then determined using the transformation, the shared AR session is generated using the common coordinate frame, and the shared AR session is caused to be displayed by the first device and by the second device. Other embodiments are described herein.
Type: Grant
Filed: May 6, 2023
Date of Patent: January 14, 2025
Assignee: Snap Inc.
Inventors: Piers Cowburn, David Li, Isac Andreas Müller Sandvik, Qi Pan, Matan Zohar
-
Patent number: 12198263
Abstract: Determining a fit of a real-world electronic gaming machine (EGM) using a virtual representation of an EGM includes obtaining the virtual representation, obtaining sensor data for a real-world environment in which the virtual representation is to be presented, and detecting one or more surfaces in the real-world environment based on the sensor data. A determination is made that the virtual representation of the electronic gaming machine fits within the real-world environment in accordance with the real-world dimensions and the detected one or more surfaces. The virtual representation is presented in a mixed reality environment such that the virtual representation is blended into a view of a real-world environment along the one or more surfaces.
Type: Grant
Filed: September 30, 2022
Date of Patent: January 14, 2025
Assignee: Aristocrat Technologies Australia Pty Limited
Inventors: Upinder Dhaliwal, Eric Droukas, Patrick Petrella, III
-
Patent number: 12196888
Abstract: A light shaping optic may include a substrate. The light shaping optic may include a structure disposed on the substrate, wherein the structure is configured to receive one or more input beams of light with a uniform intensity field and less than a threshold total intensity, and wherein the structure is configured to shape the one or more input beams of light to form one or more output beams of light with a non-uniform intensity field and less than the threshold total intensity.
Type: Grant
Filed: June 14, 2023
Date of Patent: January 14, 2025
Assignee: VIAVI Solutions Inc.
Inventors: Scott Rowlands, Markus Bilger, William D. Houck
-
Patent number: 12200356
Abstract: A virtual or augmented reality display system that controls power inputs to the display system as a function of image data. Image data itself is made of a plurality of image data frames, each with constituent color components of, and depth planes for displaying on, rendered content. Light sources or spatial light modulators to relay illumination from the light sources may receive signals from a display controller to adjust a power setting to the light source or spatial light modulator, and/or control depth of displayed image content, based on control information embedded in an image data frame.
Type: Grant
Filed: September 7, 2023
Date of Patent: January 14, 2025
Assignee: Magic Leap, Inc.
Inventors: Jose Felix Rodriguez, Ricardo Martinez Perez, Reza Nourai
-
Patent number: 12189835
Abstract: A method for controlling a warning system including a user device. The method includes, in the warning system: receiving a wireless signal transmitted towards a surface on which a user is located, whose range defines, on the surface, a plurality of zones including a no-warning first zone and a warning second zone, a characteristic relating to the transmitted signal being able to take a second value corresponding to the second zone; upon detecting that the characteristic has the second value, generating a warning signal; and transmitting the warning signal to a control unit of the warning system in order to notify a user of the system.
Type: Grant
Filed: December 9, 2019
Date of Patent: January 7, 2025
Assignee: ORANGE
Inventors: Sylvain Leroux, Thierry Gaillet
-
Patent number: 12189106
Abstract: A microinstrument system comprises: a microinstrument having at least one integrated optical fiber which has a distal end facing the object to be observed; a recording apparatus, to which light from the object to be observed can be supplied for recording image data with the aid of the at least one optical fiber; and a determining device, which is designed to determine the positions of the distal end of the at least one optical fiber at the recording times of the particular image data. A data-processing device is connected to the recording apparatus in order to receive the image data, is connected to the determining device in order to receive the position data, and is designed to compile the image data with the aid of the position data to form a two-dimensional or three-dimensional image.
Type: Grant
Filed: April 14, 2022
Date of Patent: January 7, 2025
Assignee: CARL ZEISS MEDITEC AG
Inventors: Delbert Peter Andrews, Christian Voigt, Carolin Klusmann, Matthias Hillenbrand, Max Riedel, Christian Marzi, Franziska Mathis-Ullrich, Fritz Hengerer
-
Patent number: 12192654
Abstract: Indirect time-of-flight (i-ToF) image sensor pixels, i-ToF image sensors including such pixels, stereo cameras including such image sensors, and sensing methods to obtain i-ToF detection and phase detection information using such image sensors and stereo cameras. An i-ToF image sensor pixel may comprise a plurality of sub-pixels, each sub-pixel including a photodiode, a single microlens covering the plurality of sub-pixels and a read-out circuit for extracting i-ToF phase signals of each sub-pixel individually.
Type: Grant
Filed: April 24, 2024
Date of Patent: January 7, 2025
Assignee: Corephotonics Ltd.
Inventors: Nadav Geva, Michael Scherer, Ephraim Goldenberg, Gal Shabtay
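The i-ToF phase signals mentioned above are typically converted to depth with the standard four-phase demodulation formula. A sketch of that textbook relationship (generic i-ToF math, not the patent's specific sub-pixel read-out circuit; sample names `a0`–`a3` are illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_depth(a0, a1, a2, a3, f_mod):
    """Depth from four correlation samples taken at 0/90/180/270 degree
    demodulation offsets, with modulation frequency f_mod in Hz."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

# Example: samples giving a pi/4 phase shift (one eighth of a cycle)
# at 20 MHz modulation. Unambiguous range is C / (2 * f_mod) ~ 7.49 m,
# so an eighth of a cycle corresponds to ~0.937 m.
d = itof_depth(a0=100, a1=50, a2=0, a3=150, f_mod=20e6)
print(round(d, 3))  # -> 0.937
```

Extracting these phase samples per sub-pixel, as the abstract describes, is what lets one pixel serve both depth sensing and phase-detection autofocus.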
-
Patent number: 12188759
Abstract: The specification provides a depth imaging method and device and a computer-readable storage medium. The method includes: controlling an emission module comprising a light emitting device to emit at least two speckle patterns that change temporally to a target object; controlling an acquisition module comprising a light sensor to acquire reflected speckle patterns of the at least two speckle patterns reflected by the target object; and performing spatial-temporal stereo matching by using the reflected speckle patterns and the at least two reference speckle patterns, to calculate offsets of pixel points between speckles of the at least two reference speckle patterns and speckles of the reflected speckle patterns, and calculating depth values of the pixel points according to the offsets.
Type: Grant
Filed: June 1, 2022
Date of Patent: January 7, 2025
Assignee: Orbbec Inc.
Inventors: Yuhua Xu, Zhenzhong Xiao, Bin Xu, Yushan Yu
-
Patent number: 12189037
Abstract: A three-dimensional point cloud generation method for generating a three-dimensional point cloud including one or more three-dimensional points includes: obtaining (i) a two-dimensional image obtained by imaging a three-dimensional object using a camera and (ii) a first three-dimensional point cloud obtained by sensing the three-dimensional object using a distance sensor; detecting, from the two-dimensional image, one or more attribute values of the two-dimensional image that are associated with a position in the two-dimensional image; and generating a second three-dimensional point cloud including one or more second three-dimensional points each having an attribute value, by performing, for each of the one or more attribute values detected, (i) identifying, from a plurality of three-dimensional points forming the first three-dimensional point cloud, one or more first three-dimensional points to which the position of the attribute value corresponds, and (ii) appending the attribute value to the one or more first three-dimensional points.
Type: Grant
Filed: May 19, 2020
Date of Patent: January 7, 2025
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors: Pongsak Lasang, Chi Wang, Zheng Wu, Sheng Mei Shen, Toshiyasu Sugio, Tatsuya Koyama
-
Patent number: 12183063
Abstract: In examples, image data representative of an image of a field of view of at least one sensor may be received. Source areas may be defined that correspond to a region of the image. Areas and/or dimensions of at least some of the source areas may decrease along at least one direction relative to a perspective of the at least one sensor. A downsampled version of the region (e.g., a downsampled image or feature map of a neural network) may be generated from the source areas based at least in part on mapping the source areas to cells of the downsampled version of the region. Resolutions of the region that are captured by the cells may correspond to the areas of the source areas, such that certain portions of the region (e.g., portions at a far distance from the sensor) retain higher resolution than others.
Type: Grant
Filed: June 30, 2020
Date of Patent: December 31, 2024
Assignee: NVIDIA Corporation
Inventors: Haiguang Wen, Bernhard Firner, Mariusz Bojarski, Zongyi Yang, Urs Muller
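A toy sketch of the idea above (an assumed illustration, not NVIDIA's implementation): downsample the rows of an image into a fixed number of output cells using source bands whose heights shrink toward far-away rows, so distant content keeps more resolution per output cell than nearby content.

```python
def band_heights(n_rows, n_cells):
    """Split n_rows into n_cells contiguous bands with linearly growing
    heights: small bands first, so the first (far-distance) rows retain
    higher resolution in the downsampled output."""
    weights = [i + 1 for i in range(n_cells)]           # 1, 2, ..., n_cells
    total = sum(weights)
    heights = [max(1, round(w * n_rows / total)) for w in weights]
    heights[-1] += n_rows - sum(heights)                # absorb rounding error
    return heights

def downsample_rows(image, n_cells):
    """Average each source band of rows into one output row."""
    out, r = [], 0
    for h in band_heights(len(image), n_cells):
        band = image[r:r + h]
        out.append([sum(col) / h for col in zip(*band)])
        r += h
    return out

# 10 rows x 4 cols; each pixel's value is its row index.
img = [[float(y)] * 4 for y in range(10)]
print(band_heights(10, 4))                       # -> [1, 2, 3, 4]
print([row[0] for row in downsample_rows(img, 4)])  # -> [0.0, 1.5, 4.0, 7.5]
```

Note how the top band averages a single source row while the bottom band averages four: resolution per output cell falls off with the band area, mirroring the abstract's variable-area source mapping.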
-
Patent number: 12178685
Abstract: A dental scanning system comprises an intraoral scanner and one or more processors. The intraoral scanner comprises one or more light projectors configured to project a pattern (comprising a plurality of pattern features) on a dental object, and two or more cameras configured to acquire sets of images, each comprising at least one image from each camera. The processor(s) are configured to determine a correspondence between pattern features in the pattern of light and image features in each set of images by determining intersections of projector rays corresponding to one or more of the plurality of pattern features and camera rays corresponding to the one or more image features in three-dimensional (3D) space based on calibration data that associates the camera rays corresponding to pixels on the camera sensor of each of the two or more cameras to the projector rays.
Type: Grant
Filed: January 19, 2024
Date of Patent: December 31, 2024
Assignee: Align Technology, Inc.
Inventors: Ofer Saphier, Yossef Atiya, Arkady Rudnitsky, Nir Makmel, Sergei Ozerov, Tal Verker, Tal Levy
-
Patent number: 12184827
Abstract: Method and arrangements for obtaining and associating 2D image data with 3D image data are provided, the image data being based on light triangulation performed by an imaging system, where a measure object is illuminated with first light and an image sensor senses reflected first light from the measure object during a first exposure period, resulting in a first image with first intensity peaks occurring at first sensor positions, SP1. The measure object is also illuminated with second light(s) and any reflected second light is sensed from the measure object by the image sensor during third exposure period(s), resulting in one or more third images. For each first sensor position, SP1, in the first image, a respective third sensor position, SP3, is selected in said one or more third images.
Type: Grant
Filed: January 19, 2023
Date of Patent: December 31, 2024
Assignee: SICK IVP AB
Inventors: Anders Murhed, Mattias Johannesson
-
Patent number: 12179755
Abstract: A trajectory generation device 1 determines first to third predicted trajectories by using model formulas (1) to (20) that model a trajectory of a host vehicle 3 on the basis of peripheral information of an information detection device 4, calculates a risk potential R_p_mi_vj_v_r by using the first to third predicted trajectories, and determines moving average values P_c1 to P_c3 so that the risk potential R_p_mi_vj_v_r decreases. In a case where the moving average values P_c1 and P_c2 have the same sign, the first predicted trajectory is determined while reflecting the moving average value P_c2 for correcting the second predicted trajectory.
Type: Grant
Filed: March 26, 2022
Date of Patent: December 31, 2024
Assignee: HONDA MOTOR CO., LTD.
Inventor: Yuji Yasui
-
Patent number: 12174300
Abstract: A method of measuring a distance between a vehicle and one or more objects includes generating a modulation signal; generating a modulated light emitting diode (LED) transmission signal, via a vehicle LED driver assembly; transmitting a plurality of light beams based at least in part on the generated modulated LED transmission signal; capturing a reflection of the plurality of light beams off the one or more objects, utilizing one or more lens assemblies and a camera, the camera including an array of pixel sensors and being positioned on the vehicle; communicating a series of measurements representing the captured plurality of light beam reflections; calculating, utilizing the time-of-flight sensor module, time-of-flight measurements between the vehicle LED light assembly and the one or more objects; and calculating distances, utilizing a depth processor module, between the vehicle LED light assembly and the one or more objects based on the time-of-flight measurements.
Type: Grant
Filed: July 21, 2024
Date of Patent: December 24, 2024
Assignee: Wireless Photonics, LLC
Inventors: Bahram Jalali, Alexandra Jalali, Mehdi Hatamian, Ahmadreza Rofougaran
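The distance calculation the abstract relies on reduces to the round-trip relationship of direct time-of-flight measurement. A minimal sketch of that generic physics (not the patent's depth-processor implementation):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds):
    """Distance to a target from a measured round-trip travel time:
    the light covers the vehicle-to-object path twice, hence the /2."""
    return C * round_trip_seconds / 2.0

# A 100 ns round trip corresponds to roughly 15 m.
print(round(tof_distance(100e-9), 2))  # -> 14.99
```

In practice each pixel sensor in the camera array yields its own travel-time measurement, so the same formula applied per pixel produces a dense depth map rather than a single range.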
-
Patent number: 12169370
Abstract: The invention is directed to a device for exposure control in photolithographic direct exposure processes for two-dimensional structures in photosensitive coatings and to a method for converting registration data into direct exposure data. The object of the invention, to find an improved exposure control in direct exposure methods for two-dimensional structures in photosensitive layers which permits a registration of target marks independent from defined locations of the target marks, is met according to the invention in that a plurality of entocentric cameras are arranged in a registration unit (1) in linear alignment transverse to the one-dimensional movement of the substrate (2) to form a gapless linear scanning area (23) over a predetermined width of the substrate (2).
Type: Grant
Filed: September 15, 2021
Date of Patent: December 17, 2024
Assignee: Laser Imaging Systems GmbH
Inventors: Christian Schwarz, Jonas Burghoff, Stefan Heinemann, Holger Wagner, Steffen Rücker, Frank Jugel
-
Patent number: 12169954
Abstract: Method and computing device for inferring via a neural network environmental data of an area of a building based on visible and thermal images of the area. A predictive model generated by a neural network training engine is stored by the computing device. The computing device determines a visible image of an area based on data received from at least one visible imaging camera. The computing device determines a thermal image of the area based on data received from at least one thermal imaging device. The computing device executes a neural network inference engine, using the predictive model for inferring environmental data based on the visible image and the thermal image. The inferred environmental data comprise geometric characteristic(s) of the area, an occupancy of the area, a human activity in the area, temperature value(s) for the area, and luminosity value(s) for the area.
Type: Grant
Filed: April 6, 2018
Date of Patent: December 17, 2024
Assignee: DISTECH CONTROLS INC.
Inventor: Francois Gervais
-
Patent number: 12151385
Abstract: A measurement system includes a multi-axis robot, a measurement unit coupled to the multi-axis robot, and a data processing apparatus, wherein the measurement unit includes one or more imaging devices movable with respect to a reference position of the multi-axis robot, and a position specification device for specifying a position of one or more of the imaging devices with respect to the reference position, wherein the data processing apparatus includes an acquisition part for acquiring a plurality of pieces of captured image data generated by having one or more of the imaging devices capture images at two or more positions, and a measurement part for measuring a distance between the plurality of feature points in a workpiece on the basis of a position of the feature point of the workpiece included in the plurality of pieces of captured image data.
Type: Grant
Filed: August 23, 2022
Date of Patent: November 26, 2024
Assignee: MITUTOYO CORPORATION
Inventor: Naoki Mitsutani
-
Patent number: 12154528
Abstract: A display device comprising: a display panel to display a frame having a resolution up to a first frame rate; a plurality of image processing units; and a processor. The processor is configured to, based on a change from a first mode of displaying a content, among a plurality of modes of the display device, to a second mode of displaying plural contents, change frame rates of a plurality of contents output from the plurality of image processing units to a same frame rate, mix the plurality of contents with the frame rates changed to the same frame rate, and change a vertical resolution of the mixed content. The display panel is enabled to output the mixed content at a second frame rate greater than the first frame rate.
Type: Grant
Filed: March 16, 2023
Date of Patent: November 26, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Jihye Sim