Picture Signal Generator Patents (Class 348/46)
  • Patent number: 11663696
    Abstract: In one embodiment, a computing system may determine a predicted eye position of a viewer corresponding to a future time at which a frame will be displayed. The system may generate a first correction map for the frame based on the predicted eye position of the viewer. The system may retrieve one or more second correction maps used for correcting one or more preceding frames. The system may generate a third correction map based on the first correction map generated from the predicted eye position of the viewer and the one or more second correction maps used for correcting the one or more preceding frames. The system may adjust pixel values of the frame based at least on the third correction map. The system may output the frame with the adjusted pixel values to a display.
    Type: Grant
    Filed: June 22, 2022
    Date of Patent: May 30, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Thomas Scott Murdison, Romain Bachy, Edward Buckley, Bo Zhang
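    Code sketch: a minimal Python illustration of the idea in the abstract above; a correction map computed for the predicted eye position is blended with correction maps used on preceding frames, and the blended map is applied per pixel. The function names, the exponential decay weighting, and the multiplicative correction are assumptions for illustration, not the patented method.
      import numpy as np

      def blend_correction_maps(current_map, previous_maps, decay=0.6):
          # Weight the newest (predicted-eye-position) map most heavily and
          # fold in the maps used for preceding frames with decaying weights.
          blended, total = decay * current_map, decay
          w = decay
          for prev in reversed(previous_maps):
              w *= (1.0 - decay)
              blended += w * prev
              total += w
          return blended / total

      def apply_correction(frame, correction_map):
          # Per-pixel multiplicative adjustment, clipped to the 8-bit range.
          return np.clip(frame * correction_map, 0, 255).astype(np.uint8)

      frame = np.full((4, 4), 128, dtype=np.float32)
      current = np.random.uniform(0.9, 1.1, (4, 4)).astype(np.float32)
      history = [np.ones((4, 4), dtype=np.float32) for _ in range(2)]
      print(apply_correction(frame, blend_correction_maps(current, history)))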
  • Patent number: 11644432
    Abstract: Methods of determining thermal properties of a contact region are provided. The method comprises receiving temperature data of a sensor; determining a temperature distribution of heat penetration from a sensor to at least one material; applying a correction to the temperature distribution; iteratively analyzing the corrected temperature distribution; and outputting thermal properties of a contact region, the contact region being a region between the sensor and the at least one material. The method may further comprise determining thermal properties of the at least one material; and determining corrected thermal properties of the material using the thermal properties of the contact region. The method may further comprise automatically determining an appropriate time window for measuring properties of the at least one material to minimize effects of the contact region.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: May 9, 2023
    Assignee: Thermtest, Inc.
    Inventors: Dale Hume, David Landry, Andrew Evans
  • Patent number: 11643037
    Abstract: A safe exiting assistance system includes an object detector that detects an object approaching from a rear side of a vehicle, a body detector that detects physical information of a passenger in the vehicle, a door opening/closing detector that detects door opening or closing information of the vehicle, a determiner that determines, based on the detection results of the object detector and the door opening/closing detector, whether the situation requires an alert to the passenger and that calculates a value sensed by the body detector in response to vehicle information set in advance, and a controller that controls a position and an irradiation angle of a light source differently in accordance with the physical information when information indicating that the alert is necessary is received from the determiner.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: May 9, 2023
    Assignees: Hyundai Motor Company, Kia Motors Corporation
    Inventors: Gyun Ha Kim, Eung Hwan Kim, Sang Kyung Seo, Dae Yun An
  • Patent number: 11638997
    Abstract: A positioning and navigation method for a robot includes: receiving point cloud information of a current position and information of a target position sent by a robot, wherein the point cloud information of the current position is updated in real time; determining the current position of the robot according to the point cloud information; planning a plurality of motion paths according to the current position and the target position; searching for an optimal path among the plurality of motion paths by a predetermined heuristic search algorithm; and sending the current position of the robot and the optimal path to the robot, such that the robot moves according to the optimal path.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: May 2, 2023
    Assignee: CLOUDMINDS (BEIJING) TECHNOLOGIES CO., LTD.
    Inventor: Lianzhong Li
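    Code sketch: the "predetermined heuristic search algorithm" in the abstract above is not specified; the sketch below uses A* on a 2D occupancy grid as one plausible choice. The grid representation, Manhattan heuristic, and 4-connected moves are assumptions for illustration.
      import heapq

      def a_star(grid, start, goal):
          # grid: 0 = free cell, 1 = obstacle; returns a list of (row, col) or None.
          def h(cell):
              return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
          open_set = [(h(start), 0, start, None)]
          came_from, best_g = {}, {start: 0}
          while open_set:
              _, g, cell, parent = heapq.heappop(open_set)
              if cell in came_from:
                  continue                      # already expanded with a better cost
              came_from[cell] = parent
              if cell == goal:
                  path = [cell]
                  while came_from[path[-1]] is not None:
                      path.append(came_from[path[-1]])
                  return path[::-1]
              r, c = cell
              for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                  if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                      ng = g + 1
                      if ng < best_g.get((nr, nc), float("inf")):
                          best_g[(nr, nc)] = ng
                          heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cell))
          return None

      grid = [[0, 0, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 0, 0]]
      print(a_star(grid, (0, 0), (2, 0)))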
  • Patent number: 11637954
    Abstract: A camera (10) has four imaging means (12), each arranged to capture a different field of view and each associated with a separate sensor or detector (20). Each imaging means (12) is tilted to capture light from a zenith above the camera. The four imaging means (12) are equally spaced around a central axis, so that each contributes around a quarter of the scene being imaged.
    Type: Grant
    Filed: October 5, 2012
    Date of Patent: April 25, 2023
    Assignee: NCTECH LTD
    Inventors: Neil Tocher, Cameron Ure
  • Patent number: 11637977
    Abstract: Indirect time-of-flight (i-ToF) image sensor pixels, i-ToF image sensors including such pixels, stereo cameras including such image sensors, and sensing methods to obtain i-ToF detection and phase detection information using such image sensors and stereo cameras. An i-ToF image sensor pixel may comprise a plurality of sub-pixels, each sub-pixel including a photodiode, a single microlens covering the plurality of sub-pixels and a read-out circuit for extracting i-ToF phase signals of each sub-pixel individually.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: April 25, 2023
    Assignee: Corephotonics Ltd.
    Inventors: Nadav Geva, Michael Scherer, Ephraim Goldenberg, Gal Shabtay
  • Patent number: 11629949
    Abstract: Aspects of the present disclosure relate to systems and methods for structured light depth systems. An example active depth system may include a receiver to receive reflections of transmitted light and a transmitter including one or more light sources to transmit light in a spatial distribution. The spatial distribution of transmitted light may include a first region of a first plurality of light points and a second region of a second plurality of light points. A first density of the first plurality of light points is greater than a second density of the second plurality of light points when a first distance between a center of the spatial distribution and a center of the first region is less than a second distance between the center of the spatial distribution and the center of the second region.
    Type: Grant
    Filed: April 20, 2018
    Date of Patent: April 18, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Albrecht Johannes Lindner, Kalin Mitkov Atanassov, Stephen Michael Verrall
  • Patent number: 11625845
    Abstract: A depth measurement assembly (DMA) includes an illumination source that projects pulses of light (e.g., structured light) at a temporal pulsing frequency into a local area. The DMA includes a sensor that captures images of the pulses of light reflected from the local area and determines, using one or more of the captured images, one or more TOF phase shifts for the pulses of light. The DMA includes a controller coupled to the sensor and configured to determine a first set of estimated radial distances to an object in the local area based on the one or more TOF phase shifts. The controller determines a second estimated radial distance to the object based on an encoding of structured light and at least one of the captured images. The controller selects an estimated radial distance from the first set of radial distances.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: April 11, 2023
    Assignee: META PLATFORMS TECHNOLOGIES, LLC
    Inventor: Michael Hall
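    Code sketch: a minimal Python illustration of why the abstract above yields a set of estimated radial distances from a TOF phase shift (phase wraps every half modulation wavelength) and how a structured-light estimate can select among them. The nearest-candidate selection rule is an assumption for illustration.
      import math

      C = 299_792_458.0  # speed of light, m/s

      def candidate_distances(phase_shift_rad, mod_freq_hz, max_range_m):
          # A single phase shift only fixes distance modulo the ambiguity range.
          ambiguity = C / (2.0 * mod_freq_hz)
          base = (phase_shift_rad / (2.0 * math.pi)) * ambiguity
          out, k = [], 0
          while base + k * ambiguity <= max_range_m:
              out.append(base + k * ambiguity)
              k += 1
          return out

      def select_distance(candidates, structured_light_estimate_m):
          # Pick the TOF candidate closest to the coarser structured-light estimate.
          return min(candidates, key=lambda d: abs(d - structured_light_estimate_m))

      cands = candidate_distances(phase_shift_rad=1.2, mod_freq_hz=100e6, max_range_m=10.0)
      print(cands, select_distance(cands, structured_light_estimate_m=3.2))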
  • Patent number: 11625842
    Abstract: An image processing apparatus includes: a model pattern storage unit that stores a model pattern composed of a plurality of model feature points; an image data acquisition unit that acquires a plurality of images obtained through capturing an object to be detected; an object detection unit that detects the object to be detected from the images using the model pattern; a model pattern transformation unit that transforms a position and posture such that the model pattern is superimposed on an image of the object to be detected; a corresponding point acquisition unit that acquires a corresponding point on image data corresponding to each of the model feature points; a corresponding point set selection unit that selects a set of corresponding points on the plurality of images; and a three-dimensional position calculation unit that calculates a three-dimensional position of the image of the object to be detected.
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: April 11, 2023
    Assignee: FANUC CORPORATION
    Inventor: Yuta Namiki
  • Patent number: 11619755
    Abstract: A method and system for calibrating a PET scanner are described. The PET scanner may have a field of view (FOV) and multiple detector rings. A detector ring may have multiple detector units. A line of response (LOR) connecting a first detector unit and a second detector unit of the PET scanner may be determined. The LOR may correlate to coincidence events resulting from annihilation of positrons emitted by a radiation source. A first time of flight (TOF) of the LOR may be calculated based on the coincidence events. The position of the radiation source may be determined. A second TOF of the LOR may be calculated based on the position of the radiation source. A time offset may be calculated based on the first TOF and the second TOF. The first detector unit and the second detector unit may be calibrated based on the time offset.
    Type: Grant
    Filed: May 25, 2020
    Date of Patent: April 4, 2023
    Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
    Inventors: Xinyu Lyu, Qixiang Zhang, Wenbing Song, Zijun Ji, Weiping Liu
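    Code sketch: a minimal Python illustration of the calibration idea in the abstract above; a first TOF is averaged from coincidence-event timestamps, a second TOF is computed geometrically from the known source position, and their difference gives a time offset for the line of response. The detector positions, units, and sign convention are assumptions for illustration.
      import numpy as np

      C_MM_PER_NS = 299.792458  # speed of light in mm/ns

      def measured_tof_ns(times_det1_ns, times_det2_ns):
          # First TOF: mean arrival-time difference over coincidence events.
          return float(np.mean(np.asarray(times_det1_ns) - np.asarray(times_det2_ns)))

      def geometric_tof_ns(det1_mm, det2_mm, source_mm):
          # Second TOF: path-length difference from the known source position.
          d1 = np.linalg.norm(np.asarray(det1_mm) - np.asarray(source_mm))
          d2 = np.linalg.norm(np.asarray(det2_mm) - np.asarray(source_mm))
          return (d1 - d2) / C_MM_PER_NS

      m = measured_tof_ns([0.51, 0.49, 0.50], [0.0, 0.0, 0.0])
      g = geometric_tof_ns((400.0, 0.0, 0.0), (-400.0, 0.0, 0.0), (50.0, 0.0, 0.0))
      print("time offset (ns):", m - g)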
  • Patent number: 11620966
    Abstract: A driving method, suitable for a multimedia system including a head-mounted device (HMD), includes the following operations: retrieving human factor data from a storage device, a radio signal, or an image; and according to the human factor data, automatically adjusting software for driving the HMD or hardware components of the multimedia system.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: April 4, 2023
    Assignee: HTC Corporation
    Inventors: Jian-Zhi Tseng, Wan-Hsieh Liu, Yun-Ting Chen
  • Patent number: 11611738
    Abstract: A 2D/3D conversion interface component is configured to override the video processing capabilities associated with a conventional 2D display, re-formatting an incoming 3D video stream into a version compatible with a 2D display while preserving the 3D-type of presentation. An incoming “side-by-side” (SBS) 3D video stream is re-formatted into a “frame sequential” (serialized) format that appears as a conventional video stream input to the 2D display. The interface component also generates as an output a timing signal (synchronized with the converted frames) that is transmitted to a 3D viewing device (e.g., glasses). Therefore, as long as the 3D viewing device remains synchronized with the sequence of frames shown on the 2D display, the user will actually be viewing an interactive 3D video.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: March 21, 2023
    Assignee: Saras-3D, Inc.
    Inventors: Bipin D. Dama, Soham Pathak, Ankita Shastri, Kalpendu Shastri
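    Code sketch: a minimal Python illustration of the re-formatting described in the abstract above; each side-by-side frame is split into left and right halves, each half is stretched back to full width, and the halves are emitted sequentially together with an L/R flag standing in for the glasses-sync timing signal. Pixel-doubling for the width restoration is an assumption; a real implementation would interpolate.
      import numpy as np

      def sbs_to_frame_sequential(sbs_frames):
          # Yields (full-width frame, eye) pairs from a side-by-side 3D stream.
          for frame in sbs_frames:
              h, w = frame.shape[:2]
              left, right = frame[:, : w // 2], frame[:, w // 2 :]
              yield np.repeat(left, 2, axis=1), "L"   # timing signal: left eye
              yield np.repeat(right, 2, axis=1), "R"  # timing signal: right eye

      sbs = np.arange(32, dtype=np.uint8).reshape(4, 8)  # one tiny 4x8 SBS frame
      for out_frame, eye in sbs_to_frame_sequential([sbs]):
          print(eye, out_frame.shape)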
  • Patent number: 11600010
    Abstract: A time-of-flight camera for generating a depth map indicating distance(s) to target(s) includes a processor and multi-pixel time-of-flight image sensor. The camera: (a) determines, at each pixel, a plurality of phase-correlation values associated with at least one exposure duration and at least one phase offset; (b) determines for each pixel an accumulated correlation in response to the plurality of phase-correlation values; and (c) generates the depth map in response to a plurality of the accumulated correlations. A set of accumulated correlations may be determined in response to a plurality of sets of phase-correlation values such that each accumulated correlation is associated with one unique phase offset in response to each set of phase-correlation values being associated with one exposure duration, the depth map being generated in response to a plurality of sets of accumulated correlations. A computer-implemented method of generating a depth map using a time-of-flight image sensor is provided.
    Type: Grant
    Filed: June 3, 2020
    Date of Patent: March 7, 2023
    Assignee: Lucid Vision Labs, Inc.
    Inventors: Colin R. Doutre, Zicong Mai, Jeffrey D. Bull, Roderick A. Barman
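    Code sketch: a minimal Python illustration of the accumulation step in the abstract above; phase-correlation values are summed per phase offset across exposure durations, and a depth value is recovered from the accumulated 0/90/180/270 degree correlations with the standard four-phase formula. The dictionary layout and modulation frequency are assumptions for illustration.
      import math

      C = 299_792_458.0  # m/s

      def accumulate(correlation_sets):
          # correlation_sets: exposure name -> {phase offset (deg): correlation value}
          acc = {}
          for per_exposure in correlation_sets.values():
              for offset, value in per_exposure.items():
                  acc[offset] = acc.get(offset, 0.0) + value
          return acc

      def depth_from(acc, mod_freq_hz):
          phase = math.atan2(acc[270] - acc[90], acc[0] - acc[180]) % (2.0 * math.pi)
          return phase * C / (4.0 * math.pi * mod_freq_hz)

      sets = {"short": {0: 0.20, 90: 0.05, 180: 0.02, 270: 0.12},
              "long":  {0: 0.80, 90: 0.22, 180: 0.10, 270: 0.55}}
      print(depth_from(accumulate(sets), mod_freq_hz=20e6))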
  • Patent number: 11587451
    Abstract: A VR education system includes a teaching plan storage unit that stores the teaching plan for each lecture including lecture information and at least one keyword and key image, an instructor evaluation unit that converts the instructor's voice data, input through the voice module of the instructor's terminal, into text data by speech-to-text processing and compares the converted data with the teaching plan data stored in the teaching plan storage unit, an instructor recommendation unit that generates an instructor list in the order of matching rate, from highest to lowest, based on the matching rates calculated by the instructor evaluation unit, a lecture providing unit that provides the lecture of an instructor, selected from the instructor list from a trainee terminal, to the trainee terminal, and a lecture monitoring unit that monitors lectures in progress to detect any event that may occur during the lecture.
    Type: Grant
    Filed: November 26, 2020
    Date of Patent: February 21, 2023
    Inventor: Joung-Ho Seo
  • Patent number: 11587259
    Abstract: An apparatus includes an interface and a processor. The interface may be configured to receive pixel data representing respective fields of view of two or more cameras arranged to obtain a predetermined field of view, where the respective fields of view of each adjacent pair of the two or more cameras overlap. The processor may be configured to process the pixel data arranged as video frames and perform a fixed pattern calibration for facilitating multi-view stitching. The fixed pattern calibration may comprise applying a pose calibration process to the video frames. The pose calibration process generally uses (i) intrinsic parameters, a respective translate vector, a respective rotation matrix, and distortion parameters for each lens of the two or more cameras and (ii) a calibration board to obtain configuration parameters for the respective fields of view of the two or more cameras.
    Type: Grant
    Filed: March 9, 2022
    Date of Patent: February 21, 2023
    Assignee: Ambarella International LP
    Inventors: Jian Tang, Qi Feng
  • Patent number: 11588979
    Abstract: Systems and methods for capturing measurements of images of an object to be measured using a mobile electronic device are disclosed. A method may include capturing a measurement image of a measured object within an observation region of a camera of the device, displaying a light-emitted image on a screen of the device, causing the screen to successively display multiple illumination images of a predefined illumination image sequence, and causing the screen of the device to display one or more of the illumination images of the predefined illumination image sequence. The predefined illumination image sequence is set via a user interface of the device, and a selection between different predefined illumination image sequences is made using the user interface. A control unit is configured to select a predefined illumination image sequence from several stored predefined illumination image sequences in dependence on a selection of the measured object of interest and/or a characteristic of interest.
    Type: Grant
    Filed: February 5, 2021
    Date of Patent: February 21, 2023
    Assignee: Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
    Inventors: Udo Seiffert, Andreas Herzog, Andreas Backhaus
  • Patent number: 11582393
    Abstract: An imaging system may include a housing having shape and size sufficient to receive an industrial tool inserted into the housing. The imaging system may further include a plurality of cameras and a plurality of light sources positioned within the housing in a manner to surround the industrial tool upon insertion of the industrial tool into the housing. The imaging system may include a processing unit to control operation of the cameras and light sources and adjust relative positions of the cameras and light sources in relation to the industrial tool to capture a plurality of images of relevant portions of the industrial tool. The plurality of images collectively reveals substantially all of the relevant portions of the industrial tool. A method and computer-readable medium are also disclosed.
    Type: Grant
    Filed: June 23, 2020
    Date of Patent: February 14, 2023
    Assignee: Teradata US, Inc.
    Inventors: Nathan Zenero, Eric van Oort, Ysabel Witt-Doerring, Jacob Lubecki
  • Patent number: 11581357
    Abstract: A depth sensor includes a first pixel including a plurality of first photo transistors each receiving a first photo gate signal, a second pixel including a plurality of second photo transistors each receiving a second photo gate signal, a third pixel including a plurality of third photo transistors each receiving a third photo gate signal, a fourth pixel including a plurality of fourth photo transistors each receiving a fourth photo gate signal, and a photoelectric conversion element shared by first to fourth photo transistors of the plurality of first to fourth photo transistors.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: February 14, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Younggu Jin, Youngchan Kim, Taesub Jung, Yonghun Kwon, Moosup Lim
  • Patent number: 11579298
    Abstract: The present exemplary embodiments provide a hybrid sensor, a Lidar sensor, and a moving object which generate composite data by mapping distance information on an obstacle, obtained through the Lidar sensor, to image information on the obstacle, obtained through an image sensor, and which predict distance information of the composite data based on per-pixel intensity information, in order to generate precise composite data.
    Type: Grant
    Filed: September 19, 2018
    Date of Patent: February 14, 2023
    Assignee: YUJIN ROBOT CO., LTD.
    Inventors: Kyung Chul Shin, Seong Ju Park, Jae Young Lee, Moo Woong Cheon, Man Yeol Kim
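    Code sketch: a minimal Python illustration of the mapping step in the abstract above; LiDAR points are transformed into the camera frame with a known extrinsic pose, projected through the intrinsic matrix, and their distances written into a sparse depth map aligned with the image. The calibration values and pinhole model are assumptions for illustration; the patent's intensity-based densification is not shown.
      import numpy as np

      def map_lidar_to_image(points_lidar, R, t, K, image_shape):
          # R, t: LiDAR-to-camera extrinsics; K: 3x3 camera intrinsic matrix.
          h, w = image_shape
          depth = np.zeros((h, w), dtype=np.float32)
          cam = (R @ points_lidar.T).T + t
          for X, Y, Z in cam:
              if Z <= 0:
                  continue  # point behind the camera
              u, v, _ = K @ np.array([X / Z, Y / Z, 1.0])
              u, v = int(round(u)), int(round(v))
              if 0 <= v < h and 0 <= u < w:
                  depth[v, u] = float(np.linalg.norm([X, Y, Z]))
          return depth

      K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
      R = np.array([[0, -1, 0], [0, 0, -1], [1, 0, 0]], dtype=float)  # axis remap
      pts = np.array([[2.0, 0.1, 0.0], [5.0, -0.3, 0.2]])  # x forward in LiDAR frame
      d = map_lidar_to_image(pts, R, np.zeros(3), K, (480, 640))
      print(np.argwhere(d > 0), d[d > 0])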
  • Patent number: 11580696
    Abstract: A surveying data processing device includes a point cloud data acquiring unit, a three-dimensional model acquiring unit, a first correspondence relationship determining unit, an extended three-dimensional data generating unit, and a second correspondence relationship determining unit. The point cloud data acquiring unit acquires first point cloud data obtained by laser scanning, at a first viewpoint, and acquires second point cloud data obtained by laser scanning, at a second viewpoint. The three-dimensional model acquiring unit acquires data of a three-dimensional model. The first correspondence relationship determining unit obtains a correspondence relationship between the first point cloud data and the three-dimensional model. The extended three-dimensional data generating unit generates extended three-dimensional data in which the first point cloud data is extended, on the basis of the correspondence relationship.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: February 14, 2023
    Assignee: Topcon Corporation
    Inventor: You Sasaki
  • Patent number: 11568517
    Abstract: An electronic apparatus according to the present invention, includes at least one memory and at least one processor which function as: an acquisition unit configured to acquire positional information indicating a position of an object in a captured image; a display control unit configured to perform control such that an item having a length in a first direction, which corresponds to a range in a depth direction in the image, is displayed in a display, and a graphic indicating presence of the object is displayed in association with a position corresponding to the positional information in the item; a reception unit configured to be able to receive an operation of specifying a set range which is at least part of the item; and a processing unit configured to perform predetermined processing based on the set range.
    Type: Grant
    Filed: March 10, 2021
    Date of Patent: January 31, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Hironori Kaida
  • Patent number: 11567205
    Abstract: An object monitoring system includes a distance measuring device which generates, while repeating an imaging cycle having different imaging modes, a distance image of a target space for each of the imaging modes, and a computing device which determines presence or absence of an object within a monitoring area set in the target space based on the distance image.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: January 31, 2023
    Assignee: FANUC CORPORATION
    Inventors: Minoru Nakamura, Yuuki Takahashi, Atsushi Watanabe
  • Patent number: 11561084
    Abstract: Methods, devices and systems provide improved detection, sensing and identification of objects using modulated polarized beams. An example polarization sensitive device includes an illumination source, and a modulator coupled to the illumination source to produce output beams in which polarization states or polarization parameters of the output beams are modulated to produce a plurality of modulated polarized beams. The device further includes a polarization sensitive detector positioned to receive a reflected portion of modulated polarized beams after reflection from an object and to produce information that is indicative of modulation and polarization states of the received beams. The information can be used to enable a determination of a distance between the polarization sensitive device and the object, or a determination of a polarization-specific characteristic of the object.
    Type: Grant
    Filed: April 13, 2018
    Date of Patent: January 24, 2023
    Assignee: ARIZONA BOARD OF REGENTS ON BEHALF OF THE UNIVERSITY OF ARIZONA
    Inventor: Stanley Pau
  • Patent number: 11561581
    Abstract: A virtual display apparatus includes: a flexible display component layer having a light exit surface and a non-light exit surface opposite to the light exit surface; a lens layer disposed at the light exit surface of the flexible display component layer, and configured to converge light; a curvature adjustment layer disposed on the non-light exit surface of the flexible display component layer. The lens layer has a first surface facing the flexible display component layer, and the first surface of the lens layer and the light exit surface of the flexible display component layer have a gap therebetween. The curvature adjustment layer is configured to deform in response to at least one deformation signal, so as to adjust distances between the first surface of the lens layer and different positions of the light exit surface of the flexible display component layer along a thickness direction of the lens layer.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: January 24, 2023
    Assignees: BEIJING BOE DISPLAY TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Mingchao Wang, Xuewei Zhang
  • Patent number: 11557092
    Abstract: Various examples are provided related to systems and processes for generating verified wireframes. Wireframes corresponding to at least part of a structure or element of interest can be generated from 2D images, 3D representations (e.g., a point cloud), or a combination thereof. The wireframe can include one or more features that correspond to a structural aspect of the structure or element of interest. The verification can comprise projecting or overlaying the generated wireframe over selected 2D images and/or a point cloud that incorporates the one or more features. The wireframe can be adjusted by a user and/or a computer to align the 2D images and/or 3D representations thereto, thereby generating a verified wireframe including at least a portion of the structure or element of interest. The verified wireframes can be used to generate wireframe models, measurement information, reports, construction estimates or the like.
    Type: Grant
    Filed: April 23, 2021
    Date of Patent: January 17, 2023
    Assignee: Pointivo, Inc.
    Inventors: Habib Fathi, Daniel L. Ciprari, William Wilkins, Jr., Iven Connary
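    Code sketch: a minimal Python illustration of the verification step in the abstract above; generated wireframe vertices are projected through a camera model so the edges can be overlaid on a selected 2D image for comparison. The pinhole intrinsics, pose, and toy roof-outline wireframe are assumptions for illustration.
      import numpy as np

      def project_wireframe(vertices_3d, K, R, t):
          # Project 3D wireframe vertices to pixel coordinates (N x 2).
          cam = (R @ vertices_3d.T).T + t
          uv = (K @ (cam / cam[:, 2:3]).T).T
          return uv[:, :2]

      def overlay_edges(pixels, edges):
          # A wireframe is vertices plus index pairs; list the 2D segments to draw.
          return [(tuple(pixels[i]), tuple(pixels[j])) for i, j in edges]

      K = np.array([[800.0, 0, 512], [0, 800.0, 384], [0, 0, 1]])
      roof = np.array([[0, 0, 10.0], [4, 0, 10.5], [4, 3, 10.5], [0, 3, 10.0]])
      px = project_wireframe(roof, K, np.eye(3), np.zeros(3))
      print(overlay_edges(px, edges=[(0, 1), (1, 2), (2, 3), (3, 0)]))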
  • Patent number: 11553164
    Abstract: A light projection system includes a microelectromechanical (MEMS) mirror configured to operate in response to a mirror drive signal and to generate a mirror sense signal as a result of the operation. A mirror driver is configured to generate the mirror drive signal in response to a drive control signal. A zero cross detector is configured to detect zero crosses of the mirror sense signal. A controller is configured to generate the drive control signal as a function of the detected zero crosses of the mirror sense signal.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: January 10, 2023
    Assignees: STMicroelectronics Ltd, STMicroelectronics S.r.l.
    Inventors: Massimo Ratti, Eli Yaser, Naomi Petrushevsky, Yotam Nachmias
  • Patent number: 11544919
    Abstract: Drone inspection of an undifferentiated surface using a reference image is disclosed. A plurality of images of a surface of a structure are analyzed to identify at least one image that depicts a feature in a portion of the surface based on a feature criterion, the plurality of images being generated by a drone comprising a camera, each image depicting a corresponding portion of the surface, and at least some of the images depicting the corresponding portion of the surface and a portion of a reference image. A location on the surface that corresponds to the at least one image is determined based on a depiction of the reference image in an image of the plurality of images.
    Type: Grant
    Filed: December 9, 2020
    Date of Patent: January 3, 2023
    Assignee: PRECISIONHAWK, INC.
    Inventors: Matthew E. Tompkins, John P. Cannon, Jr., Richard P. Statile
  • Patent number: 11539934
    Abstract: An image display method used by an image surveillance system is provided. The image surveillance system includes a plurality of cameras. Each camera is configured to shoot a part of a physical environment to form a real-time image. The image display method has the following steps: First, a three-dimensional space model corresponding to the physical environment is established. Next, based on the height, the shooting angle and the focal length of each camera, a corresponding viewing frustum is established for that camera. According to the viewing frustum, a shooting coverage area of the camera in the physical environment is obtained. Next, a virtual coverage area corresponding to the shooting coverage area is searched out in the three-dimensional space model. Next, the real-time image is imported to the three-dimensional space model and projected to the virtual coverage area.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: December 27, 2022
    Inventor: Syuan-Pei Chang
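    Code sketch: a minimal Python illustration of deriving a camera's ground coverage, the basis of the shooting coverage area in the abstract above, from its mounting height, downward tilt, and the vertical field of view implied by focal length and sensor size. Flat-floor geometry and the example numbers are assumptions for illustration; the full method also intersects the 3D frustum with the space model.
      import math

      def ground_coverage(height_m, tilt_deg, focal_mm, sensor_h_mm):
          # Near/far ground distances seen by a downward-tilted camera.
          vfov = 2.0 * math.degrees(math.atan(sensor_h_mm / (2.0 * focal_mm)))
          steep = math.radians(tilt_deg + vfov / 2.0)    # lower edge of frustum
          shallow = math.radians(tilt_deg - vfov / 2.0)  # upper edge of frustum
          near = height_m / math.tan(steep)
          far = float("inf") if shallow <= 0 else height_m / math.tan(shallow)
          return near, far

      # Camera 3 m above the floor, tilted 30 degrees down, 4 mm lens, ~4 mm sensor height.
      print(ground_coverage(height_m=3.0, tilt_deg=30.0, focal_mm=4.0, sensor_h_mm=3.96))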
  • Patent number: 11537200
    Abstract: Provided are a human-machine interaction method, a system, a processing device and a computer readable storage medium, wherein the method includes: controlling a 3D display to output a 3D view to present a virtual target object; receiving a user image taken by a sight tracking camera, and detecting an eye gaze region according to the user image; receiving a hand image taken by a gesture detection camera, and detecting whether a user's hand collides with the virtual target object and/or grabs the virtual target object according to the hand image; and controlling playing of the 3D view according to whether human eyes gaze at the virtual target object and whether the user's hand collides with and/or grasps the virtual target object.
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: December 27, 2022
    Assignees: Beijing BOE Optoelectronics Technology Co., Ltd., BOE Technology Group Co., Ltd.
    Inventors: Guixin Yan, Jiankang Sun, Lili Chen, Hao Zhang
  • Patent number: 11528436
    Abstract: An imaging apparatus includes a first photoelectric conversion unit configured to convert light into charge, a second photoelectric conversion unit configured to convert light into charge, and a comparison unit. The comparison unit includes a first transistor and a second transistor. The first transistor receives a signal that is based on the charge converted by the first photoelectric conversion unit. The second transistor receives a signal that is based on the charge converted by the second photoelectric conversion unit.
    Type: Grant
    Filed: May 14, 2020
    Date of Patent: December 13, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Masanori Tanaka
  • Patent number: 11525918
    Abstract: The disclosure relates to a time-of-flight camera comprising a time-of-flight sensor having several time-of-flight pixels for determining a phase shift of emitted and captured light, with distance values being determined in accordance with the detected phase shifts, characterised in that the time-of-flight camera has a memory in which parameters of a point spread function, which characterise the time-of-flight camera and the time-of-flight sensor, are stored, and an evaluation unit which is designed to deconvolve a detected complex-valued image in Fourier space in accordance with the stored point spread function, whereby a complex-valued image corrected for diffused light is determined, and the phase shifts or distance values are determined using the corrected complex-valued image.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: December 13, 2022
    Assignee: PMDTECHNOLOGIES AG
    Inventors: Stephan Ulrich, Lutz Heyne
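    Code sketch: a minimal Python illustration of the correction described in the abstract above; the detected complex-valued ToF image is divided by the stored point spread function in Fourier space (here with a small Wiener-style regularizer), and phase shifts are taken from the corrected image. The regularization constant and the synthetic example are assumptions for illustration.
      import numpy as np

      def correct_complex_image(complex_image, psf, eps=1e-3):
          # Deconvolve the complex-valued image with the stored PSF in Fourier space.
          H = np.fft.fft2(psf, s=complex_image.shape)
          G = np.fft.fft2(complex_image)
          return np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + eps))

      def phase_to_distance(corrected, mod_freq_hz, c=299_792_458.0):
          phase = np.mod(np.angle(corrected), 2.0 * np.pi)
          return phase * c / (4.0 * np.pi * mod_freq_hz)

      rng = np.random.default_rng(0)
      truth = np.exp(1j * rng.uniform(0.0, 1.0, (8, 8)))      # ideal complex image
      psf = np.zeros((8, 8)); psf[0, 0], psf[0, 1] = 0.9, 0.1  # mild stray-light PSF
      blurred = np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(psf))
      print(np.round(phase_to_distance(correct_complex_image(blurred, psf), 20e6), 3))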
  • Patent number: 11520024
    Abstract: Extrinsic calibration of a Light Detection and Ranging (LiDAR) sensor and a camera can comprise constructing a first plurality of reconstructed calibration targets in a three-dimensional space based on physical calibration targets detected from input from the LiDAR and a second plurality of reconstructed calibration targets in the three-dimensional space based on physical calibration targets detected from input from the camera. Reconstructed calibration targets in the first and second plurality of reconstructed calibration targets can be matched and a six-degree of freedom rigid body transformation of the LiDAR and camera can be computed based on the matched reconstructed calibration targets. A projection of the LiDAR to the camera can be computed based on the computed six-degree of freedom rigid body transformation.
    Type: Grant
    Filed: December 24, 2019
    Date of Patent: December 6, 2022
    Assignee: NIO Technology (Anhui) Co., Ltd.
    Inventors: Hiu Hong Yu, Tong Lin, Xu Chen, Zhenxiang Jian
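    Code sketch: a minimal Python illustration of the final step in the abstract above; given matched reconstructed calibration-target centers from the LiDAR and the camera, the six-degree-of-freedom rigid-body transform is recovered with the standard SVD (Kabsch) solution. Treating each target as a single matched 3D point is a simplification for illustration.
      import numpy as np

      def rigid_transform(lidar_pts, camera_pts):
          # Least-squares R, t such that camera_pts is approximately R @ lidar_pts + t.
          mu_l, mu_c = lidar_pts.mean(axis=0), camera_pts.mean(axis=0)
          H = (lidar_pts - mu_l).T @ (camera_pts - mu_c)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
          R = Vt.T @ D @ U.T
          return R, mu_c - R @ mu_l

      lidar = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1.0, 1.0]])
      true_R = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
      camera = (true_R @ lidar.T).T + np.array([0.1, -0.2, 0.3])
      R, t = rigid_transform(lidar, camera)
      print(np.round(R, 3), np.round(t, 3))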
  • Patent number: 11520332
    Abstract: An autonomous mobile device (AMD) uses sensors to explore a physical space and determine the locations of obstacles. Simultaneous localization and mapping (SLAM) techniques are used by the AMD to designate as keyframes some images and their associated descriptors of features in the space. Each keyframe indicates a location and orientation of the AMD relative to those features. Anchors are specified relative to keyframes. A marker is specified relative to one or more anchors. Because markers are associated with features in the physical space, they maintain their association with the physical space through various processes such as SLAM loop closures. Markers may specify locations in the physical space, such as navigation waypoints, navigation destinations such as a goal pose for exploring an unexplored area, as an observation target to facilitate exploration, and so forth. Markers may also be used to specify block listed locations to be avoided during exploration.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: December 6, 2022
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: James Charles Zamiska, David Allen Fotland, Roger Robert Webster, Mohit Deshpande, Robert Franklin Ebert, Nikhil Sharma, Rachel Liao, Chang Young Kim
  • Patent number: 11514625
    Abstract: Provided are a method for drawing a motion track, an electronic device and a storage medium. The method includes the following steps: multiple video frames are acquired, where the multiple video frames include a current video frame and at least one video frame before the current video frame (S110); a display position of a target motion element in each video frame and a display position of a background pixel point in each video frame are identified (S120); a display position of the target motion element of each video frame in the current video frame is calculated (S130); an actual motion track of the target motion element is drawn in the current video frame according to the display position of the target motion element of each video frame in the current video frame, and the current video frame in which the drawing is completed is displayed (S140).
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: November 29, 2022
    Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
    Inventors: Shikun Xu, Yandong Zhu, Changhu Wang
  • Patent number: 11508257
    Abstract: There is described a method for controlling, in a display, a virtual image of a displaying object for which displaying is controllable. The method comprises providing a curved mirrored surface opposing the displaying object to produce the virtual image. The location of an observer is determined, with respect to the curved mirrored surface. A position of the virtual image can then be determined for the observer at the location, wherein this virtual image provides at least one of parallax and a stereoscopic depth cue. The displaying object is controlled to produce the virtual image as determined.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: November 22, 2022
    Assignee: 8259402 CANADA INC.
    Inventors: Jason Carl Radel, Fernando Petruzziello
  • Patent number: 11501486
    Abstract: A system for characterising surfaces in a real-world scene, the system comprising an object identification unit operable to identify one or more objects within one or more captured images of the real-world scene, a characteristic identification unit operable to identify one or more characteristics of one or more surfaces of the identified objects, and an information generation unit operable to generate information linking an object and one or more surface characteristics associated with that object.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: November 15, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Nigel John Williams, Fabio Cappello, Timothy Edward Bradley, Rajeev Gupta
  • Patent number: 11500510
    Abstract: An information processing apparatus includes a processor configured to: control a device that disposes a virtual image in real space; and dispose, in a case where the device disposes the virtual image in front of a physical display device with an external terminal including the physical display device connected to the information processing apparatus, a button for requesting an operation of the external terminal as the virtual image.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: November 15, 2022
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Kengo Tokuchi, Takahiro Okayasu
  • Patent number: 11498235
    Abstract: A processing system (10) and corresponding method (158) are provided for processing workpieces (WP), including food items, to cut and remove undesirable components from the food items and/or portion the food items while being conveyed on a conveyor system (12). An X-ray scanning station (14) is located on an upstream conveyor section (20) to ascertain size and/or shape parameters of the food items as well as the location of any undesirable components of the food items, such as bones, fat or cartilage. Thereafter the food items are transferred to a downstream conveyor (20) at which is located an optical scanner (102) to ascertain the size and/or shape parameters of the food items. The results of the X-ray and optical scanning are transmitted to a processor (18) to confirm that the food item scanned by the optical scanner is the same as that previously scanned by the X-ray scanner.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: November 15, 2022
    Assignees: John Bean Technologies Corporation, Nordischer Maschinenbau Rud. Baader GmbH + Co. KG
    Inventors: George R. Blaine, Jon A. Hocker, Alexander Steffens
  • Patent number: 11501489
    Abstract: An extended or cross reality system includes a computing device communicably connected to a plurality of portable electronic devices via a network component, a repository accessible by the computing device and the plurality of portable electronic devices, and a dense map merge component. The extended or cross reality system determines a representation for multiple portions of a 3D environment based at least in part upon a set of dense maps received from the plurality of portable devices, wherein the set of dense maps is grouped into multiple subgroups based at least in part upon pose data pertaining to the set of dense maps or surface information in the set of dense maps. The extended or cross reality system stores the representation as at least a portion of a shared persistent dense map.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: November 15, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Yilun Cao, Mohan Babu Kandra, David Geoffrey Molyneaux, Daniel Olshansky, David Paul Pena, Frank Thomas Steinbrücker, Rafael Domingos Torres
  • Patent number: 11494997
    Abstract: An augmented reality (AR) system includes a display having a 2D display coordinate system and a controller. The controller receives, from a camera, an image having a 3D AR coordinate system, recognizes a first object in the image, determines a first distance to a viewpoint, and receives an instruction to display a second object image on the display, which includes a 3D AR position of the second object with a depth coordinate indicating a second distance to the viewpoint. The controller retrieves dimensions of the second object, obtains a distance index for the camera, calculates a distance scaling factor, calculates a 2D display position for the 3D AR position, calculates display dimensions for the second object, generates a display object image by scaling the second object image to the display dimensions, and displays the display object image onto the display based on a comparison of the first distance and the second distance.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: November 8, 2022
    Assignee: TP Lab, Inc.
    Inventor: Chi Fai Ho
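    Code sketch: a minimal Python illustration of the scaling logic in the abstract above; a distance scaling factor computed from the camera's distance index and the object's depth converts the 3D AR position and real-world dimensions into a 2D display position and display dimensions. Interpreting the distance index as pixels per metre at one metre of depth is an assumption for illustration.
      def display_placement(ar_pos, object_size_m, distance_index, screen_center):
          # ar_pos: (x, y, depth) in metres; distance_index: pixels per metre at 1 m depth.
          x, y, depth = ar_pos
          scale = distance_index / depth            # distance scaling factor
          pos_2d = (screen_center[0] + x * scale, screen_center[1] - y * scale)
          dims_px = (object_size_m[0] * scale, object_size_m[1] * scale)
          return pos_2d, dims_px

      # A 0.5 m x 0.3 m object placed 1 m right of centre at 2 m depth on a 1080p display.
      print(display_placement((1.0, 0.0, 2.0), (0.5, 0.3),
                              distance_index=800.0, screen_center=(960, 540)))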
  • Patent number: 11494990
    Abstract: In a general aspect, a method can include receiving data defining an augmented reality (AR) environment including a representation of a physical environment, and changing tracking of an AR object within the AR environment between region-tracking mode and plane-tracking mode.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: November 8, 2022
    Assignee: Google LLC
    Inventors: Bryan Woods, Jianing Wei, Sundeep Vaddadi, Cheng Yang, Konstantine Tsotsos, Keith Schaefer, Leon Wong, Keir Banks Mierle, Matthias Grundmann
  • Patent number: 11493908
    Abstract: An industrial safety zone configuration system leverages a digital twin of an industrial automation system to assist in configuring safety sensors for accurate monitoring of a desired detection zone. The system renders a graphical representation of the automation system based on the digital twin and allows a user to define a desired detection zone to be monitored as a three-dimensional volume within the virtual industrial environment. Users can define the locations and orientations of respective safety sensors as sensor objects that can be added to the graphical representation. Each sensor object has a set of object attributes representing configuration settings available on the corresponding physical sensor. The system can identify sensor configuration settings that will yield an estimated detection zone that closely conforms to the defined detection zone, and generate sensor configuration data based on these settings that can be used to configure the physical safety sensors.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: November 8, 2022
    Assignee: Rockwell Automation Technologies, Inc.
    Inventors: Chris John Yates, Maciej Stankiewicz, Jonathan Alexander, Chris Softley
  • Patent number: 11494991
    Abstract: Systems and methods of digital assistants in an augmented reality environment and local determination of virtual object placement are disclosed. Apparatuses of single or multi-directional lens as portals between a physical world and a digital world component of the augmented reality environment are also disclosed. In one aspect, embodiments of the present disclosure include a method, which may be implemented on a system, to present, a digital assistant to the user. Responsive to receiving a command, the digital assistant can trigger an operation on the augmented reality environment such that the user is able to engage with the augmented reality environment via the user interface. The method can further include training the digital assistant to learn from the activities occurring in the augmented reality environment and/or behaviors of the user from the action or the interaction with the real world environment.
    Type: Grant
    Filed: April 21, 2020
    Date of Patent: November 8, 2022
    Assignee: Magical Technologies, LLC
    Inventors: Nova Spivack, Matthew Hoerl
  • Patent number: 11494916
    Abstract: A method for separating an image can include: acquiring a foreground pixel value and a background pixel value, where the foreground pixel value and the background pixel value are configured to separate a target area from an original image; acquiring a foreground geodesic distance and a background geodesic distance, where the foreground geodesic distance is a distance between a pixel value of each of pixel points and the foreground pixel value in the original image, and the background geodesic distance is a distance between a pixel value of each of the pixel points and the background pixel value; determining a transparency based on the foreground geodesic distance and the background geodesic distance; and separating the target area based on the transparency.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: November 8, 2022
    Assignee: BEIJING DAJIA INTERNET INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Yonghang Yu, Congli Song
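    Code sketch: a minimal Python illustration of the transparency step in the abstract above; each pixel's alpha is derived from its foreground and background geodesic distances, and the target area is then separated with that matte. The specific ratio used for the transparency and the composite-over-black output are assumptions for illustration.
      import numpy as np

      def transparency(fg_geodesic, bg_geodesic, eps=1e-8):
          # Pixels whose colour is geodesically closer to the foreground get alpha near 1.
          fg = np.asarray(fg_geodesic, dtype=np.float64)
          bg = np.asarray(bg_geodesic, dtype=np.float64)
          return bg / (fg + bg + eps)

      def separate_target(image, alpha):
          # Composite the target region over black using the computed matte.
          return (image * alpha[..., None]).astype(np.uint8)

      img = np.full((2, 2, 3), 200, dtype=np.uint8)
      d_fg = np.array([[0.0, 5.0], [1.0, 9.0]])
      d_bg = np.array([[9.0, 4.0], [8.0, 0.5]])
      a = transparency(d_fg, d_bg)
      print(np.round(a, 2))
      print(separate_target(img, a))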
  • Patent number: 11496725
    Abstract: An image display device includes a display panel, a barrier panel, a light projecting unit, and a controller. The display panel is configured so as to include a first display region. The barrier panel is configured so as to include a first barrier region. The light projecting unit is configured so as to include a first light emitting region. The controller is configured so that a portion located in the first display region is displayed as one parallax image frame including two subframes, and configured so that a light quantity of light emitted from the first light emitting region is reduced during a frame change period including a timing of changing display from the parallax image frame to a new parallax image frame.
    Type: Grant
    Filed: October 23, 2019
    Date of Patent: November 8, 2022
    Assignee: KYOCERA Corporation
    Inventor: Kaoru Kusafuka
  • Patent number: 11490070
    Abstract: Scenes can be imaged under low-light conditions using flash photography. However, the flash can be irritating to individuals being photographed, especially when those individuals' eyes have adapted to the dark. Additionally, portions of images generated using a flash can appear washed-out or otherwise negatively affected by the flash. These issues can be addressed by using a flash at an invisible wavelength, e.g., an infrared and/or ultraviolet flash. At the same time a scene is being imaged, at the invisible wavelength of the invisible flash, the scene can also be imaged at visible wavelengths. This can include simultaneously using both a standard RGB camera and a modified visible-plus-invisible-wavelengths camera (e.g., an “IR-G-UV” camera). The visible and invisible image data can then be combined to generate an improved visible-light image of the scene, e.g., that approximates a visible light image of the scene, had the scene been illuminated during daytime light conditions.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: November 1, 2022
    Assignee: Google LLC
    Inventors: Tianfan Xue, Jian Wang, Jiawen Chen, Jonathan Barron
  • Patent number: 11481025
    Abstract: There is provided a display control apparatus including a control section configured to control an operation on an object in a position corresponding to an operating position recognized on the basis of a relation between multiple operating lines each displayed corresponding to an indicator body. The apparatus allows the object to be operated with as little motion as possible.
    Type: Grant
    Filed: November 19, 2019
    Date of Patent: October 25, 2022
    Assignee: SONY GROUP CORPORATION
    Inventors: Itaru Shimizu, Yuichi Miyagawa, Takanobu Omata, Kaoru Koike, Hisataka Izawa
  • Patent number: 11475641
    Abstract: A head-mounted device (HMD) is structured to include at least one computer vision camera that omits an IR light filter. Consequently, this computer vision camera's sensor is able to detect IR light, including IR laser light, in the environment. The HMD is configured to generate an image of the environment using the computer vision camera. This image is then fed as input into a machine learning (ML) algorithm that identifies IR laser light, which is detected by the sensor and which is recorded in the image. The HMD then visually displays a notification comprising information corresponding to the detected IR laser light.
    Type: Grant
    Filed: July 21, 2020
    Date of Patent: October 18, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Raymond Kirk Price, Michael Bleyer, Christopher Douglas Edmonds
  • Patent number: 11477260
    Abstract: Embodiments of systems and methods for transmission of audio information are disclosed herein. In one example, a System on Chip (SoC) includes a wired transceiver module, a wireless module, a Frequency Modulation (FM) demodulation module, and an audio information codec module operatively coupled to the wired transceiver and the FM demodulation module. The wired transceiver module is configured to receive a data packet corresponding to first audio information. The wireless module is configured to receive an FM signal, corresponding to second audio information. The FM demodulation module is configured to output the second audio information based on demodulating the FM signal. The audio information codec module is configured to decode the first audio information and the second audio information based on the data packet and the demodulated FM signal, respectively.
    Type: Grant
    Filed: May 25, 2020
    Date of Patent: October 18, 2022
    Assignee: BESTECHNIC (SHANGHAI) CO., LTD.
    Inventors: Weifeng Tong, Liang Zhang, Binbin Guo, Lu Chai, Hua Zeng, Zhichen Tu, Xinwei Li, Wenyu Xiao
  • Patent number: 11473921
    Abstract: A method for supporting a user of a first vehicle to follow a second vehicle includes obtaining a picture, by a camera, of at least a part of a surrounding of the first vehicle, detecting vehicles in the obtained picture, displaying a representation of the detected vehicles on a user interface to the user, obtaining input from the user via the user interface indicating which of the detected vehicles is the second vehicle that the user would like to follow, obtaining, via the camera, at least one identification data of the second vehicle, tracking the position of the second vehicle, and transmitting the position of the second vehicle to a navigation system of the first vehicle.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: October 18, 2022
    Assignee: NINGBO GEELY AUTOMOBILE RESEARCH & DEVELOPMENT CO.
    Inventors: Magnus Nilsson, Måns Pihlsgård