Picture Signal Generator Patents (Class 348/46)
  • Patent number: 11346085
    Abstract: An obstacle detection device capable of appropriately setting an area for detecting an obstacle in the vicinity of a machine body of a construction machine in consideration of the state of inclination of the construction machine with respect to its surrounding ground area includes an inclination information acquisition section to obtain inclination information of the machine body with respect to a surrounding ground area in the vicinity of the construction machine, and an area setting section to set a monitoring area for detecting an obstacle, and to modify the monitoring area according to the inclination information.
    Type: Grant
    Filed: November 6, 2018
    Date of Patent: May 31, 2022
    Assignee: KOBELCO CONSTRUCTION MACHINERY CO., LTD.
    Inventors: Sho Fujiwara, Tomofumi Okada
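A heavily hedged illustration of the kind of adjustment the 11346085 abstract describes: a monitoring polygon defined around the machine body can be re-projected onto the surrounding ground once the body's roll and pitch relative to that ground are known. The re-projection rule, the roll/pitch parameterization, and all names below are assumptions for illustration, not the patented method.

```python
# Hypothetical sketch: keep an obstacle-monitoring polygon anchored to the ground
# plane when the machine body tilts. Roll/pitch convention is an assumption.
import numpy as np
from scipy.spatial.transform import Rotation as R

def adjust_monitoring_area(corners_body_xy, roll_deg, pitch_deg):
    """Horizontal ground footprint (N x 2, metres) of body-frame monitoring corners."""
    rot = R.from_euler("xy", [roll_deg, pitch_deg], degrees=True)   # body -> ground
    pts = np.column_stack([corners_body_xy, np.zeros(len(corners_body_xy))])
    return rot.apply(pts)[:, :2]    # drop height: footprint on the level ground

# e.g. a 4 m x 4 m rectangle behind the machine, re-projected for a 10 degree pitch
area = adjust_monitoring_area(np.array([[-2, 1], [2, 1], [2, 5], [-2, 5]]), 0.0, 10.0)
```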
  • Patent number: 11349843
    Abstract: Systems, methods, and devices for extending the capabilities of an existing application to augment learning of a device in training are disclosed. The method includes: creating, on a user device, an authorized session for running a local service application on the user device within an environment of an existing application which runs on a system remote from the user device; and displaying, on a home screen menu personalized for the user, selection options to launch the local service application to execute within the existing application and provide extended functionalities that augment learning of a device in training, wherein the selection options enable the user device to perform one or more tasks on the other device, including: resuming activities of a recent task previously performed, updating the user's credentials through the training or learning, and starting a new task on the device in training or on another new device.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: May 31, 2022
    Assignee: EDUTECHNOLOGIC, LLC
    Inventors: Brad Henry, Howell Dalton McCullough, IV
  • Patent number: 11347978
    Abstract: To suppress a sense of strangeness caused by mode transition when images obtained from a plurality of image capturing devices are fused, the present disclosure provides an image processing apparatus including: a fusion processing unit that fuses a plurality of pieces of image information obtained from each of a plurality of image capturing devices that capture the same subject; and a fusion mode determination unit that determines a mode of the fusion in accordance with the value of a predetermined variable and sets, in accordance with each of a plurality of the modes, the threshold value of the variable for determining the mode.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: May 31, 2022
    Assignee: SONY CORPORATION
    Inventors: Shuichi Goto, Keisuke Ito
  • Patent number: 11347302
    Abstract: This specification describes a method comprising responding to a first gesture by a first user delimiting a visual virtual reality content portion from visual virtual reality content being consumed by the first user via a first head-mounted display by selecting the delimited visual virtual reality content portion, responding to a second gesture by the first user directed towards a content consumption device associated with a second user by identifying the content consumption device as a recipient of the selected visual virtual reality content portion, and causing the selected visual virtual reality content portion to be provided to the content consumption device for consumption by the second user.
    Type: Grant
    Filed: February 2, 2017
    Date of Patent: May 31, 2022
    Assignee: Nokia Technologies Oy
    Inventor: Francesco Cricri
  • Patent number: 11348306
    Abstract: A method of processing an image by a device includes obtaining one or more images including captured images of objects in a target space, generating metadata including information about mapping between the one or more images and a three-dimensional (3D) mesh model used to generate a virtual reality (VR) image of the target space, and transmitting the one or more images and the metadata to a terminal.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: May 31, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jae-yun Jeong, Do-wan Kim, Yong-gyoo Kim, Gun-hee Lee, Jae-kyeong Lee, Jin-bong Lee, Dai-woong Choi, Hyun-soo Choi
  • Patent number: 11343634
    Abstract: An apparatus and method for rendering an audio signal for a playback to a user is disclosed. In one example, the apparatus is configured to determine information about an orientation of a head of the user using an optical sensor. The apparatus is configured to determine information about an orientation of the optical sensor using an orientation sensor which is arranged in a predetermined positional relationship with respect to the optical sensor. The apparatus is configured to consider the information about the orientation of the optical sensor when determining the information about the orientation of the head. The apparatus is configured to perform a spatial rendering of an audio signal in dependence on the information about the orientation of the head of the user.
    Type: Grant
    Filed: October 22, 2020
    Date of Patent: May 24, 2022
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Dominik Häussler, Frederick Melville, Dennis Rosenberger, Stefan Döhla
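The core of the 11343634 abstract is a frame composition: the optical sensor observes the head in its own frame, and the orientation sensor supplies the optical sensor's orientation, so chaining the two yields the head orientation used for spatial rendering. A minimal sketch of that chaining, assuming scipy's rotation type and a fixed, pre-calibrated camera-to-IMU offset (an assumption):

```python
# Sketch only: compose world<-IMU, IMU<-camera, camera<-head rotations.
from scipy.spatial.transform import Rotation as R

def head_in_world(r_imu_to_world: R, r_cam_to_imu: R, r_head_to_cam: R) -> R:
    """Head orientation in the world frame, for driving binaural rendering."""
    return r_imu_to_world * r_cam_to_imu * r_head_to_cam

# Example: identity device pose, head yawed 30 degrees toward the left loudspeaker.
head = head_in_world(R.identity(), R.identity(), R.from_euler("z", 30, degrees=True))
yaw_deg = head.as_euler("zyx", degrees=True)[0]   # azimuth passed to the renderer
```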
  • Patent number: 11341224
    Abstract: A handheld biometric imaging device having an array of cameras configured to simultaneously capture face, iris and fingerprint biometrics of a subject. The device includes a plurality of visible-light cameras and a plurality of infrared-light cameras capable of being triggered simultaneously to obtain a plurality of images from which a 3D image of the light field can be constructed. The device includes a plurality of visible-light illuminators and a plurality of infrared-light illuminators that allow images of a subject to be captured under different lighting profiles. The device may include an onboard control system that is capable of reconstructing a face region, an iris region and a fingerprint region from the 3D light-field image, and then extracting a corresponding face template, iris template, and fingerprint template from the respective reconstructed regions.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: May 24, 2022
    Assignee: UT Battelle, LLC
    Inventors: David S. Bolme, Hector J. Santos Villalobos, Aravind K. Mikkilineni
  • Patent number: 11334326
    Abstract: Systems, devices, and methods for software development or modification. The disclosed technology relates to transforming interactions with physical blocks by a human developer on an activity surface into computer-understandable digital information or logic for developing or modifying software (e.g., websites or mobile applications) in real-time or near real-time. The physical blocks are representative of software elements used in software development. For example, the structures, colors, shapes, or hardness/softness/squeeze/bend/flex/elastic/shape-memory/rigid properties of the physical blocks, whether symmetrical or asymmetrical and whether open-shaped or closed-shaped, can determine which software elements are being developed, and the arrangement of the blocks can be mapped to how the software elements are to be included in the software. Users located remotely from the developer can provide annotations or feedback to the software being developed in real-time.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: May 17, 2022
    Inventors: Adrian Andres Rodriguez-Velasquez, Ken Chhan
  • Patent number: 11333874
    Abstract: In some embodiments of SCAPE imaging systems, a Powell lens is used to expand light from a light source into a sheet of illumination light. An optical system sweeps the sheet of illumination light through a sample, and forms an image at an intermediate image plane from detected return light. A camera captures images of the intermediate image plane. In some embodiments of SCAPE imaging systems, an optical system sweeps the sheet of illumination light through a sample, and forms an image at an intermediate image plane from detected return light. A camera captures images of the intermediate image plane. In the latter embodiments, the optical system is deliberately misaligned with respect to a true alignment position so that a significant portion of light that would be lost at the true alignment position will arrive at the camera.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: May 17, 2022
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Elizabeth M. C. Hillman, Venkatakaushik Voleti
  • Patent number: 11328446
    Abstract: Depths of one or more objects in a scene may be measured with enhanced accuracy through the use of a light-field camera and a depth sensor. The light-field camera may capture a light-field image of the scene. The depth sensor may capture depth sensor data of the scene. Light-field depth data may be extracted from the light-field image and used, in combination with the sensor depth data, to generate a depth map indicative of distance between the light-field camera and one or more objects in the scene. The depth sensor may be an active depth sensor that transmits electromagnetic energy toward the scene; the electromagnetic energy may be reflected off of the scene and detected by the active depth sensor. The active depth sensor may have a 360° field of view; accordingly, one or more mirrors may be used to direct the electromagnetic energy between the active depth sensor and the scene.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: May 10, 2022
    Assignee: Google LLC
    Inventors: Jie Tan, Gang Pan, Jon Karafin, Thomas Nonn, Julio C. Hernandez Zaragoza
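The fusion step in the 11328446 abstract (light-field depth combined with active-sensor depth) is not spelled out; a common and simple choice is a per-pixel confidence-weighted blend, sketched below purely as an assumed illustration rather than the patented rule.

```python
# Assumed fusion rule, not the patented one: confidence-weighted average of two depth maps.
import numpy as np

def fuse_depth(d_lightfield, conf_lightfield, d_sensor, conf_sensor):
    """Blend two depth maps; confidence maps are in [0, 1], zero where no estimate exists."""
    w = conf_lightfield + conf_sensor
    blended = (conf_lightfield * d_lightfield + conf_sensor * d_sensor) / np.maximum(w, 1e-6)
    return np.where(w > 0, blended, np.nan)   # NaN where neither source has evidence
```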
  • Patent number: 11327313
    Abstract: The present disclosure relates to a method for displaying an image with a specific depth of field. The method comprises the steps of obtaining information data related to a focal distance adapted to a user gazing at a display, determining a pupil size of said user, estimating a depth of field of said user's eyes based on said focal distance and said pupil size, and rendering an image based on said depth of field to be displayed on said display. Further, the present disclosure relates to a system, a head-mounted display and a non-transitory computer readable medium.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: May 10, 2022
    Assignee: Tobii AB
    Inventor: Denny Rönngren
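The 11327313 abstract names exactly two inputs, gaze focal distance and pupil size; one way (not necessarily Tobii's) to turn those into a depth-of-field interval is the ordinary thin-lens near/far-limit calculation, treating the pupil as the aperture. The eye focal length and acceptable blur circle below are assumed constants for illustration only.

```python
# Hedged sketch: thin-lens depth-of-field limits with the pupil as the aperture.
EYE_FOCAL_LENGTH_M = 0.017   # assumed effective focal length of the eye (~17 mm)
BLUR_CIRCLE_M = 10e-6        # assumed acceptable blur spot

def depth_of_field(focal_distance_m, pupil_diameter_m):
    """(near_limit_m, far_limit_m) of the range perceived as acceptably sharp."""
    f, c = EYE_FOCAL_LENGTH_M, BLUR_CIRCLE_M
    n = f / pupil_diameter_m                 # f-number of the eye
    h = f * f / (n * c) + f                  # hyperfocal distance
    s = focal_distance_m
    near = h * s / (h + (s - f))
    far = h * s / (h - (s - f)) if s < h else float("inf")
    return near, far
```

With these assumptions, a 2 mm pupil focused at 1 m keeps a noticeably wider range sharp than a 6 mm pupil, which is the dependence the rendering step would exploit.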
  • Patent number: 11321876
    Abstract: A long-baseline and long depth-range stereo vision system is provided that is suitable for use in non-rigid assemblies where relative motion between two or more cameras of the system does not degrade estimates of a depth map. The stereo vision system may include a processor that tracks camera parameters as a function of time to rectify images from the cameras even during fast and slow perturbations to camera positions. Factory calibration of the system is not needed, and manual calibration during regular operation is not needed, thus simplifying manufacturing of the system.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: May 3, 2022
    Assignee: NODAR Inc.
    Inventors: Leaf Alden Jiang, Philip Bradley Rosen, Piotr Swierczynski
  • Patent number: 11323603
    Abstract: Auto exposure processing for spherical images improves image quality by reducing visible exposure level variation along a stitch line within a spherical image. An average global luminance value is determined based on auto exposure configurations of first and second image sensors. Delta luminance values are determined for each of the image sensors based on the average global luminance value and a luminance variance between the image sensors. The auto exposure configurations of the image sensors are then updated using the delta luminance values, and the updated auto exposure configurations are used to capture images which are then combined to produce the spherical image. In some cases wherein updating the auto exposure configurations using the delta luminance values would breach a threshold representing a target luminosity for the scene, the use of the delta luminance values may be limited or those values may instead be discarded.
    Type: Grant
    Filed: August 14, 2020
    Date of Patent: May 3, 2022
    Assignee: GoPro, Inc.
    Inventors: Guillaume Matthieu Guérin, Sylvain Leroy, Yoël Taïeb, Giuseppe Moschetti
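A rough sketch of the balancing logic the 11323603 abstract walks through: compute the average global luminance of the two sensors, derive per-sensor deltas toward that average, and discard (or limit) the deltas when applying them would drift past the scene's target-luminosity threshold. The EV-style units, the clamp rule, and all names here are assumptions, not GoPro's implementation.

```python
# Hypothetical exposure-balancing step for a two-sensor spherical camera.
import math

def exposure_deltas(lum_front, lum_back, target_lum, max_drift_ev=0.5):
    """Per-sensor luminance deltas (log2 / EV-like units) pulling both toward the average."""
    avg = 0.5 * (lum_front + lum_back)              # average global luminance
    d_front = math.log2(avg / lum_front)
    d_back = math.log2(avg / lum_back)
    drift = abs(math.log2(avg / target_lum))        # how far the average sits from the scene target
    if drift > max_drift_ev:
        return 0.0, 0.0                             # discard: correction would breach the threshold
    return d_front, d_back
```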
  • Patent number: 11320536
    Abstract: Provided are an imaging device and a monitoring device capable of accurately measuring a distance and a shape of a region of an object that is difficult to measure by one distance measuring camera. Provided is an imaging device including a sensor unit configured to irradiate an object with light and detect the light reflected by the object; a distance calculation unit configured to calculate a distance to the object on the basis of sensing data of the sensor unit; a specular reflector located on an opposite side of the sensor unit across the object; and a correction unit configured to correct an error included in the calculated distance, the error being caused by an interference between the light following a first path from the sensor unit toward the object and the light following a second path from the sensor unit, reflected by the specular reflector, and going toward the object.
    Type: Grant
    Filed: April 16, 2018
    Date of Patent: May 3, 2022
    Assignee: Sony Semiconductor Solutions Corporation
    Inventors: Yohei Ogura, Kensei Jo
  • Patent number: 11321875
    Abstract: A long-baseline and long depth-range stereo vision system is provided that is suitable for use in non-rigid assemblies where relative motion between two or more cameras of the system does not degrade estimates of a depth map. The stereo vision system may include a processor that tracks camera parameters as a function of time to rectify images from the cameras even during fast and slow perturbations to camera positions. Factory calibration of the system is not needed, and manual calibration during regular operation is not needed, thus simplifying manufacturing of the system.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: May 3, 2022
    Assignee: NODAR Inc.
    Inventors: Leaf Alden Jiang, Philip Bradley Rosen, Piotr Swierczynski
  • Patent number: 11323664
    Abstract: Embodiments of the present disclosure provide a wearable electronic device for providing audio output and capturing visual media. The wearable electronic device includes a neckband including a pair of arms coupled by a central portion therebetween, and at least one image sensor disposed in the neckband. Further, the wearable electronic device includes a processor operatively coupled to a communication interface, and is configured to at least receive a control command, through an application, for capturing image data of a surrounding environment of the user. The processor is configured to trigger the at least one image sensor to capture the image data of the surrounding environment of the user in real-time. Further, the processor transmits the image data being captured by the at least one image sensor in real-time to the user device of the user, enabling the user to view the visual media of the surrounding environment.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: May 3, 2022
    Assignee: I Can See You Inc., The New Technology
    Inventor: Annette Jackson
  • Patent number: 11320832
    Abstract: A processor-implemented method includes: estimating, from frame images of consecutive frames acquired from one or more sensors, short-term ego-motion information of the one or more sensors; estimating long-term ego-motion information of the one or more sensors from the frame images; determining attention information from the short-term ego-motion information and the long-term ego-motion information; and determining final long-term ego-motion information of a current frame, of the consecutive frames, based on the long-term ego-motion information and the attention information.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: May 3, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Nahyup Kang, Hyun Sung Chang, Kyungboo Jung, Inwoo Ha
  • Patent number: 11308711
    Abstract: Enhanced contrast between an object of interest and background surfaces visible in an image is provided using controlled lighting directed at the object. Exploiting the falloff of light intensity with distance, a light source (or multiple light sources), such as an infrared light source, can be positioned near one or more cameras to shine light onto the object while the camera(s) capture images. The captured images can be analyzed to distinguish object pixels from background pixels.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: April 19, 2022
    Assignee: Ultrahaptics IP Two Limited
    Inventors: David S. Holz, Hua Yang
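The 11308711 abstract relies on a simple physical fact: with the illuminator mounted next to the camera, reflected intensity falls off roughly as 1/r², so pixels on the nearby object come back much brighter than the background. A toy sketch under that assumption (the lit-minus-ambient differencing and the threshold value are illustrative choices, not the claimed method):

```python
# Toy sketch: separate near-object pixels from background by added-light intensity.
import numpy as np

def object_mask(frame_lit, frame_ambient, threshold=30):
    """Boolean mask of pixels judged to belong to the nearby, strongly lit object."""
    added = frame_lit.astype(np.int32) - frame_ambient.astype(np.int32)
    return added > threshold   # 1/r^2 falloff makes distant surfaces contribute little
```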
  • Patent number: 11308705
    Abstract: Acquisition means of a display control device acquires taken images taken at a predetermined frame rate by image taking means, which is movable in a real space. First display control means estimates a current position and orientation of the image taking means based on the taken images, and combines a virtual image with one of the taken images to be displayed, the virtual image showing a view of a virtual three-dimensional object as seen from a virtual viewpoint based on a result of the estimation. Second display control means processes, in a frame subsequent to a frame on which the first display control means has performed display control, the virtual image based on movement information on a movement of the image taking means, and combines the processed virtual image with another one of the taken images to be displayed.
    Type: Grant
    Filed: March 29, 2017
    Date of Patent: April 19, 2022
    Assignee: RAKUTEN GROUP, INC.
    Inventor: Tomoyuki Mukasa
  • Patent number: 11310486
    Abstract: Three dimensional [3D] image data and auxiliary graphical data are combined for rendering on a 3D display (30) by detecting depth values occurring in the 3D image data, and setting auxiliary depth values for the auxiliary graphical data (31) adaptively in dependence of the detected depth values. The 3D image data and the auxiliary graphical data at the auxiliary depth value are combined based on the depth values of the 3D image data. First an area of attention (32) in the 3D image data is detected. A depth pattern for the area of attention is determined, and the auxiliary depth values are set in dependence of the depth pattern.
    Type: Grant
    Filed: August 3, 2016
    Date of Patent: April 19, 2022
    Assignee: Koninklijke Philips N.V.
    Inventors: Philip Steven Newton, Geradus Wilhelmus Theodorus Van Der Heijden, Wiebe De Haan, Johan Cornelis Talstra, Wilhelmus Hendrikus Alfonsus Bruls, Georgios Parlantzas, Marc Helbing, Christian Benien, Vasanth Philomin, Christiaan Varekamp
  • Patent number: 11302142
    Abstract: An electronic gaming machine includes a display, a digital camera device, a credit input mechanism, and a processor programmed to perform operations comprising: (i) receiving, from the digital camera device, a digital image of the player; (ii) determining an emotional state of the player by performing facial expression analysis on the digital image; (iii) determining an emotion level of the player by categorizing the emotional state of the player based on the determined emotional state, the categorizing includes a first state representing a positive emotional level and a second state representing another emotional level; (iv) determining that the emotional level is the other emotional level; and (v) automatically initiating a game session action during the game play session, the game session action is configured to cause the player to transition to the positive emotional level.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: April 12, 2022
    Assignee: Aristocrat Technologies Australia Pty Limited
    Inventor: Gregory Paul Schwartz
  • Patent number: 11297120
    Abstract: Network equipment for establishing a manifest to be provided to a requesting terminal configured to receive a multimedia content divided into segments from a network equipment, each segment being available in one or more representations, said manifest listing available representations for the multimedia content and specifying a plurality of adaptation sets, each adaptation set defining a spatial object of the multimedia content, the spatial objects of the adaptation sets defining a whole spatial object, is described. The network equipment includes at least one memory and at least one processing circuitry configured to define, in the manifest, a type of mapping of the multimedia content to the whole spatial object and a point of reference in one adaptation set of reference amongst the adaptation sets, and associate depth information with each adaptation set.
    Type: Grant
    Filed: May 17, 2017
    Date of Patent: April 5, 2022
    Assignee: INTERDIGITAL MADISON PATENT HOLDINGS, SAS
    Inventors: Mary-Luc Champel, Sebastien Lasserre, Franck Galpin
  • Patent number: 11295473
    Abstract: Techniques related to improved continuous local 3D reconstruction refinement are discussed. Such techniques include constructing and solving per 3D object adjustment models in real time to generate a point cloud and/or updated camera parameters for each object adjustment model.
    Type: Grant
    Filed: December 18, 2019
    Date of Patent: April 5, 2022
    Assignee: Intel Corporation
    Inventor: Elad Tauber
  • Patent number: 11297217
    Abstract: In various embodiments, an electronic device includes a main body unit including a grip area for gripping by a user, a first joint portion formed at or connected to one end of the main body unit, a first sub-body unit connected to the main body unit by the first joint portion, and a communication circuit. The main body unit includes a main PCB including a processor, the first joint portion includes a first motor PCB including a control circuit and a first antenna radiator and being electrically connected to the main PCB, the processor allows the control circuit of the first motor PCB to control the first sub-body unit connected by the first joint portion, and the communication circuit transmits and receives signals to and from an external device via the first antenna radiator of the first motor PCB. Beyond these examples, various other embodiments identified through the specification are possible.
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: April 5, 2022
    Inventors: Chae Up Yoo, Su Min Yun, Woo Suk Kang, Se Woong Kim
  • Patent number: 11287659
    Abstract: Augmented reality systems and methods for automatically repositioning a virtual object with respect to a destination object in a three-dimensional (3D) environment of a user are disclosed. The systems and methods can automatically attach the target virtual object to the destination object and re-orient the target virtual object based on the affordances of the virtual object or the destination object. The systems and methods can also track the movement of a user and detach the virtual object from the destination object when the user's movement passes a threshold condition.
    Type: Grant
    Filed: January 8, 2021
    Date of Patent: March 29, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Paul Armistead Hoover, Jonathan Lawrence Mann
  • Patent number: 11287883
    Abstract: Described are various embodiments of a light field device, pixel rendering method therefor, and vision perception system and method using same. One embodiment describes a method to adjust user perception of an image portion to be rendered via a set of pixels and a corresponding array of light field shaping elements (LFSE), the method comprising: projecting an adjusted image ray trace between a given pixel and a user pupil location to intersect an adjusted image location for a given perceived image depth given a direction of a light field emanated by the given pixel based on a given LFSE intersected thereby; upon the adjusted image ray trace intersecting a given image portion associated with the given perceived image depth, associating with the given pixel an adjusted image portion value designated for the adjusted image location based on the intersection; and rendering for each given pixel the adjusted image portion value associated therewith.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: March 29, 2022
    Assignee: Evolution Optiks Limited
    Inventors: Guillaume Lussier, Raul Mihali, Yaiza Garcia, Matej Goc, Daniel Gotsch
  • Patent number: 11282265
    Abstract: There is provided an image processing apparatus and an image processing method capable of transmitting data of a 3D model in object units. The image processing apparatus includes a 3D model selection unit that selects an object that satisfies a predetermined condition from among objects of a plurality of 3D models and a transmitter that transmits 3D model data of the selected object. The present technology is applied to, for example, an apparatus and the like for transmitting 3D model data of a 3D model via a network.
    Type: Grant
    Filed: June 15, 2018
    Date of Patent: March 22, 2022
    Assignee: SONY CORPORATION
    Inventor: Goh Kobayashi
  • Patent number: 11283970
    Abstract: A method for image processing, an electronic device, and a computer readable storage medium are provided. A first cached image is acquired by capturing a current scene through an imaging apparatus of an electronic device. The method includes: acquiring depth information of the current scene; and acquiring a foreground portion of the first cached image based on the depth information.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: March 22, 2022
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventor: Jianbo Sun
  • Patent number: 11282287
    Abstract: The disclosed subject matter is directed to employing machine learning models configured to predict 3D data from 2D images using deep learning techniques to derive 3D data for the 2D images. In some embodiments, a method is provided that comprises receiving, by a system comprising a processor, a panoramic image, and employing, by the system, a three-dimensional data from two-dimensional data (3D-from-2D) convolutional neural network model to derive three-dimensional data from the panoramic image, wherein the 3D-from-2D convolutional neural network model employs convolutional layers that wrap around the panoramic image as projected on a two-dimensional plane to facilitate deriving the three-dimensional data.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: March 22, 2022
    Assignee: Matterport, Inc.
    Inventor: David Alan Gausebeck
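The distinctive detail in the 11282287 abstract is convolutional layers that "wrap around" the panorama: an equirectangular image is periodic in longitude, so the left and right edges should be padded with pixels from the opposite side before convolving. A minimal numpy illustration of that padding idea (kernel size and the zero padding of the top/bottom rows are assumptions; the patent describes a full CNN, not this single layer):

```python
# Sketch of horizontally wrap-padded 2D convolution for an equirectangular panorama.
import numpy as np
from scipy.signal import convolve2d

def wrap_convolve(panorama, kernel):
    """Convolve so the output has no seam where longitude 0 meets longitude 360."""
    ph, pw = kernel.shape[0] // 2, kernel.shape[1] // 2
    if pw:
        panorama = np.concatenate([panorama[:, -pw:], panorama, panorama[:, :pw]], axis=1)
    padded = np.pad(panorama, ((ph, ph), (0, 0)))        # ordinary zero pad at the poles
    return convolve2d(padded, kernel, mode="valid")      # same height/width as the input
```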
  • Patent number: 11274922
    Abstract: The present disclosure provides a method and an apparatus for binocular ranging, capable of achieving an improved accuracy of binocular ranging.
    Type: Grant
    Filed: March 1, 2019
    Date of Patent: March 15, 2022
    Assignee: BEIJING TUSEN ZHITU TECHNOLOGY CO., LTD.
    Inventors: Naiyan Wang, Yuanqin Lu
  • Patent number: 11270110
    Abstract: A computer-implemented method for surface modeling includes: receiving one or more polarization raw frames of a surface of a physical object, the polarization raw frames being captured with a polarizing filter at different linear polarization angles; extracting one or more first tensors in one or more polarization representation spaces from the polarization raw frames; and detecting a surface characteristic of the surface of the physical object based on the one or more first tensors in the one or more polarization representation spaces.
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: March 8, 2022
    Assignee: BOSTON POLARIMETRICS, INC.
    Inventors: Achuta Kadambi, Agastya Kalra, Supreeth Krishna Rao, Kartik Venkataraman
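The 11270110 abstract leaves the "polarization representation spaces" unspecified; a very common choice (assumed here, not confirmed by the abstract) is the degree and angle of linear polarization computed from raw frames captured at 0°, 45°, 90° and 135° filter angles.

```python
# Assumed representation: DoLP / AoLP maps from four linear-polarizer intensity images.
import numpy as np

def polarization_tensors(i0, i45, i90, i135):
    """Return (dolp, aolp) per-pixel maps from intensities at 0/45/90/135 degrees."""
    s0 = i0 + i90                        # total intensity (Stokes S0)
    s1 = i0 - i90                        # Stokes S1
    s2 = i45 - i135                      # Stokes S2
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)   # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)                        # angle of linear polarization, radians
    return dolp, aolp
```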
  • Patent number: 11272112
    Abstract: An image capturing apparatus comprises: a first image sensor having a plurality of pixels each of which counts a number of entering photons and outputs a count value as a first image signal; a second image sensor having a plurality of pixels each of which outputs an electric signal corresponding to a charge amount obtained by performing photoelectric conversion on entering light as a second image signal; and a generator that generates an image by selecting one of the first image signal and the second image signal.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: March 8, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Naoto Ogushi
  • Patent number: 11272162
    Abstract: Systems and methods for reducing or eliminating undesired effects of retro-reflections in imaging are disclosed. A system for reducing the undesired effects of retro-reflections may include an illuminator and an optical receiver. The illuminator is configured to emit an illumination signal for illuminating a scene. The optical receiver is configured to receive returned portions of the illumination signal scattered or reflected from the scene. Return signals from retroreflectors present in the scene may oversaturate or otherwise negatively affect sensors in the optical receiver. To limit return signals from retroreflectors that may be present in the scene, the illuminator and optical receiver are physically separated from each other by an offset distance that limits or prevents retro-reflections from the retroreflectors from being received by the optical receiver.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: March 8, 2022
    Assignee: NLIGHT, INC.
    Inventors: Bodo Schmidt, Steve Herman
  • Patent number: 11258963
    Abstract: An aspect of the present disclosure is an image creation device installed in a vehicle. The image creation device includes an image capture section and a bird's-eye view image creation section. The image capture section acquires a captured image within a peripheral range around the vehicle. The bird's-eye view image creation section creates a bird's-eye view image. The bird's-eye view image creation section includes a contour extraction section, a region discrimination section, a first creation section, a second creation section, and an image combining section. The contour extraction section extracts the contour shape of an object in the captured image. The region discrimination section judges whether the contour region is a three-dimensional object region or a road surface region. The first creation section creates a stereoscopic image, and the second creation section creates a planar view image. The image combining section combines the stereoscopic image and the planar view image.
    Type: Grant
    Filed: September 3, 2020
    Date of Patent: February 22, 2022
    Assignee: DENSO CORPORATION
    Inventors: Yoshinori Ozaki, Hirohiko Yanagawa
  • Patent number: 11252306
    Abstract: An electronic device is provided. The electronic device includes a first camera, a second camera spaced apart from the first camera, and a processor. The processor is configured to obtain a first image of external objects using the first camera, obtain a second image of the external objects using the second camera, identify a specified object in which pieces of depth information are generated from among the external objects included in the first image and the second image based on phase difference comparison between the first image and the second image, then select depth information about the specified object among the pieces of depth information based on a degree of spreading of a point corresponding to the specified object included in at least one of the first image and the second image, and generate depth information about the external objects including the specified object using the selected depth information.
    Type: Grant
    Filed: April 17, 2019
    Date of Patent: February 15, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kyungdong Yang, Jaehyoung Park, Yungmok Yu, Dongbum Choi, Jonghun Won
  • Patent number: 11250641
    Abstract: Mating virtual objects in a virtual reality environment involves generating a bounding box having a plurality of faces corresponding to a plurality of exterior surfaces of a subject virtual object. A spatial mesh corresponding to surfaces of the real world environment is generated. A magnetic mate is generated to initially align a bounding box first face to a first spatial mesh surface. A shadow mate is provided between a bounding box second face and a second spatial mesh surface, by projecting a virtual ray from the subject virtual object bounding box second face toward the second spatial mesh surface, determining a mate point corresponding to an intersection of the virtual ray and the second spatial mesh surface, and displaying a mating button in the virtual reality environment at the mate point.
    Type: Grant
    Filed: February 5, 2020
    Date of Patent: February 15, 2022
    Assignee: Dassault Systemes SolidWorks Corporation
    Inventors: Yun Li, Yaqin Huang, Eric Hasan
  • Patent number: 11244978
    Abstract: A photoelectric conversion apparatus includes a semiconductor layer in which first photoelectric converters are arranged in a light-receiving region and second photoelectric converters are arranged in a light-shielded region, a light-shielding wall arranged above the semiconductor layer and defining apertures respectively corresponding to the first photoelectric converters, and a light-shielding film arranged above the semiconductor layer. The light-shielding film includes a first portion extending along a principal surface of the semiconductor layer to cover the second photoelectric converters. The first portion has a lower surface and an upper surface. The light-shielding wall includes a second portion whose distance from the semiconductor layer is larger than the distance between the upper surface and the principal surface. The thickness of the first portion in a direction perpendicular to the principal surface is larger than the thickness of the second portion in a direction parallel to the principal surface.
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: February 8, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventors: Toshiyuki Ogawa, Sho Suzuki, Takehito Okabe, Mitsuhiro Yomori, Yukinobu Suzuki, Akihiro Kawano, Tsutomu Tange
  • Patent number: 11238610
    Abstract: Systems and methods are described for placing large objects and objects separated by large distances in an AR environment. An AR headset system may place and generate digital objects using relative geographical coordinates (e.g., latitude, longitude, and altitude) between the user's current position and the object being placed. In one implementation, a digital object's geographical coordinates may be calculated by determining a user's geographical coordinates, using a distance determination device to measure a distance to a boundary in the user's real-world environment within an AR headset's field of view, and calculating an orientation of the AR headset relative to the user's position.
    Type: Grant
    Filed: August 10, 2016
    Date of Patent: February 1, 2022
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Mehul Patel, James Voris
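The placement scheme in the 11238610 abstract boils down to coordinate bookkeeping: store an object's absolute latitude/longitude/altitude and, at render time, convert the delta from the user's current position into local metres. A small sketch using a flat-earth (equirectangular) approximation, which is an assumption that holds well over headset-scale distances and is not necessarily the patented conversion:

```python
# Sketch: geodetic delta -> local east/north/up offset in metres.
import math

EARTH_RADIUS_M = 6_371_000.0

def enu_offset(user_llh, object_llh):
    """(east, north, up) metres from user to object; each argument is (lat_deg, lon_deg, alt_m)."""
    lat0 = math.radians(user_llh[0])
    north = math.radians(object_llh[0] - user_llh[0]) * EARTH_RADIUS_M
    east = math.radians(object_llh[1] - user_llh[1]) * EARTH_RADIUS_M * math.cos(lat0)
    up = object_llh[2] - user_llh[2]
    return east, north, up
```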
  • Patent number: 11238616
    Abstract: In one implementation, a device has a processor, a projector, a first infrared (IR) sensor, a second IR sensor, and instructions stored on a computer-readable medium that are executed by the processor to estimate the sensor-to-sensor extrinsic parameters. The projector projects IR pattern elements onto an environment surface. The first IR sensor captures a first image including first IR pattern elements corresponding to the projected IR pattern elements, and the device estimates 3D positions for the first IR pattern elements. The second IR sensor captures a second image including second IR pattern elements corresponding to the projected IR pattern elements, and the device matches the first IR pattern elements and the second IR pattern elements. Based on this matching, the device estimates a second extrinsic parameter corresponding to a spatial relationship between the first IR sensor and the second IR sensor.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: February 1, 2022
    Assignee: Apple Inc.
    Inventors: Rohit Sethi, Lejing Wang, Jonathan Pokrass
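The 11238616 abstract stops at "based on this matching, the device estimates a second extrinsic parameter". One conventional way to perform that step, once the first sensor's pattern elements have 3D estimates and are matched to 2D detections in the second IR sensor, is a perspective-n-point solve; the OpenCV-based sketch below is an assumed illustration, not the method claimed in the patent.

```python
# Assumed PnP-based extrinsic estimation between two IR sensors.
import numpy as np
import cv2

def estimate_extrinsics(points3d_sensor1, points2d_sensor2, K2):
    """Return (R, t) mapping sensor-1 coordinates into the second sensor's frame."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(points3d_sensor1, dtype=np.float64),
        np.asarray(points2d_sensor2, dtype=np.float64),
        np.asarray(K2, dtype=np.float64),
        None)                               # no lens distortion assumed
    if not ok:
        raise RuntimeError("extrinsic estimation failed")
    R, _ = cv2.Rodrigues(rvec)              # rotation vector -> 3x3 rotation matrix
    return R, tvec.ravel()
```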
  • Patent number: 11240446
    Abstract: In order to be able to appropriately control whether to generate a wide-angle image in accordance with control of an imaging range of the plurality of imaging units, an imaging device includes: a first imaging unit and a second imaging unit, each of which is movable in a predetermined direction; a combination processing unit configured to combine a first image obtained by the first imaging unit and a second image obtained by the second imaging unit to generate a wide-angle image; a determination unit configured to determine whether or not to generate the wide-angle image by the combination processing unit, based on a relation between an imaging range of the first imaging unit and an imaging range of the second imaging unit, or a position relation between the first imaging unit and the second imaging unit in the predetermined direction.
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: February 1, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Aihiko Numata
  • Patent number: 11237640
    Abstract: Embodiments of the subject matter described herein provide a wearable device enabling multi-finger gestures. The wearable device generally includes a first sensor, a second sensor and a controller. The first sensor can detect a first set of one or more movements of a first finger of a user. The second sensor can detect a second set of one or more movements of a second finger that is different from the first finger. The controller is configured to detect a multi-finger gesture by determining a relative movement between the first finger and the second finger based on the first and second sets of movements, and to control a terminal device in association with the wearable device based on the multi-finger gesture.
    Type: Grant
    Filed: June 9, 2017
    Date of Patent: February 1, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Haichao Zhu, Masaaki Fukumoto
  • Patent number: 11232315
    Abstract: An image depth determining method and a living body identification method, a circuit, a device, and a medium are provided. The image depth determining method includes obtaining pixel coordinates of a feature point pair associated with an object point on a subject, determining a first straight line passing through the origin of a first camera coordinate system of a first camera based on first pixel coordinates and intrinsic parameters of the first camera, determining a second straight line passing through the origin of a second camera coordinate system of a second camera based on second pixel coordinates and intrinsic parameters of the second camera, and determining the depth of the object point based on the first straight line, the second straight line, and extrinsic parameters describing a relative position relationship between the first camera and the second camera.
    Type: Grant
    Filed: April 23, 2021
    Date of Patent: January 25, 2022
    Assignee: NEXTVPU (SHANGHAI) CO., LTD.
    Inventors: Shu Fang, Ji Zhou, Xinpeng Feng
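The geometry in the 11232315 abstract, a ray through each camera's origin from its pixel coordinates and intrinsics, then depth from the two rays plus the extrinsics, is standard two-view triangulation. A compact sketch, assuming pinhole intrinsics K1/K2 and extrinsics (R, t) that map camera-2 coordinates into camera-1 coordinates; the midpoint-of-closest-approach rule is one common choice, not necessarily the patented one.

```python
# Sketch: depth of a feature-point pair via the midpoint of closest approach of two rays.
import numpy as np

def pixel_ray(K, uv):
    """Unit direction of the ray through the camera origin for pixel (u, v)."""
    d = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    return d / np.linalg.norm(d)

def depth_from_pair(K1, K2, R, t, uv1, uv2):
    """Z-depth in camera 1 of the object point seen at uv1 (camera 1) and uv2 (camera 2)."""
    d1 = pixel_ray(K1, uv1)                   # ray 1: origin (0,0,0), direction d1
    d2 = R @ pixel_ray(K2, uv2)               # ray 2: origin t, direction R @ d2 (camera-1 frame)
    w0 = -t                                   # vector from ray-2 origin to ray-1 origin
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                     # ~0 only for (near-)parallel rays
    s1 = (b * e - c * d) / denom              # parameter along ray 1
    s2 = (a * e - b * d) / denom              # parameter along ray 2
    midpoint = 0.5 * (s1 * d1 + t + s2 * d2)
    return midpoint[2]
```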
  • Patent number: 11232635
    Abstract: Augmented reality systems and methods for creating, saving and rendering designs comprising multiple items of virtual content in a three-dimensional (3D) environment of a user. The designs may be saved as a scene, which is built by a user from pre-built sub-components, built components, and/or previously saved scenes. Location information, expressed as a saved scene anchor and position relative to the saved scene anchor for each item of virtual content, may also be saved. Upon opening the scene, the saved scene anchor node may be correlated to a location within the mixed reality environment of the user for whom the scene is opened. The virtual items of the scene may be positioned with the same relationship to that location as they have to the saved scene anchor node. That location may be selected automatically and/or by user input.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: January 25, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Jonathan Brodsky, Javier Antonio Busto, Martin Wilkins Smith
  • Patent number: 11226475
    Abstract: The disclosure provides for structured illumination microscopy (SIM) imaging systems. In one set of implementations, a SIM imaging system may be implemented as a multi-arm SIM imaging system, whereby each arm of the system includes a light emitter and a beam splitter (e.g., a transmissive diffraction grating) having a specific, fixed orientation with respect to the system's optical axis. In a second set of implementations, a SIM imaging system may be implemented as a multiple beam splitter slide SIM imaging system, where one linear motion stage is mounted with multiple beam splitters having a corresponding, fixed orientation with respect to the system's optical axis. In a third set of implementations, a SIM imaging system may be implemented as a pattern angle spatial selection SIM imaging system, whereby a fixed two-dimensional diffraction grating is used in combination with a spatial filter wheel to project one-dimensional fringe patterns on a sample.
    Type: Grant
    Filed: January 14, 2019
    Date of Patent: January 18, 2022
    Assignees: ILLUMINA, INC., ILLUMINA CAMBRIDGE LIMITED
    Inventors: Peter Clarke Newman, Danilo Condello, Shaoping Lu, Simon Prince, Merek C. Siu, Stanley S. Hong, Aaron Liu, Gary Mark Skinner, Geraint Wyn Evans
  • Patent number: 11222219
    Abstract: Disclosed embodiments pertain to a method for determining position information of a target vehicle relative to an ego vehicle. The method may comprise: obtaining, by at least one image sensor, first images of one or more target vehicles and classifying at least one target vehicle from the one or more target vehicles based on the one or more first images. Further, vehicle characteristics corresponding to the at least one target vehicle may be obtained based on the classification of the at least one target vehicle. Position information of the at least one target vehicle relative to the ego vehicle may be determined based on the vehicle characteristics.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: January 11, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Benjamin Lund, Anthony Blow, Edwin Chongwoo Park
  • Patent number: 11215827
    Abstract: Systems and methods for projecting each of a chronology of images as a sequence of images using a shifting element as part of a near-eye display system are provided for use in virtual reality, augmented reality, or mixed reality systems. In some example embodiments, a chronology of images is received by a peripheral sequencing system. The system divides each image into image portions and generates sequences of image portions to recreate the images based on arrangement data. The system then causes a high-speed display of each sequence of images such that they appear simultaneous to a viewer. In some embodiments, the projection is transmitted to a shifting optical element such as a rotating micromirror that propagates a display to a user. In some embodiments, the system further detects and corrects for image and environmental distortions.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: January 4, 2022
    Assignee: Snaps Inc.
    Inventor: Zhibin Zhang
  • Patent number: 11217031
    Abstract: An electronic device and its operating method according to various embodiments may be configured to provide first content received from an external electronic device to a user wearing the electronic device on his/her face using a display of the electronic device, identify a movement of an external object in a visible direction of the user, either see-through or using the display which provides the first content, and provide second content to the user through at least part of the display so as to display the second content related to the first content in relation to the external object according to the movement.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: January 4, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Thae Geol Lee, Hyun Soo Kwak, Yeon Hee Rho, Ki Huk Lee, Sung Hyo Jeong, Cheol Ho Cheong
  • Patent number: 11215700
    Abstract: A method and system for real-time motion artifact handling and noise removal for time-of-flight (ToF) sensor images. The method includes: calculating values of a cross correlation function c(τ) at a plurality of temporally spaced positions or phases from sent (s(t)) and received (r(t)) signals, thereby deriving a plurality of respective cross correlation values [c(τ0), c(τ1), c(τ2), c(τ3)]; deriving, from the plurality of cross correlation values [c(τ0), c(τ1), c(τ2), c(τ3)], a depth map D having values representing, for each pixel, distance to a portion of an object upon which the sent signals (s(t)) are incident; deriving, from the plurality of cross correlation values [c(τ0), c(τ1), c(τ2), c(τ3)], a guidance image (I; I′); and generating an output image D′ based on the depth map D and the guidance image (I; I′), the output image D′ comprising an edge-preserving and smoothed version of depth map D, the edge-preserving being from guidance image (I; I′).
    Type: Grant
    Filed: March 29, 2016
    Date of Patent: January 4, 2022
    Assignee: IEE INTERNATIONAL ELECTRONICS & ENGINEERING S.A.
    Inventors: Cedric Schockaert, Frederic Garcia Becerro, Bruno Mirbach
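For context on the 11215700 abstract, the usual continuous-wave ToF pipeline samples the correlation c(τ) at four phases a quarter-period apart and recovers phase (hence depth) and amplitude from them; the amplitude image is a natural candidate for the guidance image I, and a guided filter then yields the edge-preserving, smoothed output D′. The sketch below uses those standard formulas plus the classic guided filter of He et al.; the modulation frequency, filter radius, and the choice of amplitude as guidance are assumptions, not details taken from the patent.

```python
# Sketch: 4-phase ToF depth, amplitude as guidance image, guided-filter smoothing.
import numpy as np
from scipy.ndimage import uniform_filter

C_LIGHT = 299_792_458.0     # speed of light, m/s
F_MOD = 20e6                # assumed modulation frequency, Hz

def tof_depth(c0, c1, c2, c3):
    """Depth map D from correlation samples c(tau0..tau3) taken 90 degrees apart."""
    phase = np.arctan2(c3 - c1, c0 - c2) % (2 * np.pi)
    return C_LIGHT * phase / (4 * np.pi * F_MOD)

def tof_amplitude(c0, c1, c2, c3):
    """Correlation amplitude, used here as the guidance image I."""
    return 0.5 * np.hypot(c3 - c1, c0 - c2)

def guided_filter(I, D, radius=4, eps=1e-3):
    """Edge-preserving smoothing of D guided by I (He et al., 2010, box-filter form)."""
    size = 2 * radius + 1
    mean_I, mean_D = uniform_filter(I, size), uniform_filter(D, size)
    cov_ID = uniform_filter(I * D, size) - mean_I * mean_D
    var_I = uniform_filter(I * I, size) - mean_I ** 2
    a = cov_ID / (var_I + eps)
    b = mean_D - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)   # output image D'
```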
  • Patent number: 11210521
    Abstract: An information processing apparatus includes a storing section configured to store scenario information and device information associated with the scenario information. The scenario information includes information configured such that another information processing apparatus executes presentation or reception of predetermined information when the other information processing apparatus determines that a predetermined condition is satisfied. The device information includes information representing specifications of hardware of the other information processing apparatus required when the other information processing apparatus executes at least one of the determination that the predetermined condition is satisfied and the presentation or the reception of the predetermined information.
    Type: Grant
    Filed: December 3, 2020
    Date of Patent: December 28, 2021
    Assignee: SEIKO EPSON CORPORATION
    Inventors: Masashi Aonuma, Kaoru Yamaguchi
  • Patent number: 11212464
    Abstract: A method of generating at least one image of a real environment comprises providing at least one environment property related to at least part of the real environment, providing at least one virtual object property related to a virtual object, determining at least one imaging parameter according to the at least one provided virtual object property and the at least one provided environment property, and generating at least one image of the real environment representing information about light leaving the real environment according to the determined at least one imaging parameter, wherein the light leaving the real environment is measured by at least one camera.
    Type: Grant
    Filed: April 1, 2019
    Date of Patent: December 28, 2021
    Assignee: Apple Inc.
    Inventors: Sebastian Knorr, Daniel Kurz