Picture Signal Generator Patents (Class 348/46)
  • Patent number: 10332259
    Abstract: An image processing apparatus inputs image data representing a plurality of images from mutually different viewpoints, estimates first information indicating a disparity between the plurality of images by comparing image regions each having a first size between the plurality of images, and identifies image regions each having a second size different from the first size, between the plurality of images, based on the first information estimated. The estimation estimates second information indicating the magnitude of a disparity between the plurality of images in the identified image regions by comparing the image regions each having the second size between the plurality of images.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: June 25, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventors: Tomohiro Nishiyama, Koichi Fukuda
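The two-stage disparity estimation described in the abstract of patent 10332259 can be illustrated with a minimal sketch: coarse block matching with a first window size guides a refined search with a second window size. This is an illustrative reconstruction, not Canon's implementation; the window sizes, search range, and SAD cost are assumptions.

```python
import numpy as np

def block_disparity(left, right, block, max_disp, coarse=None):
    """Brute-force SAD block matching; optionally restrict the search
    around a coarse disparity estimate (illustrative only)."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.float32)
    r = block // 2
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            # Full search range for the coarse pass; +/-2 px around the
            # coarse estimate for the refinement pass.
            if coarse is None:
                candidates = range(0, max_disp + 1)
            else:
                c = int(coarse[y, x])
                candidates = range(max(0, c - 2), min(max_disp, c + 2) + 1)
            best, best_cost = 0, np.inf
            for d in candidates:
                if x - r - d < 0:
                    break
                tgt = right[y - r:y + r + 1, x - r - d:x + r + 1 - d]
                cost = np.abs(ref.astype(np.float32) - tgt).sum()
                if cost < best_cost:
                    best, best_cost = d, cost
            disp[y, x] = best
    return disp

# First pass with image regions of a first (large) size, second pass with a
# second (small) size refined only around the coarse result:
# coarse = block_disparity(left, right, block=15, max_disp=64)
# fine   = block_disparity(left, right, block=5,  max_disp=64, coarse=coarse)
```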
  • Patent number: 10334190
    Abstract: In illustrative implementations, a set of separate modulation signals simultaneously modulates a plurality of pixels in a superpixel, such that each pixel in the superpixel is modulated by a modulation signal that causes sensitivity of the pixel to vary over time. Each superpixel comprises multiple pixels. In some implementations, the sensitivity of a pixel to incident light is controlled by storage modulation or by light modulation. In some implementations, this invention is used for 3D scanning, i.e., for detection of the 3D position of points in a scene.
    Type: Grant
    Filed: January 15, 2018
    Date of Patent: June 25, 2019
    Assignee: Photoneo, s.r.o.
    Inventors: Tomas Kovacovsky, Michal Maly, Jan Zizka
  • Patent number: 10334232
    Abstract: A depth-sensing device and its method are provided. The depth-sensing device includes a projection device, an image capture device, and an image processing device. The projection device projects a first projection pattern to a field at a first time and projects a second projection pattern to the same field at a second time. The density of the first projection pattern is lower than the density of the second projection pattern. The image capture device captures the first projection pattern projected to the field at the first time to obtain a first image and captures the second projection pattern projected to the field at the second time to obtain a second image. The image processing device processes the first and second images to obtain two depth maps and at least merges the depth maps to generate a final depth map of the field.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: June 25, 2019
    Assignee: HIMAX TECHNOLOGIES LIMITED
    Inventors: Chin-Jung Tsai, Yi-Nung Liu
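As described for patent 10334232, two depth maps obtained from a sparse and a dense projection pattern are merged into a final depth map. Below is a minimal sketch of one plausible merge rule; preferring valid dense-pattern depths and falling back to sparse-pattern depths is an assumption, not Himax's disclosed rule.

```python
import numpy as np

def merge_depth_maps(depth_sparse, depth_dense, invalid=0.0):
    """Merge two depth maps pixel-wise: keep the dense-pattern depth where
    it is valid, otherwise fall back to the sparse-pattern depth
    (illustrative merge rule)."""
    depth_sparse = np.asarray(depth_sparse, dtype=np.float32)
    depth_dense = np.asarray(depth_dense, dtype=np.float32)
    return np.where(depth_dense != invalid, depth_dense, depth_sparse)
```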
  • Patent number: 10321255
    Abstract: A speaker location identifying system includes a camera which acquires a photographed image. The speaker location identifying system recognizes an image of a speaker included in the photographed image, specifies a position of the speaker based on the position and size of the recognized speaker in the photographed image, and decides a parameter of an audio signal outputted to the speaker based on the specified position of the speaker.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: June 11, 2019
    Assignee: Yamaha Corporation
    Inventor: Hideaki Shimada
  • Patent number: 10321126
    Abstract: Systems and methods for capturing a two dimensional (2D) image of a portion of a three dimensional (3D) scene may include a computer rendering a 3D scene on a display from a user's point of view (POV). A camera mode may be activated in response to user input and a POV of a camera may be determined. The POV of the camera may be specified by position and orientation of a user input device coupled to the computer, and may be independent of the user's POV. A 2D frame of the 3D scene based on the POV of the camera may be determined and the 2D image based on the 2D frame may be captured in response to user input. The 2D image may be stored locally or on a server of a network.
    Type: Grant
    Filed: July 7, 2015
    Date of Patent: June 11, 2019
    Assignee: zSpace, Inc.
    Inventors: Jonathan J. Hosenpud, Arthur L. Berman, Jerome C. Tu, Kevin D. Morishige, David A. Chavez
  • Patent number: 10311378
    Abstract: A depth detection apparatus is described which has a memory storing raw time-of-flight sensor data received from a time-of-flight sensor. The depth detection apparatus also has a trained machine learning component having been trained using training data pairs. A training data pair comprises at least one simulated raw time-of-flight sensor data value and a corresponding simulated ground truth depth value. The trained machine learning component is configured to compute in a single stage, for an item of the stored raw time-of-flight sensor data, a depth value of a surface depicted by the item, by pushing the item through the trained machine learning component.
    Type: Grant
    Filed: August 8, 2017
    Date of Patent: June 4, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sebastian Nowozin, Amit Adam, Shai Mazor, Omer Yair
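Patent 10311378 describes pushing raw time-of-flight measurements through a machine learning component, trained on simulated (raw, depth) pairs, to obtain depth in a single stage. The sketch below shows a per-pixel regression with a small fully connected network; the architecture, feature dimension, and weights are placeholders, not Microsoft's trained model.

```python
import numpy as np

def predict_depth(raw_frames, w1, b1, w2, b2):
    """Single-stage depth prediction: each pixel's vector of raw ToF
    measurements (one value per exposure) is pushed through a small
    trained MLP to obtain a depth value (weights are hypothetical)."""
    n_frames, h, w = raw_frames.shape
    x = raw_frames.reshape(n_frames, -1).T          # (pixels, n_frames)
    hidden = np.maximum(0.0, x @ w1 + b1)           # ReLU hidden layer
    depth = hidden @ w2 + b2                        # linear output, metres
    return depth.reshape(h, w)

# Example with random placeholder weights for a 4-measurement ToF pixel:
# rng = np.random.default_rng(0)
# raw = rng.random((4, 240, 320)).astype(np.float32)
# w1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
# w2, b2 = rng.normal(size=(16, 1)), np.zeros(1)
# depth_map = predict_depth(raw, w1, b1, w2, b2)
```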
  • Patent number: 10313655
    Abstract: An image capture device is provided that can perform multiple image captures using multiple image capture units and can measure the distance between each of the image capture units and a target more accurately. The image capture device according to the present invention comprises: one light emission unit for distance measurement that emits a reference beam; and the multiple image capture units, which capture images of the reflected reference beam with a common image capture timing.
    Type: Grant
    Filed: October 26, 2017
    Date of Patent: June 4, 2019
    Assignee: FANUC CORPORATION
    Inventors: Minoru Nakamura, Yuuki Takahashi, Atsushi Watanabe
  • Patent number: 10310598
    Abstract: A head-mounted display (HMD) includes an electronic display configured to emit image light, an optical assembly that provides optical correction to the image light, an eye tracking system, and a varifocal module. The optical assembly includes a back optical element configured to receive the image light from the electronic display, and a coupling assembly configured to couple a front optical element to a location within the optical assembly such that the front optical element receives light transmitted by the back optical element. The optical correction is determined in part by an optical characteristic of the front optical element that is replaceable. The eye tracking system determines eye tracking information for a first eye of a user of the HMD. A varifocal module adjusts focus of images displayed on the electronic display, based on the eye tracking information and the optical correction.
    Type: Grant
    Filed: January 17, 2017
    Date of Patent: June 4, 2019
    Assignee: Facebook Technologies, LLC
    Inventors: Nicholas Daniel Trail, Douglas Robert Lanman
  • Patent number: 10306209
    Abstract: An apparatus is described that includes a camera system having a time-of-flight illuminator. The time-of-flight illuminator has a light source and one or more tiltable mirror elements. The one or more tiltable mirror elements are to direct the illuminator's light to only a region within the illuminator's field of view.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: May 28, 2019
    Assignee: Google LLC
    Inventors: Jamyuen Ko, Chung Chun Wan
  • Patent number: 10298835
    Abstract: The present application discloses various imaging control methods and apparatuses, and various imaging devices. One of the imaging control methods includes: determining, at least according to depth information of an object to be photographed, target in-focus depth positions respectively corresponding to at least two imaging sub-areas of the image sensor; controlling deformation of the image sensor so as to make depth positions of reference points of the at least two imaging sub-areas after the deformation approach or coincide with the corresponding target in-focus depth positions; and acquiring, based on the image sensor after the deformation, an image of the object. Technical solutions provided by the present application may improve quality of imaging.
    Type: Grant
    Filed: March 4, 2016
    Date of Patent: May 21, 2019
    Assignee: BEIJING ZHIGU RUI TUO TECH CO., LTD.
    Inventor: Lin Du
  • Patent number: 10298905
    Abstract: An apparatus receives an image and an associated depth map comprising input depth values. A contour detector (405) detects contours in the image. A model processor (407) generates a contour depth model for a contour by fitting a depth model to input depth values for the contour. A depth value determiner (409) determines a depth model depth value for at least one pixel of the contour from the contour depth model. The depth model may e.g. correspond to a single depth value which e.g. may be set to a maximum input depth value. A modifier (411) generates a modified depth map from the associated depth map by modifying depth values of the associated depth map. This includes generating a modified depth value for a pixel in the modified depth map in response to the depth model depth value. The depth model depth value may e.g. replace the input depth value for the pixel.
    Type: Grant
    Filed: June 15, 2016
    Date of Patent: May 21, 2019
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Christiaan Varekamp, Patrick Luc Els Vandewalle
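For patent 10298905, pixels on a detected contour receive a depth value from a fitted contour depth model; in the simplest case mentioned in the abstract, the model is a single value such as the maximum input depth along the contour. A minimal numpy sketch under that single-value assumption:

```python
import numpy as np

def modify_depth_along_contour(depth_map, contour_mask):
    """Replace the input depth of every contour pixel with a single-value
    contour depth model: the maximum input depth found along that contour
    (the simplest model mentioned in the abstract)."""
    modified = depth_map.astype(np.float32).copy()
    if contour_mask.any():
        model_depth = depth_map[contour_mask].max()   # "fit": constant model
        modified[contour_mask] = model_depth          # apply the model value
    return modified
```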
  • Patent number: 10287116
    Abstract: A method is provided for loading a motor vehicle using an optimized loading strategy. The optimized loading strategy is determined with the help of a computer. The method includes the following acts: acquiring characteristics of a plurality of objects to be transported, using a recording device, for example a camera and/or a scanner and/or using a reader device such as an RFID receiver; using the acquired characteristic to determine the dimensions, particularly a height, width and depth of each of the objects; establishing the optimized loading strategy for the motor vehicle as a function of the dimensions of the objects that have been determined and depending on the type of motor vehicle being loaded; and visualizing the loading strategy.
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: May 14, 2019
    Assignee: Bayerische Motoren Werke Aktiengesellschaft
    Inventors: Nico Daun, Atanas Gegov, Daniel Liebau, Claudia Langner
  • Patent number: 10290194
    Abstract: Systems and techniques for an occupancy sensor are described herein. Images from a camera can be received. Here, the camera has a certain field of view. A proximity indicator from a proximity detector can be received when an object enters the field of view. The images from the camera are processed to provide an occupancy indication. A first technique is used for the occupancy indication in the absence of the proximity indicator and a second technique is used otherwise.
    Type: Grant
    Filed: February 29, 2016
    Date of Patent: May 14, 2019
    Assignee: Analog Devices Global
    Inventor: Akshayakumar Haribhatt
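The occupancy logic of patent 10290194 switches between two image-processing techniques depending on whether a proximity indicator is present. A minimal control-flow sketch follows; the two stand-in techniques here (frame differencing and a brightness-change check) and the thresholds are illustration-only assumptions.

```python
import numpy as np

def occupancy_indication(frame, prev_frame, proximity_indicator,
                         motion_thresh=12.0, brightness_thresh=40.0):
    """Return True if the space appears occupied. Without a proximity
    indicator, use a motion-based technique; with one, use a cheaper
    brightness-change technique (both are illustrative stand-ins)."""
    frame = frame.astype(np.float32)
    prev_frame = prev_frame.astype(np.float32)
    if not proximity_indicator:
        # First technique: mean absolute frame difference as a motion cue.
        return np.abs(frame - prev_frame).mean() > motion_thresh
    # Second technique: an object just entered the field of view, so use a
    # coarse brightness-change check near the image centre instead.
    h, w = frame.shape[:2]
    centre = frame[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    prev_centre = prev_frame[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    return abs(centre.mean() - prev_centre.mean()) > brightness_thresh
```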
  • Patent number: 10290115
    Abstract: The invention relates to a device for determining the volume of an object moved by an industrial truck and to a corresponding method. The device comprises a first depth image sensor (3) and a second depth image sensor (4), which are arranged in such a way that the object (1) can be sensed from two different directions as the object passes through the passage region (5), wherein the device (10) is designed to produce a sequence of individual images in a first resolution by means of each of the depth image sensors (3, 4), whereby the industrial truck (2) and the object (1) are sensed from different viewing angles as the industrial truck and the object pass through the passage region (5).
    Type: Grant
    Filed: July 15, 2015
    Date of Patent: May 14, 2019
    Assignee: CARGOMETER GmbH
    Inventor: Michael Baumgartner
  • Patent number: 10288882
    Abstract: Provided is a head mounted display (HMD) device that lets a user organize added information and thereby makes it possible to improve the ease with which the information can be perceived. A head mounted display device enables a user to visually recognize a virtual image overlaid on a scene. A control unit virtually sets a plurality of display zones, having different depth-direction positions, in front of the user; identifies a display zone being gazed at by the user, on the basis of gaze position information from a gaze position detection unit; drives a focus position adjusting unit so as to align the depth position of the identified display zone with the focus position of a virtual image of the displayed image; acquires, from a cloud server via a communication unit, image information indicating information associated with the identified display zone; and causes a display to display the display image corresponding to the acquired image information.
    Type: Grant
    Filed: March 7, 2016
    Date of Patent: May 14, 2019
    Assignee: NIPPON SEIKI CO., LTD.
    Inventors: Teruko Ishikawa, Ikuyo Sasajima, Yuki Takahashi, Tadashige Makino
  • Patent number: 10284752
    Abstract: A method is provided for determining a start offset between a video recording device and an inertial measurement unit (IMU) for use in synchronizing motion data of an object collected by the IMU attached to the object with video frames captured by an image sensor of the video recording device of the object in motion. The start offset is then used to synchronize subsequently captured video frames to subsequently collected IMU motion data.
    Type: Grant
    Filed: May 22, 2018
    Date of Patent: May 7, 2019
    Assignee: BioForce Analytics LLC
    Inventors: Eric L. Canfield, Connor D. Cozad, Scott J. Soma, Robert P. Alston, Andrew D. McEntee, Robert P. Warner, Brandon T. Fanti, Vineeth Voruganti, Daniel J. Gao, Aron Sun, Joseph H. Cottingham, Ryan M. Larue, Saahas S. Yechuri
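Patent 10284752 synchronizes IMU motion data with video frames using a start offset between the two devices. Once the offset is known, alignment is simple timestamp arithmetic, as in the sketch below; nearest-sample matching is an assumption, since the patent does not specify the interpolation rule.

```python
import numpy as np

def align_imu_to_frames(frame_times, imu_times, imu_samples, start_offset):
    """For each video frame timestamp, pick the IMU sample whose offset-
    corrected timestamp is closest. start_offset is the lag of the IMU
    clock relative to the video clock, in seconds."""
    corrected = np.asarray(imu_times, dtype=np.float64) + start_offset
    frame_times = np.asarray(frame_times, dtype=np.float64)
    idx = np.clip(np.searchsorted(corrected, frame_times), 1, len(corrected) - 1)
    # Choose the nearer of the two neighbouring IMU samples.
    left_closer = (frame_times - corrected[idx - 1]) < (corrected[idx] - frame_times)
    idx = np.where(left_closer, idx - 1, idx)
    return np.asarray(imu_samples)[idx]

# Example: 30 fps video, 100 Hz IMU, 0.137 s start offset (hypothetical).
# frames = np.arange(0, 10, 1 / 30)
# imu_t = np.arange(0, 10, 0.01)
# gyro_z = np.sin(imu_t)
# aligned = align_imu_to_frames(frames, imu_t, gyro_z, start_offset=0.137)
```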
  • Patent number: 10275857
    Abstract: A method for enhancing a depth image of a scene comprises calculating an enhanced depth image by blending a first filtered depth image with a second filtered depth image or with the original depth image. The blending is achieved by application of a blending map, which defines, for each pixel, a contribution to the enhanced depth image of the corresponding pixel of the first filtered depth image and of the corresponding pixel of either the second filtered depth image or the original depth image. For pixels in the depth image containing no depth value or an invalid depth value, the blending map defines a zero contribution of the corresponding pixel of the second filtered depth image and a 100% contribution of the corresponding pixel of the first filtered depth image.
    Type: Grant
    Filed: September 9, 2013
    Date of Patent: April 30, 2019
    Assignee: IEE INTERNATIONAL ELECTRONICS & ENGINEERING S.A.
    Inventors: Bruno Mirbach, Thomas Solignac, Frederic Garcia Becerro, Djamila Aouada
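Patent 10275857's enhancement blends a first filtered depth image with either a second filtered depth image or the original, under a per-pixel blending map, and forces a 100% contribution of the first filtered image wherever the input depth is missing or invalid. A minimal sketch of that blending rule (the invalid-value sentinel is an assumption):

```python
import numpy as np

def blend_depth(original, filtered1, filtered2, blend_map, invalid=0.0):
    """enhanced = blend * filtered1 + (1 - blend) * filtered2, where blend
    comes from the blending map. Pixels whose original depth is missing or
    invalid take filtered1 only (weight forced to 1), per the abstract."""
    blend = np.clip(np.asarray(blend_map, dtype=np.float32), 0.0, 1.0)
    blend = np.where(original == invalid, 1.0, blend)   # zero contribution of filtered2
    return blend * filtered1 + (1.0 - blend) * filtered2
```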
  • Patent number: 10275932
    Abstract: Embodiments of the invention relate to a display operable in 2D and 3D display modes. Methods and apparatus are provided for adjusting the colors and brightness of the image data and/or intensity of the display backlight based on the current display mode and/or color-grading of the image data. For example, when switching to a 3D display mode a color mapping may be performed on left and right eye image data to increase color saturation in particular regions, and/or the backlight intensity may be increased in particular regions to compensate for lower light levels in 3D display mode.
    Type: Grant
    Filed: March 4, 2015
    Date of Patent: April 30, 2019
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Eric Kozak, Robin Atkins
  • Patent number: 10262458
    Abstract: Techniques associated with three-dimensional object modeling are described in various implementations. In one example implementation, a method may include receiving a plurality of two-dimensional images depicting views of an object to be modeled in three dimensions. The method may also include processing the plurality of two-dimensional images to generate a three-dimensional representation of the object, and analyzing the three-dimensional representation of the object to determine whether sufficient visual information exists in the plurality of two-dimensional images to generate a three-dimensional model of the object. The method may also include, in response to determining that sufficient visual information does not exist for a portion of the object, identifying the portion of the object to a user.
    Type: Grant
    Filed: May 31, 2013
    Date of Patent: April 16, 2019
    Assignee: LONGSAND LIMITED
    Inventors: Sean Blanchflower, George Saklatvala
  • Patent number: 10255889
    Abstract: Embodiments of the present application disclose various light field display control methods and apparatuses and various light field display devices, wherein a light field display control method disclosed comprises: sampling a source image according to interest level distribution information of the source image; determining a light field image corresponding to the sampled source image; adjusting display pixel density distribution of a display of a light field display device at least according to the interest level distribution information; and displaying the light field image via the adjusted light field display device. The technical solution provided in the embodiments of the present application can make full use of pixels of the display of the light field display device to present differential spatial resolution of different regions of a light field display image.
    Type: Grant
    Filed: November 20, 2015
    Date of Patent: April 9, 2019
    Assignee: Beijing Zhigu Rui Tuo Tech Co., Ltd.
    Inventors: Lin Du, Liang Zhou
  • Patent number: 10254264
    Abstract: An apparatus and a method for monitoring preparation of a food product are disclosed. The apparatus may include an imager and a controller. The controller may be configured to execute a method having the following steps: receiving order related data; receiving an image of the food product from the imager; analyzing the received image based on pre-stored data, received from a database, in order to extract prepared product data; comparing the extracted prepared product data to the order related data; and determining a compliance of the food product with a required quality level based on comparing the extracted prepared product data to the order related data.
    Type: Grant
    Filed: April 26, 2017
    Date of Patent: April 9, 2019
    Assignee: DRAGONTAIL SYSTEMS LTD.
    Inventor: Ido Levanon
  • Patent number: 10249020
    Abstract: An image processing unit has an image acquisition part, a correlation determination part, a reference image generation part, and an interpolation image generation part. The image acquisition part acquires an original image. The correlation determination part determines whether the correlation of an image component of a primary reference band with image components of respective bands other than the primary reference band is either high correlation or low correlation. The reference image generation part interpolates missing pixels in the image component of the primary reference band by switching the interpolation method based on the correlation determination result obtained by the correlation determination part. The interpolation image generation part interpolates, using the correlation determination result and the primary reference image, missing pixels in at least some of the image components of the bands other than the primary reference band.
    Type: Grant
    Filed: April 5, 2017
    Date of Patent: April 2, 2019
    Assignees: TOKYO INSTITUTE OF TECHNOLOGY, OLYMPUS CORPORATION
    Inventors: Sunao Kikuchi, Yusuke Monno, Daisuke Kiku, Masayuki Tanaka, Masatoshi Okutomi
  • Patent number: 10241565
    Abstract: In response to receiving a captured user image while the output image is being displayed to the user, display of the output image is controlled so as to reflect characteristics of the user that are determined based on the captured user image and a user instruction that is recognized based on the captured user image.
    Type: Grant
    Filed: June 18, 2015
    Date of Patent: March 26, 2019
    Assignee: Ricoh Company, Ltd.
    Inventors: Hiroshi Yamaguchi, Fumiyo Kojima, Takahiro Yagishita, Masaya Katoh
  • Patent number: 10237534
    Abstract: An imaging device includes an image sensor circuit including a pixel element. The pixel element is configured to receive during a first receiving time interval electromagnetic waves having a first wavelength, and to receive during a subsequent second receiving time interval electromagnetic waves having a second wavelength. The imaging device includes an image processing circuit configured to produce a color image of the object based on a first pixel image data and a second pixel image data. The first pixel image data is based on the electromagnetic waves having the first wavelength received by the pixel element during the first receiving time interval. The second pixel image data is based on the electromagnetic waves having the second wavelength received by the pixel element during the second receiving time interval.
    Type: Grant
    Filed: July 7, 2014
    Date of Patent: March 19, 2019
    Assignee: Infineon Technologies AG
    Inventor: Dirk Offenberg
  • Patent number: 10237477
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images and determine when a three hundred sixty degree view of the object has been captured. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation, such as a three hundred sixty degree rotation, through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: March 19, 2019
    Assignee: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
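Patents 10237477 and 10200677 (below) both estimate the angular view of an object from inertial measurement unit data while live images are captured, and detect when a full three hundred sixty degree view has been covered. The sketch integrates gyroscope yaw rate over time to that end; treating camera yaw as the object's angular view is a simplifying assumption.

```python
import numpy as np

def angular_coverage(yaw_rate, timestamps):
    """Integrate gyroscope yaw rate (rad/s) over time to estimate the total
    angle swept around the object, and report whether a full turn was made."""
    yaw_rate = np.asarray(yaw_rate, dtype=np.float64)
    dt = np.diff(np.asarray(timestamps, dtype=np.float64))
    swept = np.concatenate([[0.0], np.cumsum(yaw_rate[1:] * dt)])
    return swept, abs(swept[-1]) >= 2 * np.pi   # per-sample angle, 360-degree flag

# Each captured live image can then be tagged with swept[i], and a subset of
# images selected at roughly even angular spacing for the multi-view
# interactive representation.
```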
  • Patent number: 10234287
    Abstract: A system is disclosed that comprises a camera module and a control and evaluation unit. The camera module is designed to be attached to the surveying pole and comprises at least one camera for capturing images. The control and evaluation unit has stored a program with program code so as to control and execute a functionality in which a series of images of the surrounding is captured with the at least one camera; a SLAM-evaluation with a defined algorithm using the series of images is performed, wherein a reference point field is built up and poses for the captured images are determined; and, based on the determined poses, a point cloud comprising 3D-positions of points of the surrounding can be computed by forward intersection using the series of images, particularly by using a dense matching algorithm.
    Type: Grant
    Filed: May 5, 2014
    Date of Patent: March 19, 2019
    Assignee: HEXAGON TECHNOLOGY CENTER GMBH
    Inventors: Knut Siercks, Bernhard Metzler, Elmar Vincent Van Der Zwan, Thomas Fidler, Roman Parys, Alexander Velizhev, Jochen Scheja
  • Patent number: 10235118
    Abstract: An image processing method includes acquiring a three-dimensional model obtained by modeling a plurality of objects included in a work space, acquiring, from a camera which is held by a user existing in the work space, an image captured by the camera, acquiring, from a sensor which is held by the user, distance information indicating distances between the sensor and each of the plurality of objects, determining a position of the user in the work space based on the three-dimensional model and the distance information, identifying a predetermined region closest to the position of the user among at least one predetermined region defined in the three-dimensional model, generating a display screen displaying the predetermined region and the image, and outputting the display screen to another computer.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: March 19, 2019
    Assignee: FUJITSU LIMITED
    Inventors: Ayu Karasudani, Nobuyasu Yamaguchi
  • Patent number: 10237666
    Abstract: An electronic device includes a housing and a user interface. The electronic device also includes an acoustic detector and one or more processors operable with the acoustic detector. The one or more processors can receive, from the user interface, user input corresponding to an operation of the electronic device. The one or more processors can then optionally initiate a timer in response to receiving the user input and monitor the acoustic detector for a predefined acoustic marker, one example of which is acoustic data indicating detection of one or more finger snaps. Where the one or more finger snaps occur prior to expiration of the timer, the one or more processors can perform the operation of the electronic device; otherwise, the user input is ignored. The acoustic confirmation of user input helps to eliminate false triggers, thereby conserving battery power and extending run time.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: March 19, 2019
    Assignee: Google Technology Holdings LLC
    Inventors: Su-Yin Gan, Alex Vaz Waddington
  • Patent number: 10229531
    Abstract: A method and a device for testing a control unit, in which sensor data calculated by a data processing system using simulation are transmitted over a network connection to a real or simulated control unit, and in which the simulation of the sensor data takes place at least in part on at least one graphics processor of at least one graphics processor unit of the data processing system. The simulated sensor data are encoded in image data that are output via a visualization interface to a data conversion unit that simulates a visualization unit connected to the visualization interface. Via the data conversion unit, the received image data are converted into packet data containing the sensor data and transmitted through the network connection to the control unit.
    Type: Grant
    Filed: May 27, 2015
    Date of Patent: March 12, 2019
    Assignee: dSPACE digital signal processing and control engineering GmbH
    Inventors: Carsten Scharfe, Thorsten Pueschl
  • Patent number: 10229483
    Abstract: A subject information acquisition unit 12 acquires imaged subject information indicative of attributes related to illumination for a subject from a taken image. A preset information selection unit 21 selects preset information as illumination setting information for setting an illumination environment according to a user operation. An illumination setting information adjustment unit adjusts the illumination setting information selected by the preset information selection unit to illumination setting information corresponding to the subject on the basis of the imaged subject information acquired by the subject information acquisition unit. By simply selecting the preset information, it is possible to easily set an illumination environment for relighting, for example.
    Type: Grant
    Filed: February 13, 2015
    Date of Patent: March 12, 2019
    Assignee: SONY CORPORATION
    Inventors: Kentaro Doba, Yasutaka Hirasawa, Masaki Handa
  • Patent number: 10223765
    Abstract: Provided is an information processing apparatus including an image supply unit that supplies a plurality of input images showing corresponding objects to an image processing unit and obtains a plurality of object images as an image processed result from the image processing unit, and a display control unit that synchronously displays the plurality of object images that have been obtained. The object images are regions including the corresponding objects extracted from the plurality of input images, and orientations, positions, and sizes of the corresponding objects of the plurality of object images are unified.
    Type: Grant
    Filed: August 23, 2013
    Date of Patent: March 5, 2019
    Assignees: Sony Corporation, Japanese Foundation For Cancer Research
    Inventor: Takeshi Ohashi
  • Patent number: 10223597
    Abstract: The disclosure provides a method for calculating a passenger crowdedness degree, comprising: establishing a video data collection environment and starting to collect video data of passengers getting on and off; reading the collected video data of passengers getting on and off and pre-processing a plurality of successive image frames of the video data; identifying a human head according to the pre-processing result and taking the detected human head as a target object to be tracked by mean-shift; and judging the getting-on and getting-off behavior of a passenger in the area where the target object is positioned and determining the crowdedness degree of passengers inside a vehicle according to the numbers of passengers getting on and off. The disclosure also provides a system for calculating a passenger crowdedness degree. The disclosure can effectively reduce false detections, missed detections, and erroneous detections of the head top.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: March 5, 2019
    Assignee: SHENZHEN UNIVERSITY
    Inventors: Yong Zhang, Lei Liu, Dongning Zhao, Yanshan Li, Jianyong Chen
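Patent 10223597 tracks detected heads with mean-shift to count boarding and alighting passengers. The OpenCV-based sketch below shows only the mean-shift step on a hue back-projection; the head-detection stage and the boarding/alighting decision rule are outside the sketch, and the hue-histogram target model is an assumption rather than the patent's feature choice.

```python
import cv2
import numpy as np

def track_head_meanshift(frames, head_box):
    """Track a detected head box (x, y, w, h) through successive BGR frames
    with mean-shift on a hue back-projection (illustrative target model)."""
    x, y, w, h = head_box
    roi = frames[0][y:y + h, x:x + w]
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [16], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    window = (x, y, w, h)
    track = [window]
    for frame in frames[1:]:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, window = cv2.meanShift(back_proj, window, criteria)
        track.append(window)
    # The sequence of head positions can then be checked against a door line:
    # crossing it in one direction counts as getting on, the other as getting off.
    return track
```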
  • Patent number: 10222625
    Abstract: A display device includes a video element and a parallax optical element. The video element projects, onto a windshield, a stereoscopic vision image in which a plurality of rectangular left-eye images and a plurality of rectangular right-eye images are alternately arrayed in a first array pattern. The parallax optical element is provided between the video element and the windshield and includes a plurality of separation parts arrayed in a lattice-like fashion and in a second array pattern; the separation parts separating the stereoscopic vision image into the left-eye images and the right-eye images. At least one of the first array pattern and the second array pattern is a nonuniform array pattern that conforms to a curved surface of the windshield.
    Type: Grant
    Filed: May 8, 2015
    Date of Patent: March 5, 2019
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Kenichi Kasazumi, Toshiya Mori
  • Patent number: 10216075
    Abstract: A digital light projector having a plurality of color channels including at least one visible color channel providing visible light and at least one invisible color channel providing invisible light. The digital light projector including a projecting device projecting light from the plurality of color channels onto an environment in the form of an array of pixels which together form a video image including a visible image and an invisible image, the video image comprising a series of frames with each frame formed by the array of pixels, wherein to form each pixel of each frame the projecting device sequentially projects a series of light pulses from light provided by each of the plurality of color channels, with light pulses from the at least one visible color channel forming the visible image and light pulses from the at least one invisible color channel forming the invisible image.
    Type: Grant
    Filed: September 15, 2014
    Date of Patent: February 26, 2019
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: David Bradley Short, Robert L Mueller, Jinman Kang, Otto Sievert, Kurt Spears
  • Patent number: 10210286
    Abstract: Computer-implemented methods and systems of detecting curbs include receiving a cloud of three-dimensional (3D) data points acquired along street locations. A plurality of vertical scanlines (e.g., arrays of 3D data points obtained at given horizontal locations) are identified within the cloud of 3D data points. One or more curb points indicating the potential presence of a curb in the plurality of vertical scanlines are identified. A synthesized set of non-overlapping curb curves are generated in order to close gaps between certain curb points while removing certain other outlier curb points. Successive curb curves in the synthesized set of non-overlapping curb curves are then identified as belonging to one or more curb segments.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: February 19, 2019
    Assignee: Google LLC
    Inventors: Arthur Robert Pope, Ioannis Stamos
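Patent 10210286 finds candidate curb points inside vertical scanlines of a 3D point cloud. One simple, commonly used cue is a small, sharp elevation step between neighbouring points in a scanline; the 5-25 cm step-height band and horizontal-spacing limit in the sketch are assumed heuristics, not Google's criterion.

```python
import numpy as np

def curb_points_in_scanline(scanline_xyz, min_step=0.05, max_step=0.25):
    """Flag points in a vertical scanline (ordered array of (x, y, z) points)
    where the elevation jumps by a curb-like amount between neighbours."""
    pts = np.asarray(scanline_xyz, dtype=np.float64)
    dz = np.abs(np.diff(pts[:, 2]))                           # elevation change
    horiz = np.linalg.norm(np.diff(pts[:, :2], axis=0), axis=1)
    step_like = (dz >= min_step) & (dz <= max_step) & (horiz < 0.3)
    candidates = np.where(step_like)[0] + 1                   # upper point of each step
    return pts[candidates]

# Candidate points from many scanlines would then be linked into non-overlapping
# curb curves (closing gaps, dropping outliers) and successive curves grouped
# into curb segments, as the abstract describes.
```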
  • Patent number: 10210564
    Abstract: Embodiments provide a method for viewing online products in real-size. The method includes a computing device receiving one or more instructions to view an image of an online product in real-size on a selected surface. The device then determines one or more dimensions of the online product and projects the image of the online product on the selected surface, where one or more dimensions of the projected image are equal to the one or more dimensions of the online product.
    Type: Grant
    Filed: October 30, 2015
    Date of Patent: February 19, 2019
    Assignee: International Business Machines Corporation
    Inventors: Yanhong Qi, Hong Chuan Yuan, Jin Zhang
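Patent 10210564 projects a product image so that the projected size equals the product's real dimensions. The core arithmetic is a scale factor derived from the projector's throw geometry, sketched below; the simple throw-ratio model and the parameter names are assumptions, not IBM's method.

```python
def real_size_scale(product_width_m, product_height_m,
                    throw_ratio, distance_m, resolution=(1920, 1080)):
    """Return the pixel size at which to render the product image so that
    its projection on a surface at distance_m matches the real dimensions.
    Assumes a simple throw-ratio model: image_width = distance / throw_ratio."""
    projected_width_m = distance_m / throw_ratio
    projected_height_m = projected_width_m * resolution[1] / resolution[0]
    px_per_m_x = resolution[0] / projected_width_m
    px_per_m_y = resolution[1] / projected_height_m
    return (round(product_width_m * px_per_m_x),
            round(product_height_m * px_per_m_y))

# Example: a 0.45 m x 0.30 m product, throw ratio 1.4, surface 2 m away
# (hypothetical numbers) -> pixel dimensions at which to render the image.
# print(real_size_scale(0.45, 0.30, throw_ratio=1.4, distance_m=2.0))
```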
  • Patent number: 10210662
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A real object can be tracked in the live image data for the purposes of creating a surround view using a number of tracking points. As a camera is moved around the real object, virtual objects can be rendered into live image data to create synthetic images where a position of the tracking points can be used to position the virtual object in the synthetic image. The synthetic images can be output in real-time. Further, virtual objects in the synthetic images can be incorporated into surround views.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: February 19, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Martin Saelzle, Stephen David Miller, Radu Bogdan Rusu
  • Patent number: 10203516
    Abstract: A dual-lens camera system is provided, including a first lens driving module and a second lens driving module each including a lens holder for receiving a lens, at least one magnetic element, and a driving board. The driving board has at least one driving coil for acting with the magnetic element to generate an electromagnetic force to move the lens holder along a direction that is perpendicular to the optical axis of the lens. On two adjacent sides parallel to each other of the first and second lens driving modules, the magnetic elements are arranged in different configurations.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: February 12, 2019
    Assignee: TDK TAIWAN CORP.
    Inventors: Chao-Chang Hu, Bing-Ru Song
  • Patent number: 10204953
    Abstract: An apparatus is described that includes an image sensor and a light source driver circuit having configuration register space to receive information pertaining to a command to simulate a distance between a light source and an object that is different than an actual distance between the light source and the object.
    Type: Grant
    Filed: October 12, 2017
    Date of Patent: February 12, 2019
    Assignee: Google LLC
    Inventors: Cheng-Yi Andrew Lin, Clemenz Portmann
  • Patent number: 10200677
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: February 5, 2019
    Assignee: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Patent number: 10190983
    Abstract: A light source may illuminate a scene with pulsed light that is pulsed non-periodically. The scene may include fluorescent material that fluoresces in response to the pulsed light. The pulsed light signal may comprise a maximum length sequence or Gold sequence. A lock-in time-of-flight sensor may take measurements of light returning from the scene. A computer may, for each pixel in the sensor, perform a Discrete Fourier Transform on measurements taken by the pixel, in order to calculate a vector of complex numbers for the pixel. Each complex number in the vector may encode phase and amplitude of incident light at the pixel and may correspond to measurements taken at a given time interval during the pulsed light signal. A computer may, based on phase of the complex numbers for a pixel, calculate fluorescence lifetime and scene depth of a scene point that corresponds to the pixel.
    Type: Grant
    Filed: April 14, 2017
    Date of Patent: January 29, 2019
    Assignee: Massachusetts Institute of Technology
    Inventors: Ayush Bhandari, Christopher Barsi, Achuta Kadambi, Ramesh Raskar
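Patent 10190983 illuminates the scene with a non-periodic pulsed sequence (e.g., a maximum length sequence) and, per pixel, takes a Discrete Fourier Transform of the lock-in time-of-flight measurements; the phase of the resulting complex values encodes scene depth and fluorescence lifetime. The sketch below shows the per-pixel DFT and a phase-to-depth conversion at a single frequency bin; the chosen bin and the standard continuous-wave phase/depth relation are assumptions.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s

def per_pixel_dft(measurements):
    """measurements: (n_time_bins, h, w) lock-in samples per pixel.
    Returns a (n_time_bins, h, w) array of complex DFT coefficients."""
    return np.fft.fft(measurements, axis=0)

def depth_from_phase(dft_coeffs, bin_index, bin_frequency_hz):
    """Convert the phase at one frequency bin to scene depth (metres),
    using the standard ToF relation depth = c * phase / (4 * pi * f)."""
    phase = np.angle(dft_coeffs[bin_index])
    phase = np.mod(phase, 2 * np.pi)          # wrap into [0, 2*pi)
    return C * phase / (4 * np.pi * bin_frequency_hz)

# m = np.random.rand(31, 120, 160)            # e.g. a 31-chip MLS pattern
# coeffs = per_pixel_dft(m)
# depth = depth_from_phase(coeffs, bin_index=1, bin_frequency_hz=25e6)
```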
  • Patent number: 10186088
    Abstract: The methods, systems, techniques, and components described herein allow interactions with virtual objects in a virtual environment, such as a Virtual Reality (VR) environment or Augmented Reality (AR) environment, to be modeled accurately. More particularly, the methods, systems, techniques, and components described herein allow interactive virtual frames to be created for virtual objects in a virtual environment. The virtual frames may be built using line primitives that form frame boundaries based on shape boundaries of virtual objects enclosed by the virtual frame. An area of interactivity defined by the virtual frame may allow users to interact with the virtual object in the virtual environment.
    Type: Grant
    Filed: May 13, 2016
    Date of Patent: January 22, 2019
    Assignee: Meta Company
    Inventors: Zachary R. Kinstner, Rebecca B. Frank, Yishai Gribetz
  • Patent number: 10186545
    Abstract: An image sensor may include visible light detectors and a near-infrared light detector. The near-infrared light detector may contain a material highly sensitive to near-infrared rays, and thus the size of the near-infrared light detector may be reduced.
    Type: Grant
    Filed: August 23, 2016
    Date of Patent: January 22, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jaeho Lee, Kiyoung Lee, Sangyeob Lee, Eunkyu Lee, Jinseong Heo, Seongjun Park
  • Patent number: 10182220
    Abstract: A modeled object distribution management system includes a creator storage unit, a first display controller and a second display controller. The creator storage unit stores modeled object-related information including information indicating stereoscopic images of created modeled objects. The first display controller displays the stereoscopic images to allow a client to browse the stereoscopic images, by using the information indicating the stereoscopic images which is stored in the creator storage unit. The second display controller displays a modeling plan to allow the client to browse the modeling plan. The modeling plan includes a modeling method and a material which are used to model a modeled object corresponding to a stereoscopic image selected by the client from the stereoscopic images displayed in the first display controller.
    Type: Grant
    Filed: November 30, 2016
    Date of Patent: January 15, 2019
    Assignee: FUJI XEROX CO., LTD.
    Inventor: Kazunori Onishi
  • Patent number: 10176628
    Abstract: In a method for creating a 3D representation of a recording object, feature pixels (6, 7, 8, 9) are identified in recorded individual images (2, 3) of the recording object in a computer-aided manner by feature detection; from these feature pixels (6, 7, 8, 9), 3D points (11, 12) and camera poses (3, 4) of the individual images (2, 3) are calculated in a computer-implemented manner on the basis of correspondences in terms of content; at least one geometric primitive (14) is fitted into the calculated 3D points (11, 12) in a computer-implemented manner, and a plausibility check (15) verifies whether a minimum discrepancy between the individual images (2, 3) results for the geometric primitive (14); the geometric primitive (14) for which the discrepancy is minimized is then output.
    Type: Grant
    Filed: August 8, 2016
    Date of Patent: January 8, 2019
    Assignee: Testo AG
    Inventors: Jan-Friso Evers-Senne, Philipp Sasse, Hellen Altendorf, Raphael Bartsch
  • Patent number: 10175923
    Abstract: A display system includes one HMD and another HMD, and the one HMD includes an image display section adapted to display an image so that an outside view can visually be recognized, and an imaging section adapted to take an image of a range including at least a part of the outside view, which can visually be recognized in the image display section. Further, the one HMD includes a communication section, and a control section adapted to make the image display section display the image. The other HMD includes an image display section, a communication section, a display control section adapted to display the image based on the information received from the one HMD, an operation detection section, and a control section adapted to generate the guide information and transmit the guide information.
    Type: Grant
    Filed: December 9, 2015
    Date of Patent: January 8, 2019
    Assignee: SEIKO EPSON CORPORATION
    Inventors: Kazuhiko Miyasaka, Naoki Kobayashi, Masahide Takano
  • Patent number: 10175650
    Abstract: Systems, methods, and computer-readable media are disclosed for controlling parameters of a holographic image. A gaze direction of a user is detected and user interaction data indicative of the gaze direction of the user is generated. A determination is then made using the user interaction data that the gaze direction of the user at least partially coincides with an object of interest. A further determination is made that the object of interest is at least partially obscured by the holographic image. One or more of the parameters of the holographic image are then adjusted to enhance visibility of the object of interest to the user.
    Type: Grant
    Filed: January 16, 2017
    Date of Patent: January 8, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Eric V. Kline, Sarbajit K. Rakshit
  • Patent number: 10175483
    Abstract: A system and method are disclosed for displaying virtual objects in a mixed reality environment in a way that is optimal and most comfortable for a user to interact with the virtual objects. When a user is not focused on the virtual object, which may be a heads-up display, or HUD, the HUD may remain body locked to the user. As such, the user may explore and interact with a mixed reality environment presented by the head mounted display device without interference from the HUD. When a user wishes to view and/or interact with the HUD, the user may look at the HUD. At this point, the HUD may change from a body locked virtual object to a world locked virtual object. The user is then able to view and interact with the HUD from different positions and perspectives of the HUD.
    Type: Grant
    Filed: June 18, 2013
    Date of Patent: January 8, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Tom G. Salter, Ben J. Sugden, Daniel Deptford, Robert L. Crocco, Jr., Brian E. Keane, Laura K. Massey, Alex Aben-Athar Kipman, Peter Tobias Kinnebrew, Nicholas Ferianc Kamuda
  • Patent number: 10168147
    Abstract: Apparatus for structured light scanning. The structured light includes one or more projected lines or other patterns. At least two independent emitters emit light for each projected line or pattern. Typically the at least two independent emitters are arranged in a row. The apparatus also includes a pattern generator for causing light from respective emitters of a given row to overlap along a pattern axis to form a projected pattern.
    Type: Grant
    Filed: August 10, 2015
    Date of Patent: January 1, 2019
    Assignee: Facebook, Inc.
    Inventors: Guy Raz, Nadav Grossinger, Nitay Romano
  • Patent number: 10162311
    Abstract: A device includes a substrate with a curvilinear perimeter segment adjoined to a plurality of facets, a display area, a border area surrounding the display area, and connection pads, divided into groups corresponding to the facets, in the border area. A flexible circuit board with arms coupled to the groups of connection pads is included. Another device includes a substrate having a display area, first connection pads within a border area peripheral to the display area, and a flexible circuit board having a first portion including second connection pads configured to be coupled to the first connection pads, and a second portion configured to accommodate a plurality of transmission lines extending from the second connection pads. An arc length of the first portion can be greater than that of the second portion and a center-to-center pitch of the second connection pads can be greater than that of the transmission lines.
    Type: Grant
    Filed: May 13, 2016
    Date of Patent: December 25, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: John Hong, Jan Bos, Yi-Nan Chu, Cheonhong Kim