Patents by Inventor Christiaan Varekamp

Christiaan Varekamp has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210264658
    Abstract: An apparatus comprises a store (209) storing a set of anchor poses for a scene, as well as typically 3D image data for the scene. A receiver (201) receives viewer poses for a viewer, and a render pose processor (203) determines a render pose in the scene for a current viewer pose of the viewer poses, where the render pose is determined relative to a reference anchor pose. A retriever (207) retrieves 3D image data for the reference anchor pose, and a synthesizer (205) synthesizes images for the render pose in response to the 3D image data. A selector selects the reference anchor pose from the set of anchor poses and is arranged to switch the reference anchor pose from a first anchor pose of the set of anchor poses to a second anchor pose of the set of anchor poses in response to the viewer poses.
    Type: Application
    Filed: June 20, 2019
    Publication date: August 26, 2021
    Inventors: Christiaan Varekamp, Patrick Luc Els Vandewalle
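
A minimal sketch of the anchor-switching behaviour described in the abstract of publication 20210264658 above, assuming a simple distance-with-hysteresis rule; the function name, the 0.2 m margin, and the use of positions only (ignoring orientation) are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def select_reference_anchor(viewer_pos, anchor_positions, current_idx, hysteresis=0.2):
    """Pick the reference anchor for a viewer position.

    Sticks with the currently selected anchor unless another anchor is
    closer by more than `hysteresis` metres (illustrative threshold).
    """
    dists = np.linalg.norm(anchor_positions - viewer_pos, axis=1)
    best_idx = int(np.argmin(dists))
    if dists[best_idx] + hysteresis < dists[current_idx]:
        return best_idx          # switch to the new anchor
    return current_idx           # keep the current anchor

# Example: three anchors along a line, viewer drifting to the right.
anchors = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
idx = 0
for x in (0.1, 0.4, 0.7, 1.1):
    idx = select_reference_anchor(np.array([x, 0.0, 0.0]), anchors, idx)
    print(x, "->", idx)
```
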
  • Publication number: 20210224990
    Abstract: As the capabilities of digital histopathology machines grow, there is an increasing need to ease the burden on pathology professionals of finding interesting structures in such images. Digital histopathology images can be at least several gigabytes in size, and they may contain millions of cell structures of interest. Automated algorithms for finding structures in such images have been proposed, such as the Active Contour Model (ACM). The ACM algorithm can have difficulty detecting regions in images having variable colour or texture distributions. Such regions are often found in images containing cell nuclei, because nuclei do not always have a homogeneous appearance. The present application describes a technique to identify inhomogeneous structures, for example cell nuclei, in digital histopathology information. It is proposed to search pre-computed super-pixel information using a morphological variable, such as a shape-compactness metric, to identify candidate objects.
    Type: Application
    Filed: May 24, 2017
    Publication date: July 22, 2021
    Inventor: Christiaan Varekamp
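
The shape-compactness metric mentioned in the abstract of publication 20210224990 above can be read as the classic isoperimetric ratio 4πA/P², which equals 1 for a circle and is smaller for irregular shapes. A hypothetical filtering step over pre-computed super-pixels might look like the sketch below; the metric choice, data layout, and threshold are assumptions, not taken from the patent.

```python
import math

def shape_compactness(area, perimeter):
    """Isoperimetric compactness: 1.0 for a perfect circle, lower otherwise."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def candidate_nuclei(superpixels, min_compactness=0.6):
    """Filter pre-computed super-pixels (dicts with 'area' and 'perimeter')
    down to roughly round candidate objects such as cell nuclei."""
    return [sp for sp in superpixels
            if shape_compactness(sp["area"], sp["perimeter"]) >= min_compactness]

# Example: a roundish blob versus a long thin fragment of equal area.
blobs = [{"id": 1, "area": 300.0, "perimeter": 65.0},
         {"id": 2, "area": 300.0, "perimeter": 140.0}]
print([sp["id"] for sp in candidate_nuclei(blobs)])   # -> [1]
```
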
  • Patent number: 11050991
    Abstract: An apparatus comprises a store (201) storing images corresponding to different positions and viewing directions for a scene, and associated position parameter vectors for the images where the vector for an image comprises data indicative of a viewing position and a viewing direction. A receiver (205) receives a viewing position parameter vector from a remote client (101). A selector (207) selects a set of images in response to a comparison of the viewing position parameter vector and the associated position parameter vectors. An image synthesizer (209) generates an image from the set of images. A data generator (215) generates a reference position parameter vector for the synthesized image indicating a viewing position and direction for the synthesized image. An image encoder (211) encodes the synthesized image and an output generator (213) generates an output image signal comprising the encoded synthesized image and the reference position parameter vector.
    Type: Grant
    Filed: September 28, 2017
    Date of Patent: June 29, 2021
    Assignee: Koninklijke Philips N.V.
    Inventors: Christiaan Varekamp, Patrick Luc Els Vandewalle
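
A rough sketch of the selection step in patent 11050991 above: rank the stored images by how close their position parameter vectors are to the requested vector, treating position and direction components separately. The 5-element vector layout, the weighting, and the function name are illustrative assumptions.

```python
import numpy as np

def select_images(request_vec, stored_vecs, n=3, dir_weight=0.5):
    """Rank stored images by closeness of their position parameter vectors
    (x, y, z, azimuth, elevation) to the requested vector and return the
    indices of the n best matches. The weighting is illustrative."""
    stored_vecs = np.asarray(stored_vecs, dtype=float)
    pos_d = np.linalg.norm(stored_vecs[:, :3] - request_vec[:3], axis=1)
    dir_d = np.linalg.norm(stored_vecs[:, 3:] - request_vec[3:], axis=1)
    score = pos_d + dir_weight * dir_d
    return list(np.argsort(score)[:n])

# Example: pick the two stored views best matching a requested viewing pose.
request = np.array([0.0, 0.0, 0.0, 0.1, 0.0])
stored = [[0.0, 0.0, 0.1, 0.0, 0.0],
          [2.0, 0.0, 0.0, 0.1, 0.0],
          [0.1, 0.1, 0.0, 0.2, 0.0]]
print(select_images(request, stored, n=2))
```
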
  • Patent number: 11048911
    Abstract: The present invention relates to an apparatus (10) for determining cellular composition in one or more tissue sample microscopic images. It is described to provide (210) first image data of at least one tissue sample. The first image data relates to a non-specific staining of a tissue sample of the at least one tissue sample. Second image data of the at least one tissue sample is provided (220). The second image data relates to specific staining of a tissue sample of the at least one tissue sample. Either (i) the tissue sample that has undergone specific staining is the same as the tissue sample that has undergone non-specific staining, or (ii) the tissue sample that has undergone specific staining is different to the tissue sample that has undergone non-specific staining. A non-specific cellular composition cell density map is determined (230) on the basis of the first image data. A specific cellular composition cell density map is determined (240) on the basis of the second image data.
    Type: Grant
    Filed: October 26, 2017
    Date of Patent: June 29, 2021
    Assignee: Koninklijke Philips N.V.
    Inventors: Christiaan Varekamp, Reinhold Wimberger-Friedl, David Halter
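
One way to picture the cell density maps in patent 11048911 above is as smoothed count images built from detected cell centres, one per staining channel. The sketch below assumes hypothetical inputs (lists of centre coordinates) and uses SciPy's gaussian_filter for smoothing; none of these choices are prescribed by the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cell_density_map(cell_centers, shape, sigma=8.0):
    """Turn a list of detected cell centres (row, col) into a smooth
    cells-per-pixel density map by Gaussian smoothing of a count image."""
    counts = np.zeros(shape, dtype=float)
    for r, c in cell_centers:
        counts[int(r), int(c)] += 1.0
    return gaussian_filter(counts, sigma=sigma)

# Example: density maps for a non-specific and a specific staining channel.
nonspecific = cell_density_map([(10, 10), (12, 14), (40, 40)], (64, 64))
specific = cell_density_map([(11, 11)], (64, 64))
fraction = specific.sum() / max(nonspecific.sum(), 1e-9)
print(f"specifically stained fraction = {fraction:.2f}")
```
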
  • Publication number: 20210150802
    Abstract: An apparatus comprises a receiver (201) for receiving an image signal comprising a number of three dimensional images representing a scene from different viewpoints, each three dimensional image comprising a mesh and a texture map, the signal further comprising a plurality of residual data texture maps for a first viewpoint being different from the different viewpoints of the number of three dimensional images, a first residual data texture map of the plurality of residual texture maps providing residual data for a texture map for the first viewpoint relative to a texture map resulting from a viewpoint shift of a first reference texture map being a texture map of a first three dimensional image of the number of three dimensional images, and a second residual data texture map of the plurality of residual texture maps providing residual data for a texture map for the first viewpoint relative to a texture map resulting from a viewpoint shift of a second reference texture map being a texture map of a second three dimensional image of the number of three dimensional images.
    Type: Application
    Filed: June 19, 2018
    Publication date: May 20, 2021
    Inventors: Christiaan Varekamp, Bart Kroon, Patrick Luc Els Vandewalle
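
The residual data texture maps of publication 20210150802 above amount to a prediction residual: the difference between the texture at the new viewpoint and a viewpoint-shifted reference texture. A minimal encode/decode sketch, with the viewpoint shift left as a placeholder function and all names chosen for illustration:

```python
import numpy as np

def make_residual(target_texture, reference_texture, warp):
    """Encoder side: residual between the texture for the new viewpoint and
    the prediction obtained by viewpoint-shifting a reference texture map.
    `warp` is a placeholder for a mesh-based viewpoint shift."""
    prediction = warp(reference_texture)
    return target_texture.astype(np.int16) - prediction.astype(np.int16)

def apply_residual(reference_texture, residual, warp):
    """Decoder side: reconstruct the new-viewpoint texture from a reference
    view plus the transmitted residual data texture map."""
    prediction = warp(reference_texture).astype(np.int16)
    return np.clip(prediction + residual, 0, 255).astype(np.uint8)

# Trivial round-trip example with an identity "warp".
identity = lambda t: t
ref = np.full((4, 4, 3), 100, dtype=np.uint8)
target = np.full((4, 4, 3), 120, dtype=np.uint8)
res = make_residual(target, ref, identity)
print(np.array_equal(apply_residual(ref, res, identity), target))   # True
```
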
  • Publication number: 20210152802
    Abstract: An apparatus comprises a receiver (401) receiving a first image and associated first depth data captured by a first depth-sensing camera. A detector (405) detects an image position property for a fiducial marker in the first image, the fiducial marker representing a placement of a second depth-sensing image camera. A placement processor (407) determines a relative placement vector indicative of a placement of the second depth-sensing image camera relative to the first depth-sensing camera in response to the image position property and depth data of the first depth data for an image position of the fiducial marker. A second receiver (403) receives a second image and second depth data captured by the second depth-sensing image camera. A generator (409) generates the representation of at least part of the scene in response to a combination of at least the first image and the second image based on the relative placement vector.
    Type: Application
    Filed: August 7, 2018
    Publication date: May 20, 2021
    Inventors: Christiaan Varekamp, Bart Kroon
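
A simplified sketch of the placement step in publication 20210152802 above: back-project the detected fiducial-marker pixel with the first camera's depth value through a pinhole model to obtain a translation from the first camera to the second. The intrinsics, the function name, and the omission of orientation estimation are assumptions for illustration.

```python
import numpy as np

def relative_placement(marker_px, depth_m, fx, fy, cx, cy):
    """Back-project the image position of a fiducial marker (placed on the
    second camera) using the first camera's depth value and a pinhole model,
    giving a translation vector from camera 1 to camera 2 in camera-1
    coordinates. Orientation estimation is omitted for brevity."""
    u, v = marker_px
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example: marker detected at pixel (800, 360) with 2.5 m depth,
# illustrative intrinsics for a 1280x720 camera.
print(relative_placement((800, 360), 2.5, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0))
```
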
  • Publication number: 20210127057
    Abstract: An image capturing apparatus comprises a capture unit (101) for capturing images of a scene. A tracker (103) dynamically determines poses of the capture unit, and a pose processor (105) determines a current pose of the capture unit (101) relative to a set of desired capture poses. The pose may include a position and orientation of the capture unit (101). A display controller (107) is coupled to the pose processor (105) and is arranged to control a display to present a current capture pose indication, the current capture pose indication comprising an indication of a position of the current pose of the capture unit (101) relative to the set of desired capture poses in a direction outside an image plane of the capture unit (101). In some embodiments, the set of desired capture poses may be adaptively updated in response to data captured for the scene.
    Type: Application
    Filed: November 30, 2018
    Publication date: April 29, 2021
    Inventors: Christiaan Varekamp, Bartholomeus Wilhelmus Sonneveldt
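
The "direction outside an image plane" indication in publication 20210127057 above can be read as a forward/backward guidance cue along the capture unit's viewing axis. A minimal sketch under that reading; the function and parameter names are hypothetical.

```python
import numpy as np

def out_of_plane_offset(current_pos, view_dir, desired_positions):
    """Signed distance (metres) along the capture unit's viewing direction
    from the current position to the nearest desired capture position,
    i.e. a guidance cue in the direction outside the image plane
    ("move forward" if positive, "move back" if negative)."""
    view_dir = np.asarray(view_dir) / np.linalg.norm(view_dir)
    offsets = np.asarray(desired_positions) - np.asarray(current_pos)
    nearest = offsets[np.argmin(np.linalg.norm(offsets, axis=1))]
    return float(np.dot(nearest, view_dir))

# Example: two desired capture positions one metre ahead of the origin.
desired = [[0.0, 0.0, 1.0], [0.5, 0.0, 1.0]]
print(out_of_plane_offset([0.0, 0.0, 0.3], [0.0, 0.0, 1.0], desired))  # 0.7 -> move forward
```
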
  • Patent number: 10944952
    Abstract: An apparatus comprises a receiver (101) receiving a light intensity image, a confidence map, and an image property map. A filter unit (103) is arranged to filter the image property map in response to the light intensity image and the confidence map. Specifically, for a first position, the filter unit (103) determines a combined neighborhood image property value in response to a weighted combination of neighborhood image property values in a neighborhood around the first position, the weight for a first neighborhood image property value at a second position being dependent on a confidence value for the first neighborhood image property value and a difference between light intensity values for the first position and for the second position; and determines a first filtered image property value for the first position as a combination of a first image property value at the first position in the image property map and the combined neighborhood image property value.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: March 9, 2021
    Assignee: Koninklijke Philips N.V.
    Inventors: Christiaan Varekamp, Patrick Luc Els Vandewalle
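
A compact sketch of the filtering described in patent 10944952 above: each output value mixes the original image property value (for example depth) with a neighbourhood average whose weights depend on neighbour confidence and on the light-intensity difference to the centre pixel. The kernel radius, Gaussian intensity weighting, and fixed mixing factor are illustrative choices, not taken from the patent.

```python
import numpy as np

def filter_property_map(prop, conf, intensity, radius=3, sigma_i=10.0, alpha=0.5):
    """Confidence- and intensity-guided cross filter: each output value mixes
    the original property value with a neighbourhood average whose weights
    depend on (a) the neighbour's confidence and (b) the light-intensity
    difference to the centre pixel. Parameters are illustrative."""
    h, w = prop.shape
    out = prop.astype(float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            di = intensity[y0:y1, x0:x1].astype(float) - float(intensity[y, x])
            w_n = conf[y0:y1, x0:x1] * np.exp(-(di ** 2) / (2 * sigma_i ** 2))
            if w_n.sum() > 0:
                neigh = (w_n * prop[y0:y1, x0:x1]).sum() / w_n.sum()
                out[y, x] = alpha * prop[y, x] + (1 - alpha) * neigh
    return out

# Example: smooth a noisy 8x8 depth map guided by a flat intensity image.
rng = np.random.default_rng(0)
depth = 5.0 + 0.2 * rng.standard_normal((8, 8))
print(filter_property_map(depth, np.ones((8, 8)), np.full((8, 8), 128)).round(2))
```
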
  • Patent number: 10931928
    Abstract: An apparatus comprises a store (101) for storing route data for a set of routes in an N-dimensional space, where each route of the set of routes is associated with a video item including frames comprising both image and depth information. An input (105) receives a viewer position indication, and a selector (107) selects a first route of the set of routes in response to a selection criterion dependent on a distance metric dependent on the viewer position indication and positions of the routes of the set of routes. A retriever (103, 109) retrieves a first video item associated with the first route from a video source (203). An image generator (111) generates at least one view image for the viewer position indication from a first set of frames from the first video item. In the system, the selection criterion is biased towards a currently selected route relative to other routes of the set of routes.
    Type: Grant
    Filed: March 2, 2018
    Date of Patent: February 23, 2021
    Assignee: Koninklijke Philips N.V.
    Inventors: Christiaan Varekamp, Bart Kroon
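
A small sketch of the biased route selection in patent 10931928 above: choose the route passing closest to the viewer position, but scale down the distance of the currently selected route so that switching only happens when another route is clearly closer. The bias factor and the point-list route representation are assumptions.

```python
import numpy as np

def select_route(viewer_pos, routes, current_idx, bias=0.8):
    """Pick the route whose stored positions pass closest to the viewer.
    The currently selected route's distance is scaled by `bias` (< 1),
    so switching only happens when another route is clearly closer."""
    dists = np.array([np.min(np.linalg.norm(np.asarray(r) - viewer_pos, axis=1))
                      for r in routes])
    if current_idx is not None:
        dists[current_idx] *= bias
    return int(np.argmin(dists))

# Two straight routes through a 2-D space.
route_a = [[x, 0.0] for x in range(5)]
route_b = [[x, 1.0] for x in range(5)]
print(select_route(np.array([2.0, 0.55]), [route_a, route_b], current_idx=0))  # stays 0
print(select_route(np.array([2.0, 0.9]), [route_a, route_b], current_idx=0))   # switches to 1
```
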
  • Patent number: 10893259
    Abstract: An apparatus for generating a three-dimensional image representation of a scene comprises a receiver (301) receiving a tiled three-dimensional image representation of a scene from a first viewpoint, the representation comprising a plurality of interconnected tiles, each tile comprising a depth map and a texture map representing a viewport of the scene from the first viewpoint and the tiles forming a tiling pattern. A first processor (311) determines neighboring border regions in at least a first tile and in a second tile in response to the tiling pattern. A second processor (309) modifies at least a first depth value of a first border region of the first tile in response to at least a second depth value in a second border region of the second tile, the border regions being neighboring regions. The modified depth maps may be used to generate a mesh based on which images may be generated.
    Type: Grant
    Filed: July 17, 2018
    Date of Patent: January 12, 2021
    Assignee: Koninklijke Philips N.V.
    Inventor: Christiaan Varekamp
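
A minimal sketch of the border adjustment in patent 10893259 above for two horizontally adjacent tiles. The abstract only says that one border depth value is modified in response to the other, so the averaging rule and function name used here are assumptions.

```python
import numpy as np

def blend_tile_borders(tile_a, tile_b):
    """Make the depth values along the shared border of two horizontally
    adjacent tiles (2-D depth maps) agree, here simply by replacing both
    border columns with their average so a later mesh does not crack."""
    right_a = tile_a[:, -1].astype(float)
    left_b = tile_b[:, 0].astype(float)
    avg = 0.5 * (right_a + left_b)
    tile_a = tile_a.astype(float)
    tile_b = tile_b.astype(float)
    tile_a[:, -1] = avg
    tile_b[:, 0] = avg
    return tile_a, tile_b

# Example: two tiles whose border depths disagree by 0.4 m.
a = np.full((4, 4), 2.0)
b = np.full((4, 4), 2.4)
a2, b2 = blend_tile_borders(a, b)
print(a2[:, -1], b2[:, 0])   # both columns become 2.2
```
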
  • Publication number: 20200413097
    Abstract: An apparatus comprises a store (201) storing a set of image parts and associated depth data for images representing a scene from different view poses (position and orientation). A predictability processor (203) generates predictability measures for image parts of the set of images for view poses of the scene. A predictability measure for a first image part for a first view pose is indicative of an estimate of the prediction quality for a prediction of at least part of an image for a viewport of the first view pose from a subset of image parts of the set of image parts not including the first image part. A selector (205) selects a subset of image parts of the set of image parts in response to the predictability measures, and a bitstream generator (207) generates an image bitstream comprising image data and depth data from the subset of image parts.
    Type: Application
    Filed: January 4, 2019
    Publication date: December 31, 2020
    Inventors: Bart Kroon, Christiaan Varekamp, Patrick Luc Els Vandewalle
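
The selection in publication 20200413097 above can be sketched as a greedy loop that repeatedly adds the image part that the already-selected subset predicts worst. The predictability function below is a toy placeholder; in the described system the measure would come from actual view prediction quality.

```python
def select_parts(part_ids, predictability, budget):
    """Greedy sketch of predictability-driven selection: start from the first
    part, then repeatedly add the part that the already-selected subset
    predicts worst, until `budget` parts are chosen. `predictability(part,
    selected)` is a placeholder returning a quality estimate in [0, 1]."""
    selected = [part_ids[0]]
    remaining = list(part_ids[1:])
    while remaining and len(selected) < budget:
        worst = min(remaining, key=lambda p: predictability(p, selected))
        selected.append(worst)
        remaining.remove(worst)
    return selected

# Toy predictability: parts are view angles; quality drops with angular
# distance to the nearest selected view.
views = [0, 30, 60, 90, 120]
pred = lambda p, sel: 1.0 - min(abs(p - s) for s in sel) / 180.0
print(select_parts(views, pred, budget=3))   # -> [0, 120, 60]
```
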
  • Publication number: 20200281574
    Abstract: The invention relates to a biopsy container (10) with an identification mark (13), which comprises one or more curves that surround the biopsy container. Identification information of the biopsy container may be encoded in a plurality of widths of the curves surrounding the biopsy container as well as in distances between these curves. Additionally, the biopsy container may comprise an alignment mark (11), which is configured to facilitate the registration of images of the biopsy container. The invention also relates to systems configured to determine identification information of a biopsy container and an image processing method.
    Type: Application
    Filed: November 9, 2018
    Publication date: September 10, 2020
    Inventors: Pieter Jan van der Zaag, Christiaan Varekamp
  • Patent number: 10699466
    Abstract: A method of generating an image comprises receiving (301, 303) a first and second texture map and mesh representing a scene from a first view point and second view point respectively. A light intensity image is generated (305) for a third view point. For a first position this includes determining (401, 403) a first and second light intensity value for the first position by a view point transformation based on the first texture map and the first mesh and on the second texture map and the second mesh respectively. The light intensity value is then determined (405) by a weighted combination of the first and second light intensity values. The weighting depends on a depth gradient in the first mesh at a first mesh position corresponding to the first position relative to a depth gradient in the second mesh at a second mesh position corresponding to the first position.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: June 30, 2020
    Assignee: Koninklijke Philips N.V.
    Inventor: Christiaan Varekamp
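
A minimal sketch of the blending rule in patent 10699466 above: combine the two viewpoint-transformed light intensity values with weights that favour the view having the smaller local depth gradient, since a steep gradient suggests a depth discontinuity and a less reliable sample. The inverse-gradient weighting formula is an illustrative choice.

```python
def blend_by_depth_gradient(value1, grad1, value2, grad2, eps=1e-6):
    """Weighted combination of two view-point-transformed light intensity
    values; the view with the smaller local depth gradient gets the larger
    weight (illustrative inverse-gradient weighting)."""
    w1 = 1.0 / (abs(grad1) + eps)
    w2 = 1.0 / (abs(grad2) + eps)
    return (w1 * value1 + w2 * value2) / (w1 + w2)

# View 1 sits on a depth edge (large gradient), view 2 is flat there.
print(round(blend_by_depth_gradient(200.0, 5.0, 100.0, 0.1), 1))   # close to 100
```
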
  • Publication number: 20200177868
    Abstract: An apparatus for generating a three-dimensional image representation of a scene comprises a receiver (301) receiving a tiled three-dimensional image representation of a scene from a first viewpoint, the representation comprising a plurality of interconnected tiles, each tile comprising a depth map and a texture map representing a viewport of the scene from the first viewpoint and the tiles forming a tiling pattern. A first processor (311) determines neighboring border regions in at least a first tile and in a second tile in response to the tiling pattern. A second processor (309) modifies at least a first depth value of a first border region of the first tile in response to at least a second depth value in a second border region of the second tile, the border regions being neighboring regions. The modified depth maps may be used to generate a mesh based on which images may be generated.
    Type: Application
    Filed: July 17, 2018
    Publication date: June 4, 2020
    Inventor: Christiaan Varekamp
  • Patent number: 10607351
    Abstract: An apparatus for determining a depth map for an image of a scene comprises an active depth sensor (103) and a passive depth sensor (105) for determining a depth map for the image. The apparatus further comprises a light determiner (109) which determines a light indication indicative of a light characteristic for the scene. The light indication may specifically reflect a level of visible and/or infrared light for the scene. A depth map processor (107) determines an output depth map for the image by combining the first depth map and the second depth map. Specifically, a depth value for the output depth map is determined as a combination of depth values of the first and the second depth map where the combining is dependent on the light indication. The light determiner (109) estimates the first infrared light indication from a visible light indication indicative of a light level in a frequency band of the visible light spectrum.
    Type: Grant
    Filed: July 12, 2016
    Date of Patent: March 31, 2020
    Assignee: Koninklijke Philips N.V.
    Inventors: Christiaan Varekamp, Patrick Luc Els Vandewalle
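
A small sketch of the light-dependent combination in patent 10607351 above: weight the passive depth map more as the estimated visible-light level rises, and fall back to the active (e.g. infrared-based) depth map in dark scenes. The linear ramp and its thresholds are assumptions for illustration.

```python
import numpy as np

def fuse_depth(active_depth, passive_depth, light_level, lo=20.0, hi=200.0):
    """Combine an active-sensor and a passive-sensor depth map. The weight of
    the passive map grows with the estimated visible-light level (illustrative
    linear ramp between `lo` and `hi`); in dark scenes the active map dominates."""
    w_passive = float(np.clip((light_level - lo) / (hi - lo), 0.0, 1.0))
    return w_passive * passive_depth + (1.0 - w_passive) * active_depth

# Example: the same scene under dim and bright lighting.
active = np.full((2, 2), 1.8)
passive = np.full((2, 2), 2.2)
print(fuse_depth(active, passive, light_level=30.0))    # mostly active
print(fuse_depth(active, passive, light_level=190.0))   # mostly passive
```
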
  • Publication number: 20200077071
    Abstract: An apparatus for processing a depth map comprises a receiver (203) receiving an input depth map. A first processor (205) generates a first processed depth map by processing pixels of the input depth map in a bottom to top direction. The processing of a first pixel comprises determining a depth value for the first pixel for the first processed depth map as the furthest backwards depth value of: a depth value for the first pixel in the input depth map, and a depth value determined in response to depth values in the first processed depth map for a first set of pixels being below the first pixel. The approach may improve the consistency of depth maps, and in particular for depth maps generated by combining different depth cues.
    Type: Application
    Filed: April 26, 2018
    Publication date: March 5, 2020
    Inventors: Christiaan Varekamp, Wilhelmus Hendrikus Alfonsus Bruls
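
The bottom-to-top pass in publication 20200077071 above is concrete enough to sketch directly. The version below assumes larger depth values mean further away ("furthest backwards" = maximum) and uses only the single pixel directly below as the set of pixels below, which is a simplification.

```python
import numpy as np

def enforce_depth_consistency(depth):
    """Bottom-to-top pass: each output pixel is the furthest-backwards value
    of its own input depth and the already-processed depth of the pixel
    directly below it, so depth never jumps forwards when moving up."""
    out = depth.astype(float)
    h, _ = out.shape
    for y in range(h - 2, -1, -1):          # second-to-last row up to the top
        out[y] = np.maximum(out[y], out[y + 1])
    return out

# Example: a depth column that wrongly jumps forward halfway up the image.
d = np.array([[5.0, 5.0],
              [2.0, 2.0],
              [4.0, 4.0],
              [3.0, 3.0]])
print(enforce_depth_consistency(d))
```
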
  • Patent number: 10580154
    Abstract: An apparatus for determining a depth map for an image comprises an image unit (105) which provides an image with an associated depth map comprising depth values for at least some pixels of the image. A probability unit (107) determines a probability map for the image comprising probability values indicative of a probability that pixels belong to a text image object. A depth unit (109) generates a modified depth map where the modified depth values are determined as weighted combinations of the input values and a text image object depth value corresponding to a preferred depth for text. The weighting is dependent on the probability value for the pixels. The approach provides a softer depth modification for text objects resulting in reduced artefacts and degradations e.g. when performing view shifting using depth maps.
    Type: Grant
    Filed: May 6, 2016
    Date of Patent: March 3, 2020
    Assignee: Koninklijke Philips N.V.
    Inventor: Christiaan Varekamp
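
A minimal sketch of the soft depth modification in patent 10580154 above: blend each pixel's input depth towards a preferred depth for text, weighted by the probability that the pixel belongs to a text image object. The function name and example values are illustrative.

```python
import numpy as np

def soften_text_depth(depth, text_prob, text_depth):
    """Per-pixel blend between the input depth map and a preferred depth for
    text overlays, weighted by the probability that the pixel belongs to a
    text image object. Probabilities near 0 keep the original depth,
    probabilities near 1 pull the pixel to `text_depth`."""
    return (1.0 - text_prob) * depth + text_prob * text_depth

# Example: a flat 2 m background with varying text probabilities.
depth = np.array([[2.0, 2.0], [2.0, 2.0]])
prob = np.array([[0.0, 0.5], [0.9, 1.0]])
print(soften_text_depth(depth, prob, text_depth=1.0))
```
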
  • Publication number: 20200045287
    Abstract: An apparatus comprises a store (101) for storing route data for a set of routes in an N-dimensional space, where each route of the set of routes is associated with a video item including frames comprising both image and depth information. An input (105) receives a viewer position indication, and a selector (107) selects a first route of the set of routes in response to a selection criterion dependent on a distance metric dependent on the viewer position indication and positions of the routes of the set of routes. A retriever (103, 109) retrieves a first video item associated with the first route from a video source (203). An image generator (111) generates at least one view image for the viewer position indication from a first set of frames from the first video item. In the system, the selection criterion is biased towards a currently selected route relative to other routes of the set of routes.
    Type: Application
    Filed: March 2, 2018
    Publication date: February 6, 2020
    Inventors: Christiaan Varekamp, Bart Kroon
  • Patent number: 10546421
    Abstract: An apparatus is arranged to generate a triangle mesh for a three dimensional image. The apparatus includes a depth map source (101), which provides a depth map, and a tree generator (105), which generates a k-D tree from the depth map. The k-D tree represents a hierarchical arrangement of regions of the depth map satisfying a requirement that a depth variation measure for undivided regions is below a threshold. A triangle mesh generator (107) positions an internal vertex within each region of the k-D tree. The triangle mesh is then generated by forming sides of triangles of the triangle mesh as lines between internal vertices of neighboring regions. The approach may generate an improved triangle mesh that is suitable for many 3D video processing algorithms.
    Type: Grant
    Filed: September 22, 2016
    Date of Patent: January 28, 2020
    Assignee: Koninklijke Philips N.V.
    Inventors: Christiaan Varekamp, Patrick Luc Els Vandewalle, Bart Kroon
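
A rough sketch of the region splitting behind patent 10546421 above: recursively divide the depth map until the depth variation inside each region falls below a threshold, then place one internal vertex per leaf region (connecting vertices of neighbouring regions into triangles is omitted). Splitting along the longer axis and the threshold values are simplifying assumptions.

```python
import numpy as np

def split_regions(depth, x0, y0, w, h, max_var=0.1, min_size=2):
    """Recursively split a depth-map region (k-D style, simplified to always
    splitting the longer axis) until the depth range within each region is
    below `max_var`. Returns leaf regions with a centre vertex, which a mesh
    generator would then connect into triangles."""
    region = depth[y0:y0 + h, x0:x0 + w]
    if (region.max() - region.min() <= max_var) or (w <= min_size and h <= min_size):
        return [((x0 + w / 2.0, y0 + h / 2.0), (x0, y0, w, h))]
    if w >= h:  # split along the longer axis
        return (split_regions(depth, x0, y0, w // 2, h, max_var, min_size) +
                split_regions(depth, x0 + w // 2, y0, w - w // 2, h, max_var, min_size))
    return (split_regions(depth, x0, y0, w, h // 2, max_var, min_size) +
            split_regions(depth, x0, y0 + h // 2, w, h - h // 2, max_var, min_size))

# Example: flat background with a nearer square object in one corner.
d = np.ones((8, 8)); d[:4, :4] = 0.5
leaves = split_regions(d, 0, 0, 8, 8)
print(len(leaves), "leaf regions / internal vertices")
```
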
  • Publication number: 20190385352
    Abstract: A method of generating an image comprises receiving (301, 303) a first and second texture map and mesh representing a scene from a first view point and second view point respectively. A light intensity image is generated (305) for a third view point. For a first position this includes determining (401, 403) a first and second light intensity value for the first position by a view point transformation based on the first texture map and the first mesh and on the second texture map and the second mesh respectively. The light intensity value is then determined (405) by a weighted combination of the first and second light intensity values. The weighting depends on a depth gradient in the first mesh at a first mesh position corresponding to the first position relative to a depth gradient in the second mesh at a second mesh position corresponding to the first position.
    Type: Application
    Filed: November 28, 2017
    Publication date: December 19, 2019
    Inventor: Christiaan Varekamp