Patents by Inventor Christiaan Varekamp

Christiaan Varekamp has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11368663
    Abstract: An apparatus comprises a determiner (305) which determines a first-eye and a second-eye view pose. A receiver (301) receives a reference first-eye image with associated depth values and a reference second-eye image with associated depth values, the reference first-eye image being for a first-eye reference pose and the reference second-eye image being for a second-eye reference pose. A depth processor (311) determines a reference depth value, and modifiers (307) generate modified depth values by reducing the difference between the received depth values and the reference depth value by an amount that depends on the difference between the corresponding eye's view pose and reference pose. A synthesizer (303) synthesizes an output first-eye image for the first-eye view pose by view shifting the reference first-eye image, and an output second-eye image for the second-eye view pose by view shifting the reference second-eye image, based on the modified depth values.
    Type: Grant
    Filed: October 23, 2019
    Date of Patent: June 21, 2022
    Assignee: Koninklijke Philips N.V.
    Inventor: Christiaan Varekamp
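
A minimal sketch of the depth-modification step described in patent 11368663 above, assuming poses are simple 3-D positions, a scalar pose distance, and a linear fall-off; the function name, the `falloff` parameter and the specific ramp are illustrative, not taken from the patent.

```python
import numpy as np

def modify_depth(depth_map, reference_depth, view_pose, reference_pose,
                 falloff=0.1):
    """Pull depth values towards a reference depth by an amount that grows
    with the distance between the view pose and the reference pose.

    Illustrative only: poses are 3-vectors (positions) and the reduction
    factor is a linear ramp clipped to [0, 1].
    """
    pose_distance = np.linalg.norm(np.asarray(view_pose, dtype=float) -
                                   np.asarray(reference_pose, dtype=float))
    # 0 -> keep the original depth, 1 -> collapse everything to the reference depth
    reduction = np.clip(pose_distance / falloff, 0.0, 1.0)
    return reference_depth + (1.0 - reduction) * (depth_map - reference_depth)

# The further the view pose drifts from the reference pose, the flatter the
# depth map becomes, which limits view-shift artefacts.
depth = np.array([[1.0, 2.0], [3.0, 4.0]])
print(modify_depth(depth, reference_depth=2.5,
                   view_pose=[0.05, 0, 0], reference_pose=[0, 0, 0]))
```
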
  • Publication number: 20220165015
    Abstract: An apparatus comprises a receiver (301) receiving an image signal representing a scene. The image signal includes image data comprising a number of images where each image comprises pixels that represent an image property of the scene along a ray having a ray direction from a ray origin. The ray origins are at different positions for at least some pixels. The image signal further comprises a plurality of parameters describing a variation of the ray origins and/or the ray directions for pixels as a function of pixel image positions. A renderer (303) renders images from the number of images based on the plurality of parameters.
    Type: Application
    Filed: January 17, 2020
    Publication date: May 26, 2022
    Inventors: WILHELMUS HENDRICUS ALFONSUS BRULS, CHRISTIAAN VAREKAMP, BART KROON
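
A minimal sketch of the per-pixel ray model described in publication 20220165015 above, assuming the signalled parameters describe a linear shift of the ray origin and a linear yaw of the ray direction over the pixel columns; the parameterisation and parameter names are illustrative assumptions.

```python
import numpy as np

def rays_from_parameters(width, height, origin0, origin_step_u,
                         yaw0, yaw_per_pixel):
    """Illustrative ray model: the ray origin shifts horizontally with the
    pixel column and the ray direction yaws with the pixel column, so that
    different pixels have different ray origins (as in the abstract)."""
    u = np.arange(width)
    v = np.arange(height)
    uu, _ = np.meshgrid(u, v)

    # Ray origin varies with the horizontal pixel position.
    origins = np.zeros((height, width, 3))
    origins[..., 0] = origin0[0] + origin_step_u * uu
    origins[..., 1] = origin0[1]
    origins[..., 2] = origin0[2]

    # Ray direction yaws linearly with the horizontal pixel position.
    yaw = yaw0 + yaw_per_pixel * uu
    directions = np.stack([np.sin(yaw), np.zeros_like(yaw), np.cos(yaw)],
                          axis=-1)
    return origins, directions

origins, directions = rays_from_parameters(
    width=8, height=4, origin0=(0.0, 0.0, 0.0),
    origin_step_u=0.01, yaw0=-0.2, yaw_per_pixel=0.05)
print(origins.shape, directions.shape)
```
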
  • Publication number: 20220148207
    Abstract: A method of processing depth maps comprises receiving (301) images and corresponding depth maps. The depth values of a first depth map of the corresponding depth maps are updated (303) based on depth values of at least a second depth map of the corresponding depth maps. The updating is based on a weighted combination of candidate depth values determined from the other depth maps. A weight for a candidate depth value from the second depth map is determined based on the similarity between the pixel value in the first image corresponding to the depth value being updated and the pixel value in a third image at the position obtained by projecting the position of the depth value being updated into the third image using the candidate depth value. More consistent depth maps may be generated in this way.
    Type: Application
    Filed: March 3, 2020
    Publication date: May 12, 2022
    Inventors: CHRISTIAAN VAREKAMP, BARTHOLOMEUS WILHELMUS DAMIANUS VAN GEEST
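
A minimal sketch of the weighted candidate-depth update described in publication 20220148207 above, under the simplifying assumptions that the cameras are rectified and horizontally displaced (so projection with a candidate depth reduces to a disparity shift) and that the colour-similarity weight is a Gaussian on the intensity difference; the function, its parameters and these assumptions are illustrative, not taken from the patent.

```python
import numpy as np

def update_depth_map(depth1, depth2, image1, image3, baseline_focal=50.0,
                     sigma_color=10.0):
    """Update depth1 using candidate depths taken from depth2.

    A candidate gets a high weight when the colour in image1 matches the
    colour found at the position obtained by projecting the pixel into
    image3 with that candidate depth (here: a horizontal disparity shift).
    """
    h, w = depth1.shape
    updated = depth1.astype(float).copy()
    for y in range(h):
        for x in range(w):
            candidates = [float(depth1[y, x]), float(depth2[y, x])]
            weights = []
            for d in candidates:
                disparity = baseline_focal / max(d, 1e-6)
                x3 = int(round(x + disparity))
                if 0 <= x3 < w:
                    diff = float(image1[y, x]) - float(image3[y, x3])
                    weights.append(np.exp(-(diff * diff) /
                                          (2 * sigma_color ** 2)))
                else:
                    weights.append(0.0)
            weights = np.asarray(weights)
            if weights.sum() > 0:
                updated[y, x] = float(np.dot(weights, candidates) /
                                      weights.sum())
    return updated

d1 = np.full((4, 16), 5.0); d2 = np.full((4, 16), 6.0)
img1 = np.random.randint(0, 255, (4, 16)); img3 = np.random.randint(0, 255, (4, 16))
print(update_depth_map(d1, d2, img1, img3)[0, :4])
```
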
  • Publication number: 20220139023
    Abstract: An apparatus comprises receivers (201, 203) receiving texture maps and meshes representing a scene from a first and second view point. An image generator (205) determines a light intensity image for a third view point based on the received data. A first view transformer (207) determines first image positions and depth values in the image for vertices of the first mesh and a second view transformer (209) determines second image positions and depth values for vertices of the second mesh. A first shader (211) determines a first light intensity value and a first depth value based on the first image positions and depth values, and a second shader (213) determines a second light intensity value and a second depth value from the second image positions and depth values. A combiner (215) generates an output value as a weighted combination of the first and second light intensity values where the weighting of a light intensity value increases for an increasing depth value.
    Type: Application
    Filed: February 10, 2020
    Publication date: May 5, 2022
    Inventor: CHRISTIAAN VAREKAMP
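
A minimal sketch of the combining step described in publication 20220139023 above: two shaded light intensity values are blended with weights that grow with the associated depth value, as the abstract states. The proportional weight and the depth convention used here are illustrative assumptions.

```python
import numpy as np

def blend_views(color1, depth1, color2, depth2, eps=1e-6):
    """Weighted combination of two shaded values where a larger depth value
    gets a larger weight, as stated in the abstract.  Whether a larger depth
    value means nearer or further depends on the rendering pipeline; a simple
    proportional weight is used here purely for illustration."""
    w1 = depth1 + eps
    w2 = depth2 + eps
    return (w1 * color1 + w2 * color2) / (w1 + w2)

# The view with the larger depth value dominates the blended colour.
print(blend_views(np.array([1.0, 0.0, 0.0]), 0.9,
                  np.array([0.0, 1.0, 0.0]), 0.1))
```
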
  • Publication number: 20220137916
    Abstract: A distribution system comprises an audio server (101) for receiving incoming audio from remote clients (103) and for transmitting audio derived from the incoming audio to the remote clients (103). An audio apparatus comprises a receiver (401) which receives data comprising: audio data for a plurality of audio components representing audio from a remote client of the plurality of remote clients; and proximity data for at least one of the audio components. The proximity data is indicative of proximity between remote clients. A generator (403) of the apparatus generates an audio mix from the audio components in response to the proximity data. For example, an audio component indicated to be proximal to a remote client may be excluded from an audio mix for that remote client.
    Type: Application
    Filed: July 2, 2019
    Publication date: May 5, 2022
    Inventors: CHRISTIAAN VAREKAMP, JEROEN GERARDUS HENRICUS KOPPENS, BART KROON, NATHAN SOUVIRAA-LABASTIE, ARNOLDUS WERNER JOHANNES OOMEN
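
A minimal sketch of the mixing rule given as the example in publication 20220137916 above: components flagged as proximal to the receiving client are left out of that client's mix. The data structures (a dict of sample lists and a set of proximal client pairs) are illustrative assumptions.

```python
def generate_audio_mixes(audio_components, proximity):
    """Build one mix per client, leaving out components whose source client
    is marked as proximal to the receiving client (e.g. in the same room,
    so the listener already hears that person acoustically)."""
    mixes = {}
    for client in audio_components:
        mix = None
        for source, samples in audio_components.items():
            if source == client:
                continue  # never feed a client's own audio back
            if frozenset((client, source)) in proximity:
                continue  # proximal component: exclude from this mix
            if mix is None:
                mix = list(samples)
            else:
                mix = [a + b for a, b in zip(mix, samples)]
        mixes[client] = mix or []
    return mixes

components = {"A": [0.1, 0.2], "B": [0.3, 0.1], "C": [0.0, 0.5]}
proximal = {frozenset(("A", "B"))}   # A and B share a room
print(generate_audio_mixes(components, proximal))
```
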
  • Patent number: 11317124
    Abstract: An apparatus comprises a processor (301) providing a plurality of reference video streams for a plurality of reference viewpoints for a scene. A receiver (305) receives a viewpoint request from a remote client where the viewpoint request is indicative of a requested viewpoint. A generator (303) generates an output video stream comprising a first video stream with frames from a first reference video stream and a second video stream with frames from a second reference video stream. The frames of the second video stream are differentially encoded relative to the frames of the first video stream. A controller (307) selects the reference video stream for the first and second video streams in response to the viewpoint request and may be arranged to swap the reference video streams between being non-differentially encoded and being differentially encoded when the viewpoint request meets a criterion.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: April 26, 2022
    Assignee: Koninklijke Philips N.V.
    Inventors: Bartolomeus Wilhelmus Damianus Sonneveldt, Christiaan Varekamp
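
A minimal sketch of the controller behaviour described in patent 11317124 above (and its related publication 20220053222 further down), assuming the swap criterion is "the requested viewpoint is now closer to the differentially encoded reference than to the anchor"; the 1-D viewpoint model and the criterion itself are illustrative assumptions.

```python
def select_streams(reference_viewpoints, anchor, dependent, requested):
    """Return the (anchor, dependent) reference-stream indices after applying
    a simple swap rule: if the requested viewpoint is closer to the currently
    dependent (differentially encoded) reference than to the current anchor,
    swap their roles so the better-matching stream becomes non-differential."""
    def dist(i):
        return abs(reference_viewpoints[i] - requested)

    if dist(dependent) < dist(anchor):
        anchor, dependent = dependent, anchor
    return anchor, dependent

# Reference viewpoints on a 1-D camera rail (illustrative).
views = [0.0, 1.0, 2.0, 3.0]
anchor, dependent = 1, 2
anchor, dependent = select_streams(views, anchor, dependent, requested=2.4)
print(anchor, dependent)   # roles swapped: stream 2 is now the anchor
```
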
  • Patent number: 11310486
    Abstract: Three dimensional [3D] image data and auxiliary graphical data are combined for rendering on a 3D display (30) by detecting depth values occurring in the 3D image data, and setting auxiliary depth values for the auxiliary graphical data (31) adaptively in dependence of the detected depth values. The 3D image data and the auxiliary graphical data at the auxiliary depth value are combined based on the depth values of the 3D image data. First an area of attention (32) in the 3D image data is detected. A depth pattern for the area of attention is determined, and the auxiliary depth values are set in dependence of the depth pattern.
    Type: Grant
    Filed: August 3, 2016
    Date of Patent: April 19, 2022
    Assignee: Koninklijke Philips N.V.
    Inventors: Philip Steven Newton, Geradus Wilhelmus Theodorus Van Der Heijden, Wiebe De Haan, Johan Cornelis Talstra, Wilhelmus Hendrikus Alfonsus Bruls, Georgios Parlantzas, Marc Helbing, Christian Benien, Vasanth Philomin, Christiaan Varekamp
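
A minimal sketch of the depth-placement idea in patent 11310486 above: detect a depth pattern in the area of attention and set the auxiliary graphics depth relative to it. Taking the nearest depth in the box as the "pattern", assuming smaller values are nearer, and the margin value are all illustrative choices.

```python
import numpy as np

def auxiliary_depth(depth_map, attention_box, margin=0.05):
    """Place auxiliary graphics (e.g. subtitles) just in front of the depth
    pattern found in the area of attention.  Here the depth pattern is the
    nearest depth inside the box, and 'in front of' assumes smaller values
    are nearer to the viewer."""
    x0, y0, x1, y1 = attention_box
    region = depth_map[y0:y1, x0:x1]
    nearest = region.min()
    return nearest - margin

depth = np.random.uniform(1.0, 5.0, (120, 160))
print(auxiliary_depth(depth, attention_box=(40, 30, 120, 90)))
```
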
  • Publication number: 20220114782
    Abstract: An apparatus comprises a receiver (301) for receiving an image representation of a scene. A determiner (305) determines viewer poses for a viewer with respect to a viewer coordinate system. An aligner (307) aligns a scene coordinate system with the viewer coordinate system by aligning a scene reference position with a viewer reference position in the viewer coordinate system. A renderer (303) renders view images for different viewer poses in response to the image representation and the alignment of the scene coordinate system with the viewer coordinate system. An offset processor (309) determines the viewer reference position in response to an alignment viewer pose where the viewer reference position is dependent on an orientation of the alignment viewer pose and has an offset with respect to a viewer eye position for the alignment viewer pose. The offset includes an offset component in a direction opposite to a view direction of the viewer eye position.
    Type: Application
    Filed: January 19, 2020
    Publication date: April 14, 2022
    Inventors: FONS BRULS, CHRISTIAAN VAREKAMP, BART KROON
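
A minimal sketch of the offset step described in publication 20220114782 above: the viewer reference position is derived from the eye position of the alignment viewer pose by an offset whose component points opposite to the view direction. The offset magnitude is an illustrative assumption.

```python
import numpy as np

def viewer_reference_position(eye_position, view_direction, back_offset=0.1):
    """Compute the viewer reference position used to align the scene and
    viewer coordinate systems: start at the eye position of the alignment
    viewer pose and move backwards (opposite to the view direction) by an
    offset, as described in the abstract."""
    d = np.asarray(view_direction, dtype=float)
    d = d / np.linalg.norm(d)
    return np.asarray(eye_position, dtype=float) - back_offset * d

print(viewer_reference_position(eye_position=[0.0, 1.6, 0.0],
                                view_direction=[0.0, 0.0, 1.0]))
```
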
  • Publication number: 20220094953
    Abstract: A method of encoding a video data signal (15) is provided, together with a method for decoding. The encoding comprises providing color information (51) for pixels in an image, providing a depth map with depth information (52) for the pixels, providing transition information (56, 57, 60, 70, 71) being representative of a width (63, 73) of a transition region (61, 72) in the image, the transition region (61, 72) comprising a depth transition (62) and blended pixels in which colors of a foreground object and a background object are blended, and generating (24) the video data signal (15) comprising encoded data representing the color information (51), the depth map (52) and the transition information (56, 57, 60, 70, 71). The decoding comprises using the transition information (56, 57, 60, 70, 71) for determining the width (63, 73) of the transition regions (61, 72) and for determining alpha values (53) for pixels inside the transition regions (61, 72).
    Type: Application
    Filed: December 8, 2021
    Publication date: March 24, 2022
    Inventors: Wilhelmus Hendrikus Alfonsus Bruls, Christiaan Varekamp, Reinier Bernardus Maria Klein Gunnewiek
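
A minimal sketch of the decoder-side use of the transition information described in publication 20220094953 above: alpha values inside a transition region are derived from the signalled width. The linear ramp across a scan line is an illustrative assumption; the signal only needs to convey the width and position of the transition.

```python
import numpy as np

def alpha_from_transition(width, num_pixels, center=None):
    """Derive per-pixel alpha values across a scan line from the signalled
    transition width: alpha ramps from background (0) to foreground (1)
    over the transition region centred on the depth transition."""
    x = np.arange(num_pixels, dtype=float)
    if center is None:
        center = num_pixels / 2.0
    ramp = (x - (center - width / 2.0)) / width
    return np.clip(ramp, 0.0, 1.0)

print(alpha_from_transition(width=4, num_pixels=12))
```
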
  • Publication number: 20220053222
    Abstract: An apparatus comprises a processor (301) providing a plurality of reference video streams for a plurality of reference viewpoints for a scene. A receiver (305) receives a viewpoint request from a remote client where the viewpoint request is indicative of a requested viewpoint. A generator (303) generates an output video stream comprising a first video stream with frames from a first reference video stream and a second video stream with frames from a second reference video stream. The frames of the second video stream are differentially encoded relative to the frames of the first video stream. A controller (307) selects the reference video stream for the first and second video streams in response to the viewpoint request and may be arranged to swap the reference video streams between being non-differentially encoded and being differentially encoded when the viewpoint request meets a criterion.
    Type: Application
    Filed: September 16, 2019
    Publication date: February 17, 2022
    Inventors: Bartolomeus Wilhelmus Damianus SONNEVELDT, Christiaan VAREKAMP
  • Patent number: 11232567
    Abstract: As the capabilities of digital histopathology machines grow, there is an increasing need to ease the burden on pathology professionals of finding interesting structures in such images. Digital histopathology images can be at least several gigabytes in size, and they may contain millions of cell structures of interest. Automated algorithms for finding structures in such images have been proposed, such as the Active Contour Model (ACM). The ACM algorithm can have difficulty detecting regions in images having variable colour or texture distributions. Such regions are often found in images containing cell nuclei, because nuclei do not always have a homogeneous appearance. The present application describes a technique to identify inhomogeneous structures, for example cell nuclei, in digital histopathology information. It is proposed to search pre-computed super-pixel information using a morphological variable, such as a shape-compactness metric, to identify candidate objects.
    Type: Grant
    Filed: May 24, 2017
    Date of Patent: January 25, 2022
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventor: Christiaan Varekamp
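
A minimal sketch of the shape-compactness ranking mentioned in patent 11232567 above, assuming the candidate objects are summarised as (label, area, perimeter) tuples aggregated from pre-computed super-pixel statistics; the data structure, threshold and the use of the classic isoperimetric compactness are illustrative choices.

```python
import numpy as np

def shape_compactness(area, perimeter):
    """Isoperimetric compactness: 1.0 for a circle, smaller for elongated or
    ragged shapes.  Used here as the morphological variable for ranking
    candidate objects built from super-pixels."""
    return 4.0 * np.pi * area / (perimeter ** 2)

def rank_candidates(candidates, min_compactness=0.6):
    """Return labels whose compactness suggests a roughly round nucleus."""
    selected = []
    for label, area, perimeter in candidates:
        if shape_compactness(area, perimeter) >= min_compactness:
            selected.append(label)
    return selected

cands = [("a", 400.0, 75.0),    # fairly round region
         ("b", 400.0, 160.0)]   # same area, long ragged boundary
print(rank_candidates(cands))   # ['a']
```
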
  • Patent number: 11228704
    Abstract: An image capturing apparatus comprises a capture unit (101) for capturing images of a scene. A tracker (103) dynamically determines poses of the capture unit and a pose processor (105) determines a current pose of the capture unit (101) relative to a set of desired capture poses. The pose may include a position and orientation of the capture unit (101). A display controller (107) is coupled to the pose processor (105) and is arranged to control a display to present a current capture pose indication, the current capture pose indication comprising an indication of a position of the current pose of the capture unit (101) relative to the set of desired capture poses in a direction outside an image plane of the capture unit (101). In some embodiments, the set of desired capture poses may be adaptively updated in response to data captured for the scene.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: January 18, 2022
    Assignee: Koninklijke Philips N.V.
    Inventors: Christiaan Varekamp, Bartholomeus Wilhelmus Sonneveldt
  • Patent number: 11218687
    Abstract: An apparatus comprises a receiver (401) receiving a first image and associated first depth data captured by a first depth-sensing camera. A detector (405) detects an image position property for a fiducial marker in the first image, the fiducial marker representing a placement of a second depth-sensing image camera. A placement processor (407) determines a relative placement vector indicative of a placement of the second depth-sensing image camera relative to the first depth-sensing camera in response to the image position property and depth data of the first depth data for an image position of the fiducial marker. A second receiver (403) receives a second image and second depth data captured by the second depth-sensing image camera. A generator (409) generates the representation of at least part of the scene in response to a combination of at least the first image and the second image based on the relative placement vector.
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: January 4, 2022
    Assignee: Koninklijke Philips N.V.
    Inventors: Christiaan Varekamp, Bart Kroon
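
A minimal sketch of the placement-vector computation described in patent 11218687 above, assuming a pinhole camera model: the marker's pixel position is back-projected with the depth measured by the first camera to obtain the position of the second camera relative to the first. The intrinsic parameters are illustrative values.

```python
import numpy as np

def relative_placement(marker_pixel, marker_depth, fx, fy, cx, cy):
    """Back-project the fiducial marker's image position using the depth
    measured by the first camera, giving a vector from the first camera to
    the marker (and hence to the second camera carrying it)."""
    u, v = marker_pixel
    x = (u - cx) / fx * marker_depth
    y = (v - cy) / fy * marker_depth
    return np.array([x, y, marker_depth])

# Marker seen at pixel (800, 400) at 2.5 m by a camera with these intrinsics.
print(relative_placement((800, 400), 2.5, fx=1000, fy=1000, cx=640, cy=360))
```
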
  • Publication number: 20210407037
    Abstract: An apparatus comprises a receiver (201) which receives a wide angle image with a first projection where a vertical image position of a scene position depends on a horizontal distance from the scene position to an optical axis for the image. Thus, the vertical image position of the scene point may depend on the horizontal image position. A mapper (203) generates a modified image having a modified projection by applying a mapping to the first wide angle image corresponding to a mapping from the first projection to a perspective projection followed by a non-linear vertical mapping from the perspective projection to a modified vertical projection of the modified projection and a non-linear horizontal mapping from the perspective projection to a modified horizontal projection of the modified projection. A disparity estimator (205) generates disparities for the modified image relative to a second image representing a different view point than the first wide angle image.
    Type: Application
    Filed: November 4, 2019
    Publication date: December 30, 2021
    Inventor: Christiaan VAREKAMP
  • Publication number: 20210400314
    Abstract: An image synthesis apparatus comprises a receiver (301) for receiving image parts and associated depth data of images representing a scene from different view poses from an image source. A store (311) stores a depth transition metric for each image part of a set of image parts where the depth transition metric for an image part is indicative of a direction of a depth transition in the image part. A determiner (305) determines a rendering view pose and an image synthesizer (303) synthesizes at least one image from the received image parts. A selector is arranged to select a first image part of the set of image parts in response to the depth transition metric and a retriever (309) retrieves the first image part from the image source. The synthesis of an image part for the rendering view pose is based on the first image part.
    Type: Application
    Filed: September 16, 2019
    Publication date: December 23, 2021
    Inventor: CHRISTIAAN VAREKAMP
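
A minimal sketch around publication 20210400314 above: the abstract does not specify how the depth transition metric is computed or how the selector uses it, so this sketch assumes the metric is the dominant depth-gradient direction of a part and that the selector prefers the part whose transition best faces the view shift. Both assumptions are illustrative.

```python
import numpy as np

def depth_transition_metric(depth_part):
    """Summarise the dominant depth transition in an image part as a 2-D
    direction, computed here from the mean depth gradient (an illustrative
    choice; the abstract only says the metric indicates a direction)."""
    gy, gx = np.gradient(depth_part.astype(float))
    direction = np.array([gx.mean(), gy.mean()])
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 0 else direction

def select_part(parts_metrics, view_shift):
    """Pick the image part whose depth transition best faces the view shift,
    so de-occluded background is more likely to be covered."""
    scores = [float(np.dot(m, view_shift)) for m in parts_metrics]
    return int(np.argmax(scores))

parts = [np.tile(np.linspace(1, 5, 8), (8, 1)),   # depth grows to the right
         np.tile(np.linspace(5, 1, 8), (8, 1))]   # depth grows to the left
metrics = [depth_transition_metric(p) for p in parts]
print(select_part(metrics, view_shift=np.array([1.0, 0.0])))
```
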
  • Publication number: 20210385422
    Abstract: An apparatus comprises a determiner (305) which determines a first-eye and a second-eye view pose. A receiver (301) receives a reference first-eye image with associated depth values and a reference second-eye image with associated depth values, the reference first-eye image being for a first-eye reference pose and the reference second-eye image being for a second-eye reference pose. A depth processor (311) determines a reference depth value, and modifiers (307) generate modified depth values by reducing the difference between the received depth values and the reference depth value by an amount that depends on the difference between the corresponding eye's view pose and reference pose. A synthesizer (303) synthesizes an output first-eye image for the first-eye view pose by view shifting the reference first-eye image, and an output second-eye image for the second-eye view pose by view shifting the reference second-eye image, based on the modified depth values.
    Type: Application
    Filed: October 23, 2019
    Publication date: December 9, 2021
    Inventor: CHRISTIAAN VAREKAMP
  • Patent number: 11189079
    Abstract: An apparatus comprises a receiver (201) for receiving an image signal comprising a number of three dimensional images representing a scene from different viewpoints, each three dimensional image comprising a mesh and a texture map, the signal further comprising a plurality of residual data texture maps for a first viewpoint being different from the different viewpoints of the number of three dimensional images, a first residual data texture map of the plurality of residual texture maps providing residual data for a texture map for the first viewpoint relative to a texture map resulting from a viewpoint shift of a first reference texture map being a texture map of a first three dimensional image of the number of three dimensional images and a second residual data texture map of the plurality of residual texture maps providing residual data for a texture map for the first viewpoint relative to a texture map resulting from a viewpoint shift of a second reference texture map being a texture map of a second three dimensional image of the number of three dimensional images.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: November 30, 2021
    Assignee: Koninklijke Philips N.V.
    Inventors: Christiaan Varekamp, Bart Kroon, Patrick Luc Els Vandewalle
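
A minimal sketch of the residual idea in patent 11189079 above: a residual data texture map is the difference between the texture actually seen at the first viewpoint and the texture predicted by view-shifting a reference texture map. The view shift itself is assumed to have been done elsewhere; only forming and applying the residual is shown, with illustrative function names.

```python
import numpy as np

def make_residual(texture_at_first_viewpoint, shifted_reference_texture):
    """Encoder side: residual between the true first-viewpoint texture and
    the prediction obtained by view-shifting a reference texture map."""
    return texture_at_first_viewpoint.astype(np.int16) - \
           shifted_reference_texture.astype(np.int16)

def apply_residual(shifted_reference_texture, residual):
    """Client side: reconstruct the first-viewpoint texture from the shifted
    reference plus the transmitted residual."""
    out = shifted_reference_texture.astype(np.int16) + residual
    return np.clip(out, 0, 255).astype(np.uint8)

target = np.random.randint(0, 255, (4, 4), dtype=np.uint8)
predicted = np.random.randint(0, 255, (4, 4), dtype=np.uint8)
res = make_residual(target, predicted)
assert np.array_equal(apply_residual(predicted, res), target)
```
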
  • Patent number: 11172183
    Abstract: An apparatus for processing a depth map comprises a receiver (203) receiving an input depth map. A first processor (205) generates a first processed depth map by processing pixels of the input depth map in a bottom to top direction. The processing of a first pixel comprises determining a depth value for the first pixel for the first processed depth map as the furthest backwards depth value of: a depth value for the first pixel in the input depth map, and a depth value determined in response to depth values in the first processed depth map for a first set of pixels being below the first pixel. The approach may improve the consistency of depth maps, and in particular for depth maps generated by combining different depth cues.
    Type: Grant
    Filed: April 26, 2018
    Date of Patent: November 9, 2021
    Assignee: Koninklijke Philips N.V.
    Inventors: Christiaan Varekamp, Wilhelmus Hendrikus Alfonsus Bruls
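
A minimal sketch of the bottom-to-top pass described in patent 11172183 above, assuming larger depth values mean further away (so the "furthest backwards" value is the maximum) and that the depth derived from the pixels below is the minimum of the three lower neighbours; both are illustrative assumptions about details the abstract leaves open.

```python
import numpy as np

def process_bottom_to_top(depth_in):
    """Each pixel becomes the further of its own input depth and the depth
    derived from the already-processed pixels directly below it, processed
    row by row from the bottom of the map to the top."""
    h, w = depth_in.shape
    out = depth_in.astype(float).copy()
    for y in range(h - 2, -1, -1):           # bottom row stays as-is
        for x in range(w):
            lo = max(x - 1, 0)
            hi = min(x + 2, w)
            below = out[y + 1, lo:hi].min()  # depth supported by the row below
            out[y, x] = max(depth_in[y, x], below)
    return out

depth = np.array([[9.0, 1.0, 9.0],
                  [5.0, 5.0, 5.0],
                  [2.0, 2.0, 2.0]])
print(process_bottom_to_top(depth))   # the centre-top pixel is pushed back
```
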
  • Patent number: 11122295
    Abstract: An apparatus comprises a store (201) storing a set of image parts and associated depth data for images representing a scene from different view poses (position and orientation). A predictability processor (203) generates predictability measures for image parts of the set of images for view poses of the scene. A predictability measure for a first image part for a first view pose is indicative of an estimate of the prediction quality for a prediction of at least part of an image for a viewport of the first view pose from a subset of image parts of the set of image parts not including the first image part. A selector (205) selects a subset of image parts of the set of image parts in response to the predictability measures, and a bitstream generator (207) generates the image bitstream comprising image data and depth data from the subset of image parts.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: September 14, 2021
    Assignee: Koninklijke Philips N.V.
    Inventors: Bart Kroon, Christiaan Varekamp, Patrick Luc Els Vandewalle
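
A minimal sketch of the selection step in patent 11122295 above, assuming the predictability measures are already available as pairwise scores and that the selector applies a simple greedy rule (only transmit parts that cannot be predicted well from already-selected parts); the data structure, threshold and greedy rule are illustrative assumptions.

```python
def select_image_parts(part_ids, predictability, threshold=0.8):
    """Greedy selection sketch: an image part is only included in the
    bitstream if it cannot be predicted well enough from the parts already
    selected.  predictability[(a, b)] is an estimate in [0, 1] of how well
    part a can be predicted from part b."""
    selected = []
    for part in part_ids:
        best = max((predictability.get((part, s), 0.0) for s in selected),
                   default=0.0)
        if best < threshold:          # poorly predictable -> must be sent
            selected.append(part)
    return selected

parts = ["left", "centre", "right"]
pred = {("centre", "left"): 0.9,      # centre is well predicted from left
        ("right", "left"): 0.4,
        ("right", "centre"): 0.5}
print(select_image_parts(parts, pred))   # ['left', 'right']
```
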
  • Publication number: 20210259780
    Abstract: The invention refers to a system (100) for affecting a subject and a method for operating the system. The system comprises a medical device, such as a biopsy gun (130), which is set either in a first state in which it cannot affect the subject based on a trigger signal or in a second state in which it can affect the subject based on the trigger signal. The state is set based on a provided location of the medical device, for instance based on the distance of the medical device to a target region. The location of the medical device is provided by a location providing unit (140) comprising, for instance, an imaging unit such as an ultrasound system (142). Thus, unintended and possibly dangerous injuries to the patient are prevented, and the safety of the patient during the interventional procedure is increased.
    Type: Application
    Filed: May 5, 2019
    Publication date: August 26, 2021
    Applicant: Koninklijke Philips N.V.
    Inventors: Pieter Jan van der ZAAG, Christiaan VAREKAMP, Bernardus HENDRIKUS WILHELMUS HENDRIKS, John PEDERSEN
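
A minimal sketch of the state rule described in publication 20210259780 above: the device only responds to its trigger signal when its provided location is within a given distance of the target region. The distance threshold and coordinate convention are illustrative assumptions.

```python
def device_enabled(device_position, target_position, safety_distance_mm=10.0):
    """Return True when the device location is close enough to the target
    region for the trigger signal to be allowed to take effect."""
    dx, dy, dz = (d - t for d, t in zip(device_position, target_position))
    distance = (dx * dx + dy * dy + dz * dz) ** 0.5
    return distance <= safety_distance_mm

print(device_enabled((10.0, 20.0, 30.0), (12.0, 21.0, 30.0)))   # True
print(device_enabled((50.0, 20.0, 30.0), (12.0, 21.0, 30.0)))   # False
```
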