Patents by Inventor Stephan Liwicki

Stephan Liwicki has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240054008
    Abstract: An apparatus for performing a task, the task being a sequence of actions performed to achieve a goal, the apparatus comprising: at least one sensor for obtaining observations of the apparatus; a controller configured to receive a control signal to move said apparatus; and a processor, said processor being configured to: receive information concerning the goal; determine the sequence of actions to reach said goal, the sequence of actions being subject to at least one constraint; and provide a control signal to said controller for the next action in said sequence of actions, wherein said processor is configured to determine the sequence of actions by processing observations received by said at least one sensor to obtain information concerning the at least one constraint and performing stochastic optimisation to determine the sequence of actions, where the at least one constraint is represented as a cost in said stochastic optimisation, the stochastic optimisation receiving an initial estimate of the next action.
    Type: Application
    Filed: August 12, 2022
    Publication date: February 15, 2024
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Harit PANDYA, Rudra POUDEL, Stephan LIWICKI
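
    A minimal sketch of the planning idea, assuming the cross-entropy method as the (unspecified) stochastic optimiser: a hypothetical obstacle constraint enters as a penalty cost, and the sampling distribution is warm-started from the initial estimate of the next action, as the abstract describes.

    ```python
    import numpy as np

    def constraint_cost(states):
        """Hypothetical constraint as a cost: states inside a unit disc
        around (5, 5) are penalised in proportion to their intrusion."""
        d = np.linalg.norm(states - np.array([5.0, 5.0]), axis=-1)
        return 100.0 * np.sum(np.maximum(0.0, 1.0 - d), axis=-1)

    def goal_cost(states, goal):
        """Distance of each sampled trajectory's final state to the goal."""
        return np.linalg.norm(states[:, -1] - goal, axis=-1)

    def plan_cem(start, goal, init_action, horizon=10, samples=256,
                 elites=32, iters=20):
        """Cross-entropy method over sequences of 2-D velocity actions."""
        mean = np.tile(init_action, (horizon, 1))     # warm start
        std = np.ones((horizon, 2))
        for _ in range(iters):
            acts = mean + std * np.random.randn(samples, horizon, 2)
            states = start + np.cumsum(acts, axis=1)  # integrate actions
            cost = goal_cost(states, goal) + constraint_cost(states)
            best = acts[np.argsort(cost)[:elites]]    # keep elite samples
            mean, std = best.mean(axis=0), best.std(axis=0) + 1e-6
        return mean[0]  # next action of the optimised sequence

    print(plan_cem(np.zeros(2), np.array([10.0, 10.0]), np.ones(2)))
    ```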
  • Patent number: 11734855
    Abstract: A computer implemented method for pose estimation of an image sensor includes receiving an omnidirectional image of a scene captured by an image sensor; using a trained neural network to generate a rotation equivariant feature map from the omnidirectional image of the scene; and determining information relating to the pose of the image sensor when capturing the scene from the rotation equivariant feature map. The rotation equivariant feature map is an SO(3)-indexed feature map.
    Type: Grant
    Filed: November 20, 2020
    Date of Patent: August 22, 2023
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Chao Zhang, Ignas Budvytis, Stephan Liwicki
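
    A toy SO(2) analogue of the SO(3) construction, to show why equivariance helps: yaw-rotating a panorama circularly shifts any per-longitude feature map, so the rotation between two views can be read off a circular cross-correlation. The `features` function is a hypothetical stand-in for the trained network.

    ```python
    import numpy as np

    def features(img):
        """Stand-in for the trained network: a per-longitude statistic is
        rotation equivariant, i.e. yaw-rotating the panorama circularly
        shifts this feature map by the same number of bins."""
        return img.mean(axis=0)

    def estimate_yaw(ref_img, query_img):
        """Yaw offset between two panoramas as the argmax of the circular
        cross-correlation of their equivariant feature maps."""
        f_ref, f_query = features(ref_img), features(query_img)
        spectrum = np.fft.fft(f_query) * np.conj(np.fft.fft(f_ref))
        return int(np.argmax(np.fft.ifft(spectrum).real))

    rng = np.random.default_rng(0)
    pano = rng.random((32, 128))           # toy equirectangular panorama
    rotated = np.roll(pano, 40, axis=1)    # yaw rotation = circular shift
    print(estimate_yaw(pano, rotated))     # -> 40 (longitude bins)
    ```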
  • Publication number: 20230118864
    Abstract: A computer-implemented method for place recognition including: obtaining information identifying an image of a first scene; identifying a plurality of pixel clusters in the image; generating a set of feature vectors associated with the pixel clusters; generating a graph of the scene; adding a first edge between a first node and a second node in response to determining that a first property associated with a first pixel cluster is similar to a second property associated with a second pixel cluster; generating a vector representation of the graph; calculating a measure of similarity between the vector representation of the graph and a reference vector representation associated with a second scene; and determining that the first scene and the second scene are associated with a same place in response to determining that the measure of similarity is less than a threshold.
    Type: Application
    Filed: March 1, 2022
    Publication date: April 20, 2023
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Chao ZHANG, Ignas BUDVYTIS, Stephan LIWICKI
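
    A minimal sketch of the pipeline, with assumptions where the abstract leaves choices open: cluster centroids stand in for the compared "property", the graph's vector representation is a neighbour-pooled mean of node features, and the similarity measure is a Euclidean distance, which is why (as in the abstract) a value below the threshold indicates the same place.

    ```python
    import numpy as np

    def scene_graph_vector(features, centroids, radius=2.0):
        """Nodes are pixel clusters carrying feature vectors; an edge joins
        clusters whose centroids lie within `radius` (the assumed 'similar
        property'). The graph vector is the mean of neighbour-pooled node
        features -- one simple choice among many."""
        n = len(features)
        adj = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                if np.linalg.norm(centroids[i] - centroids[j]) < radius:
                    adj[i, j] = adj[j, i] = 1.0
        degree = adj.sum(axis=1, keepdims=True) + 1.0
        pooled = (features + adj @ features) / degree
        return pooled.mean(axis=0)

    def same_place(vec_a, vec_b, threshold=0.5):
        """Scenes match when the distance-like measure falls below the
        threshold, mirroring the abstract's 'less than a threshold'."""
        return np.linalg.norm(vec_a - vec_b) < threshold

    rng = np.random.default_rng(1)
    feats, cents = rng.random((5, 8)), 4.0 * rng.random((5, 2))
    print(same_place(scene_graph_vector(feats, cents),
                     scene_graph_vector(feats, cents)))  # -> True
    ```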
  • Publication number: 20220164986
    Abstract: A computer implemented method for pose estimation of an image sensor, the method comprising: receiving an omnidirectional image of a scene captured by an image sensor; using a trained neural network to generate a rotation equivariant feature map from said omnidirectional image of the scene; and determining information relating to the pose of the image sensor when capturing the scene from said rotation equivariant feature map, wherein the rotation equivariant feature map is an SO(3)-indexed feature map.
    Type: Application
    Filed: November 20, 2020
    Publication date: May 26, 2022
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Chao ZHANG, Ignas BUDVYTIS, Stephan LIWICKI
  • Patent number: 11341722
    Abstract: A computer vision method for processing an omnidirectional image to extract understanding of a scene, the method comprising: receiving an omnidirectional image of a scene; mapping the omnidirectional image to a mesh on a three-dimensional polyhedron; converting the three-dimensional polyhedron into a representation of a neighbourhood structure, wherein the representation of a neighbourhood structure represents vertices of said mesh and their neighbouring vertices; and processing the representation of the neighbourhood structure with a neural network processing stage to produce an output providing understanding of the scene, wherein the neural network processing stage comprises at least one module configured to perform convolution with a filter aligned with a reference axis of the three-dimensional polyhedron.
    Type: Grant
    Filed: July 7, 2020
    Date of Patent: May 24, 2022
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Chao Zhang, Stephan Liwicki
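
    A minimal sketch of convolution over the neighbourhood structure, on a toy octahedron: each vertex's neighbours are listed in a fixed order (standing in for the patent's alignment of the filter to a reference axis of the polyhedron), and one kernel is shared across all vertices, just as an image convolution shares its kernel across pixels.

    ```python
    import numpy as np

    def mesh_conv(vertex_feats, neighbours, kernel):
        """One convolution over a mesh given as a neighbourhood structure.
        `neighbours[v]` lists the k neighbours of vertex v in a fixed
        order; `kernel` holds one weight for the centre vertex plus one
        per neighbour slot, shared across all vertices."""
        out = vertex_feats * kernel[0]
        for slot in range(neighbours.shape[1]):
            out += vertex_feats[neighbours[:, slot]] * kernel[slot + 1]
        return out

    # Octahedron: vertices 0/1 are the poles, 2-5 the equator (2,3 and
    # 4,5 are antipodal pairs), so every vertex has exactly 4 neighbours.
    neighbours = np.array([[2, 3, 4, 5], [2, 3, 4, 5], [0, 1, 4, 5],
                           [0, 1, 4, 5], [0, 1, 2, 3], [0, 1, 2, 3]])
    feats = np.random.randn(6, 3)       # 3 feature channels per vertex
    kernel = np.random.randn(5, 1)      # centre weight + 4 neighbour slots
    print(mesh_conv(feats, neighbours, kernel).shape)   # (6, 3)
    ```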
  • Patent number: 11315253
    Abstract: An image processing method for segmenting an image, the method comprising: receiving an image; processing said image with a common processing stage to produce a first feature map; inputting said first feature map to a parallel processing stage, said parallel processing stage comprising first and second parallel branches that receive the first feature map; and combining the output of the first and second branches to produce a semantic segmented image, wherein the common processing stage comprises a neural network, the neural network having at least one separable convolution module configured to perform separable convolution and downsample the image to produce the first feature map, and said first branch comprises a neural network comprising at least one separable convolution module configured to perform separable convolution.
    Type: Grant
    Filed: January 14, 2020
    Date of Patent: April 26, 2022
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Rudra Prasad Poudel Karmatha, Stephan Liwicki, Roberto Cipolla
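
    A minimal PyTorch sketch of this architecture's shape, with illustrative layer sizes that are assumptions rather than the patented configuration: a common downsampling stage of depthwise-separable convolutions feeds a deeper (context) branch and a shallower (detail) branch, and their outputs are fused into per-pixel class logits.

    ```python
    import torch
    import torch.nn as nn

    def separable(cin, cout, stride=1):
        """Depthwise-separable convolution: a per-channel 3x3 depthwise
        conv followed by a 1x1 pointwise conv."""
        return nn.Sequential(
            nn.Conv2d(cin, cin, 3, stride, 1, groups=cin, bias=False),
            nn.BatchNorm2d(cin), nn.ReLU(inplace=True),
            nn.Conv2d(cin, cout, 1, bias=False),
            nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

    class TwoBranchSeg(nn.Module):
        def __init__(self, classes=19):
            super().__init__()
            self.common = nn.Sequential(separable(3, 32, 2),
                                        separable(32, 64, 2))  # shared stage
            self.deep = nn.Sequential(separable(64, 96, 2),
                                      separable(96, 64))       # context branch
            self.shallow = separable(64, 64)                   # detail branch
            self.head = nn.Conv2d(64, classes, 1)

        def forward(self, x):
            shared = self.common(x)
            deep = nn.functional.interpolate(
                self.deep(shared), size=shared.shape[2:],
                mode='bilinear', align_corners=False)
            fused = deep + self.shallow(shared)                # combine branches
            return nn.functional.interpolate(
                self.head(fused), size=x.shape[2:],
                mode='bilinear', align_corners=False)

    print(TwoBranchSeg()(torch.randn(1, 3, 64, 64)).shape)  # (1, 19, 64, 64)
    ```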
  • Publication number: 20220075383
    Abstract: A computer-implemented method for training an agent in a first context including an entity and an environment of the entity, to allow an apparatus to perform a navigation task in a second context comprising the apparatus and a physical environment of the apparatus, the apparatus adapted to receive images of the physical environment of the apparatus and comprising a steering device adapted to control the direction of the apparatus, the method comprising: obtaining one or more navigation tasks comprising: generating a navigation task; scoring the navigation task using a machine-learned model trained to estimate the easiness of tasks; in response to the score satisfying a selection criterion, selecting the navigation task as one of the one or more navigation tasks; and training the agent using a reinforcement learning method comprising attempting to perform, by the entity, the one or more navigation tasks using images of the environment of the entity.
    Type: Application
    Filed: February 24, 2021
    Publication date: March 10, 2022
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Steven MORAD, Roberto MECCA, Rudra POUDEL, Stephan LIWICKI, Roberto CIPOLLA
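
    A minimal sketch of the task-selection loop. The `easiness` function below is a hand-written proxy; in the patent this score comes from a machine-learned model, and the exact selection criterion is left open (a band of moderate easiness is assumed here, so selected tasks are neither trivial nor hopeless for the current agent).

    ```python
    import random

    def easiness(task):
        """Proxy for the learned easiness model: shorter start-to-goal
        Manhattan distances score as easier."""
        (sx, sy), (gx, gy) = task
        return 1.0 / (1.0 + abs(gx - sx) + abs(gy - sy))

    def generate_task(grid=10):
        """A navigation task is a (start, goal) pair on a toy grid."""
        start = (random.randrange(grid), random.randrange(grid))
        goal = (random.randrange(grid), random.randrange(grid))
        return start, goal

    def curriculum(n_tasks, lo=0.1, hi=0.5):
        """Generate candidate tasks and keep those whose easiness score
        satisfies the (assumed) selection criterion."""
        tasks = []
        while len(tasks) < n_tasks:
            task = generate_task()
            if lo <= easiness(task) <= hi:
                tasks.append(task)
        return tasks

    for task in curriculum(3):
        print(task)   # these would be handed to the RL training loop
    ```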
  • Publication number: 20210012567
    Abstract: A computer vision method for processing an omnidirectional image to extract understanding of a scene, the method comprising: receiving an omnidirectional image of a scene; mapping the omnidirectional image to a mesh on a three-dimensional polyhedron; converting the three-dimensional polyhedron into a representation of a neighbourhood structure, wherein the representation of a neighbourhood structure represents vertices of said mesh and their neighbouring vertices; and processing the representation of the neighbourhood structure with a neural network processing stage to produce an output providing understanding of the scene, wherein the neural network processing stage comprises at least one module configured to perform convolution with a filter aligned with a reference axis of the three-dimensional polyhedron.
    Type: Application
    Filed: July 7, 2020
    Publication date: January 14, 2021
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Chao ZHANG, Stephan LIWICKI
  • Patent number: 10769744
    Abstract: An image processing method for segmenting an image, the method comprising: receiving a first image; producing a second image from said first image, wherein said second image is a lower resolution representation of said first image; processing said first image with a first processing stage to produce a first feature map; processing said second image with a second processing stage to produce a second feature map; and combining the first feature map with the second feature map to produce a semantic segmented image; wherein the first processing stage comprises a first neural network comprising at least one separable convolution module configured to perform separable convolution and said second processing stage comprises a second neural network comprising at least one separable convolution module configured to perform separable convolution, the number of layers in the first neural network being smaller than the number of layers in the second neural network.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: September 8, 2020
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Rudra Prasad Poudel Karmatha, Ujwal Bonde, Stephan Liwicki, Christopher Zach
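
    The arithmetic that motivates separable convolutions in both processing stages: a standard k×k convolution needs k²·Cin·Cout weights, while the depthwise-plus-pointwise factorisation needs only k²·Cin + Cin·Cout (bias terms ignored), approaching a k² = 9× saving for 3×3 kernels as channel counts grow.

    ```python
    def conv_params(cin, cout, k=3):
        return k * k * cin * cout          # standard convolution

    def separable_params(cin, cout, k=3):
        return k * k * cin + cin * cout    # depthwise + pointwise

    for c in (64, 128, 256):
        std, sep = conv_params(c, c), separable_params(c, c)
        print(f"C={c}: {std} vs {sep} weights ({std / sep:.1f}x fewer)")
    ```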
  • Publication number: 20200234447
    Abstract: An image processing method for segmenting an image, the method comprising: receiving an image; processing said image with a common processing stage to produce a first feature map; inputting said first feature map to a parallel processing stage, said parallel processing stage comprising first and second parallel branches that receive the first feature map; and combining the output of the first and second branches to produce a semantic segmented image, wherein the common processing stage comprises a neural network, the neural network having at least one separable convolution module configured to perform separable convolution and downsample the image to produce the first feature map, and said first branch comprises a neural network comprising at least one separable convolution module configured to perform separable convolution.
    Type: Application
    Filed: January 14, 2020
    Publication date: July 23, 2020
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Rudra Prasad Poudel KARMATHA, Stephan LIWICKI, Roberto CIPOLLA
  • Publication number: 20200134772
    Abstract: An image processing method for segmenting an image, the method comprising: receiving a first image; producing a second image from said first image, wherein said second image is a lower resolution representation of said first image; processing said first image with a first processing stage to produce a first feature map; processing said second image with a second processing stage to produce a second feature map; and combining the first feature map with the second feature map to produce a semantic segmented image; wherein the first processing stage comprises a first neural network comprising at least one separable convolution module configured to perform separable convolution and said second processing stage comprises a second neural network comprising at least one separable convolution module configured to perform separable convolution, the number of layers in the first neural network being smaller than the number of layers in the second neural network.
    Type: Application
    Filed: October 31, 2018
    Publication date: April 30, 2020
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Rudra Prasad POUDEL KARMATHA, Ujwal BONDE, Stephan LIWICKI, Christopher ZACH
  • Patent number: 10460471
    Abstract: A camera pose estimation method determines the translation and rotation between a first camera pose and a second camera pose. Features are extracted from a first image captured at the first position and a second image captured at the second position, the extracted features comprising location, scale information and a descriptor, the descriptor comprising information that allows a feature from the first image to be matched with a feature from the second image. Features are matched between the first image and the second image. The depth ratio of matched features is determined from the scale information. n matched features are selected, where at least one of the matched features is selected with both the depth ratio and location. The translation and rotation are calculated between the first camera pose and the second camera pose using the selected matched features with depth ratio derived from the scale information.
    Type: Grant
    Filed: July 18, 2017
    Date of Patent: October 29, 2019
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Stephan Liwicki, Christopher Zach
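
    The key observation: a feature's detected scale shrinks roughly in proportion to its depth, so the depth ratio d1/d2 of a match follows from its two detected scales alone, without knowing either depth. A minimal sketch with hypothetical matched scale pairs; in the method this ratio supplies an extra constraint per correspondence when solving for rotation and translation.

    ```python
    def depth_ratios(matches):
        """`matches` is a list of hypothetical (scale_1, scale_2) pairs for
        features matched between the two images. With scale inversely
        proportional to depth, d1 / d2 = scale_2 / scale_1."""
        return [s2 / s1 for s1, s2 in matches]

    # A feature seen at scale 4.0 in image 1 but 2.0 in image 2 appears
    # half as large in image 2, so it is about twice as far away there.
    print(depth_ratios([(4.0, 2.0), (3.0, 3.0)]))   # [0.5, 1.0]
    ```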
  • Publication number: 20190026916
    Abstract: A camera pose estimation method for determining the translation and rotation between a first camera pose and a second camera pose, the method comprising: extracting features from a first image captured at the first position and a second image captured at the second position, the extracted features comprising location, scale information and a descriptor, the descriptor comprising information that allows a feature from the first image to be matched with a feature from the second image; matching features between the first image and the second image to produce matched features; determining the depth ratio of matched features from the scale information, wherein the depth ratio is the ratio of the depth of a matched feature from the first position to the depth of the matched feature from the second position; selecting n matched features, where at least one of the matched features is selected with both the depth ratio and location; and calculating the translation and rotation between the first camera pose and the second camera pose using the selected matched features with depth ratio derived from the scale information.
    Type: Application
    Filed: July 18, 2017
    Publication date: January 24, 2019
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Stephan Liwicki, Christopher Zach
  • Publication number: 20150254527
    Abstract: A method for comparing a plurality of objects, the method comprising representing at least one feature of each object as a 3D ball, the radius of each ball representing the scale of the feature with respect to the frame of the object and the position of each ball representing the translation of the feature in the frame of the object, the method further comprising comparing the objects by comparing the scales and translations as represented by the 3D balls to determine similarity between objects and their poses.
    Type: Application
    Filed: August 26, 2014
    Publication date: September 10, 2015
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Minh-Tri Pham, Frank Perbet, Bjorn Dietmar Rafael Stenger, Riccardo Gherardi, Oliver Woodford, Sam Johnson, Roberto Cipolla, Stephan Liwicki
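
    A minimal sketch of the ball representation and one possible comparison; the abstract leaves the matching and scoring open, so the greedy nearest-ball matching and the combined centre-plus-radius distance below are assumptions.

    ```python
    import numpy as np

    def ball(position, scale):
        """A feature as a 3D ball: centre = translation of the feature in
        the object's frame, radius = its scale."""
        return np.array([*position, scale], dtype=float)

    def ball_distance(a, b):
        """Disagreement in translation (centre offset) plus scale (radius
        difference) between two balls."""
        return np.linalg.norm(a[:3] - b[:3]) + abs(a[3] - b[3])

    def object_similarity(balls_a, balls_b):
        """Greedy nearest-ball matching: a small total distance means the
        objects, and their poses, are alike."""
        return sum(min(ball_distance(a, b) for b in balls_b)
                   for a in balls_a)

    obj1 = [ball((0, 0, 0), 1.0), ball((2, 0, 0), 0.5)]
    obj2 = [ball((0, 0, 0.1), 1.0), ball((2, 0, 0), 0.6)]
    print(object_similarity(obj1, obj2))   # ~0.2: similar objects
    ```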