Patents by Inventor Andrew James Bigos

Andrew James Bigos has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240144436
    Abstract: A method and system for improving the resolution of an image frame is described. The image frame is part of a data stream; the data stream may be a video game, for example. The method comprises the following steps: sending an encoded version of the image frame from a server to a client device via a first communication channel between the server and the client device; detecting a reduced bandwidth of the first communication channel; and, in response to detecting the reduced bandwidth, determining data to be sent to the client device via a second communication channel. The second communication channel is separate from the first communication channel and the data relates to the image frame. The determined data is sent via the second communication channel.
    Type: Application
    Filed: October 16, 2023
    Publication date: May 2, 2024
    Applicant: Sony Interactive Entertainment Europe Limited
    Inventors: Andrew James Bigos, Daniel Montero Motilla
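The fallback behaviour described in this publication can be illustrated with a minimal sketch. The Channel class, the send method and the bandwidth threshold below are illustrative assumptions rather than the patent's actual implementation; the sketch only shows the idea of routing frame-related data over a separate second channel when the first channel's bandwidth drops.

```python
# Minimal sketch of the dual-channel fallback idea; all names are illustrative.

BANDWIDTH_THRESHOLD_KBPS = 2_000  # assumed cut-off for "reduced bandwidth"

class Channel:
    """Stand-in for a network channel with a measurable bandwidth."""
    def __init__(self, name, bandwidth_kbps):
        self.name = name
        self.bandwidth_kbps = bandwidth_kbps

    def send(self, payload):
        print(f"[{self.name}] sending {len(payload)} bytes")

def stream_frame(encoded_frame, frame_related_data, primary, secondary):
    # The encoded frame always travels over the first communication channel.
    primary.send(encoded_frame)
    # On reduced bandwidth, determine data relating to the same frame and
    # send it over the separate second communication channel.
    if primary.bandwidth_kbps < BANDWIDTH_THRESHOLD_KBPS:
        secondary.send(frame_related_data)

# Usage: a low-bandwidth primary channel triggers the secondary path.
stream_frame(b"\x00" * 1200, b"\x01" * 300,
             Channel("primary", bandwidth_kbps=800),
             Channel("secondary", bandwidth_kbps=5_000))
```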
  • Patent number: 11908066
    Abstract: An image rendering method for rendering a pixel of a virtual scene at a viewpoint includes: downloading a machine learning system corresponding to a current or anticipated state of an application determining the virtual scene to be rendered from among a plurality of machine learning systems corresponding to a plurality of states of the application; providing a position and a direction based on the viewpoint to the machine learning system previously trained to predict a factor; combining the predicted factor from the machine learning system with a distribution function that characterises an interaction of light with a predetermined surface to generate the pixel value corresponding to an illuminated first element of the virtual scene at the position; and incorporating the pixel value into a rendered image for display.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: February 20, 2024
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Marina Villanueva Barreiro, Andrew James Bigos, Gilles Christian Rainer, Fabio Cappello, Timothy Edward Bradley
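A minimal sketch of the core combination step (a predicted factor multiplied by a distribution function such as a diffuse BRDF) is given below. The per-state model table, the lambert_brdf function and the shade_pixel signature are assumptions for illustration; the patent does not prescribe these names or this particular BRDF.

```python
import math

# Hypothetical per-state "machine learning systems": plain functions mapping a
# (position, view direction) pair to a predicted shading factor.
MODELS_BY_STATE = {
    "level_1": lambda position, view_dir: 0.8,
    "level_2": lambda position, view_dir: 0.5,
}

def lambert_brdf(normal, light_dir, albedo=0.7):
    # Simple diffuse distribution function used as a placeholder.
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return albedo * n_dot_l / math.pi

def shade_pixel(app_state, position, view_dir, normal, light_dir):
    model = MODELS_BY_STATE[app_state]               # model for the current state
    factor = model(position, view_dir)               # predicted factor
    return factor * lambert_brdf(normal, light_dir)  # combine with the BRDF

print(shade_pixel("level_1", (0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
                  (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))
```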
  • Publication number: 20230325977
    Abstract: A computer-implemented method for image upscaling at a client device is provided. The method comprises: receiving, from a server device, an image which is one of a plurality of images forming an image stream, wherein the image comprises a plurality of image portions; determining, from the plurality of image portions, a first group of one or more image portions to which to apply a first image upscaling process from a plurality of available image upscaling processes; selecting the first group of image portions based on the determination; and applying the first image upscaling process to the first group of image portions. The upscaling process may be an image upscaling process such as super resolution.
    Type: Application
    Filed: March 30, 2023
    Publication date: October 12, 2023
    Applicant: Sony Interactive Entertainment Europe Limited
    Inventors: Daniel Montero MOTILLA, Andrew James BIGOS
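The portion-wise selection of an upscaling process might be sketched as follows; the detail-based decision rule and the two upscaler stubs are assumptions, with super resolution standing in for the "first image upscaling process".

```python
def upscale_bilinear(portion_id):
    return f"bilinear({portion_id})"        # cheap fallback process

def upscale_super_resolution(portion_id):
    return f"super_res({portion_id})"       # the "first" upscaling process

def needs_super_resolution(portion):
    # Placeholder criterion, e.g. based on how much detail the portion holds.
    return portion["detail"] > 0.5

def upscale_image(portions):
    # Determine the first group of portions, then apply the first process to
    # that group and a cheaper process to everything else.
    first_group = {p["id"] for p in portions if needs_super_resolution(p)}
    return [upscale_super_resolution(p["id"]) if p["id"] in first_group
            else upscale_bilinear(p["id"]) for p in portions]

portions = [{"id": 0, "detail": 0.9}, {"id": 1, "detail": 0.2},
            {"id": 2, "detail": 0.7}, {"id": 3, "detail": 0.1}]
print(upscale_image(portions))
```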
  • Publication number: 20230281912
    Abstract: A method of generating an image of a scene from a target viewpoint is provided. The method comprises capturing a plurality of multi-plane images of a virtual scene, and determining for each multi-plane image, whether a frontmost clipping plane pair defined for that multi-plane image intersects a virtual object in the virtual scene. Responsive to a positive determination for a respective clipping plane pair, the corresponding image data in the corresponding image plane is identified and assigned an identifier, thus generating a modified version of the corresponding multi-plane image. Camera information is obtained for the one or more virtual cameras that captured the multi-plane images and the target virtual camera. An output image is then generated by combining at least one of the modified multi-plane images with another of the multi-plane images in accordance with the obtained camera information. A corresponding system is also provided.
    Type: Application
    Filed: July 23, 2021
    Publication date: September 7, 2023
    Applicant: Sony Interactive Entertainment Europe Limited
    Inventor: Andrew James Bigos
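A much-simplified sketch of the clipping-plane test is shown below: each multi-plane image is reduced to a list of depth slices, an object to a depth interval, and the frontmost slice is tagged with an identifier when the two overlap. The data structures and the depth-interval test are assumptions; the patent operates on full image planes and camera information.

```python
def intersects(plane_pair, obj_interval):
    near, far = plane_pair
    obj_near, obj_far = obj_interval
    return obj_near < far and obj_far > near   # overlapping depth ranges

def tag_frontmost_slice(mpi_slices, obj_interval, identifier="OBJ"):
    # mpi_slices: list of dicts {"planes": (near, far), "id": None}
    frontmost = mpi_slices[0]
    if intersects(frontmost["planes"], obj_interval):
        frontmost = dict(frontmost, id=identifier)   # modified multi-plane image
    return [frontmost] + mpi_slices[1:]

mpi = [{"planes": (0.1, 1.0), "id": None}, {"planes": (1.0, 5.0), "id": None}]
print(tag_frontmost_slice(mpi, obj_interval=(0.5, 2.0)))
```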
  • Patent number: 11654633
    Abstract: A method of enhancing a 3D printed model includes generating successive visualisations of a virtual environment comprising a target object, receiving a user input indicating selection of a visualisation of the target object at a particular moment in time, generating visualisation data to enable subsequent visualisation of at least the target object as at the particular moment in time, causing the visualisation data to be stored at a unique location, generating 3D print model data for 3D printing of the target object, and associating data identifying the unique location of the stored visualisation data with the 3D print model data.
    Type: Grant
    Filed: July 4, 2018
    Date of Patent: May 23, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Andrew James Bigos
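The association between stored visualisation data and the 3D print model data could be as simple as the sketch below; the viz:// location scheme, the in-memory store and the package layout are illustrative assumptions.

```python
import uuid

def store_visualisation(snapshot_bytes, store):
    # Persist the visualisation data at a unique location and return that location.
    location = f"viz://{uuid.uuid4()}"
    store[location] = snapshot_bytes
    return location

def build_print_package(print_model_data, snapshot_bytes, store):
    location = store_visualisation(snapshot_bytes, store)
    # Associate data identifying the unique location with the 3D print model data.
    return {"model": print_model_data, "visualisation_location": location}

store = {}
print(build_print_package({"mesh": "target_object.stl"}, b"frame@t=12.3s", store))
```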
  • Publication number: 20230131366
    Abstract: The present disclosure relates to a computer-implemented method for completing an image, the method comprising the steps of dividing data of an image to be completed into a plurality of image portions. The method entails applying a first filling process to fill a first image portion comprising a first hole, the first hole associated with a first quantity and/or a first quality; and applying a second filling process to fill a second image portion comprising a second hole, the second hole associated with a second quantity different to the first quantity and/or a second quality different to the first quality, the second process being different to the first process. The method then includes combining the filled first and second image portions to complete the image.
    Type: Application
    Filed: April 23, 2021
    Publication date: April 27, 2023
    Applicant: Sony Interactive Entertainment Europe Limited
    Inventors: Andrew James Bigos, Daniel Goldman, Cristian Craciun
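The two-process split could look like the sketch below, where hole area stands in for the "quantity and/or quality" that distinguishes the two filling processes; the threshold and the two fill stubs are assumptions.

```python
def fill_small_hole(portion_id):
    return f"diffusion_fill({portion_id})"   # cheap first filling process

def fill_large_hole(portion_id):
    return f"inpainting_fill({portion_id})"  # heavier second filling process

def complete_image(portions, area_threshold=100):
    filled = []
    for p in portions:                       # p: {"id": ..., "hole_area": ...}
        if p["hole_area"] <= area_threshold:
            filled.append(fill_small_hole(p["id"]))
        else:
            filled.append(fill_large_hole(p["id"]))
    return filled                            # combined to complete the image

print(complete_image([{"id": 0, "hole_area": 40}, {"id": 1, "hole_area": 400}]))
```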
  • Patent number: 11529762
    Abstract: A method of 3D print modelling includes: obtaining a target virtual object for 3D printing, evaluating the target virtual object for 3D printing with respect to one or more printability criteria to detect whether the 3D printed model would have one or more predetermined undesirable physical characteristics, and if so, generating 3D printable model data defining a transparent matrix to be printed surrounding the 3D printed model of the target virtual object.
    Type: Grant
    Filed: July 4, 2018
    Date of Patent: December 20, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Andrew James Bigos, Norihiro Nagai
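As a sketch, the printability check and the conditional transparent matrix might look like the following; the thinness criterion and the data layout are assumptions standing in for the patent's "predetermined undesirable physical characteristics".

```python
def is_too_thin(part, min_thickness_mm=1.0):
    return part["thickness_mm"] < min_thickness_mm

def prepare_for_printing(target_object):
    # Evaluate the printability criteria; if an undesirable characteristic is
    # detected, define a transparent matrix to be printed around the model.
    fragile = any(is_too_thin(part) for part in target_object["parts"])
    model_data = {"object": target_object["name"]}
    if fragile:
        model_data["surround"] = "transparent_matrix"
    return model_data

obj = {"name": "sword", "parts": [{"thickness_mm": 0.4}, {"thickness_mm": 3.0}]}
print(prepare_for_printing(obj))
```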
  • Publication number: 20220309736
    Abstract: A method of generating a training set for a neural precomputed light model includes: generating a plurality of candidate viewpoints of a scene, culling candidate viewpoints according to a probability that depends upon a response of the surface of the scene to light at a surface position in the scene corresponding to the viewpoint, and generating training images at the remaining viewpoints.
    Type: Application
    Filed: March 21, 2022
    Publication date: September 29, 2022
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Andrew James Bigos, Gilles Christian Rainer, Cristian Craciun
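The probabilistic culling step might be sketched as below; the surface-response score attached to each candidate viewpoint and the keep-with-probability rule are assumptions about one way such a cull could be driven.

```python
import random

random.seed(0)

def build_training_viewpoints(candidates):
    kept = []
    for vp in candidates:
        # Keep-probability depends on how strongly the surface seen from this
        # viewpoint responds to light; unresponsive views are usually culled.
        if random.random() < vp["surface_response"]:
            kept.append(vp)
    return kept   # training images would be rendered at the remaining viewpoints

candidates = [{"id": i, "surface_response": r}
              for i, r in enumerate([0.9, 0.2, 0.7, 0.1])]
print([vp["id"] for vp in build_training_viewpoints(candidates)])
```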
  • Publication number: 20220309741
    Abstract: An image rendering method for an entertainment device for rendering a pixel at a viewpoint includes: for a first element of a virtual scene, having a predetermined surface at a position within that scene, obtaining a machine learning system previously trained to predict a factor that, when combined with a distribution function that characterises an interaction of light with the predetermined surface, generates a pixel value corresponding to the first element of the virtual scene as illuminated at the position; providing the position and a direction based on the viewpoint to the machine learning system; combining the predicted factor from the machine learning system with the distribution function to generate the pixel value corresponding to the illuminated first element of the virtual scene at the position; and incorporating the pixel value into a rendered image for display; where the obtaining step comprises: identifying a current or anticipated state of an application determining the virtual scene to be rendered, and downloading the machine learning system corresponding to that state from among a plurality of machine learning systems corresponding to a plurality of states of the application.
    Type: Application
    Filed: March 21, 2022
    Publication date: September 29, 2022
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Marina Villanueva Barreiro, Andrew James Bigos, Gilles Christian Rainer, Fabio Cappello, Timothy Edward Bradley
  • Publication number: 20220309730
    Abstract: An image rendering method includes: selecting at least a first trained machine learning model from among a plurality of machine learning models, the machine learning model having been trained to generate data contributing to a render of at least a part of an image, where the at least first trained machine learning model has an architecture-based learning capability that is responsive to at least a first aspect of a virtual environment for which it is trained to generate the data, and using the at least first trained machine learning model to generate data contributing to a render of at least a part of an image.
    Type: Application
    Filed: March 22, 2022
    Publication date: September 29, 2022
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Fabio Cappello, Matthew Sanders, Andrew James Bigos
  • Publication number: 20220309740
    Abstract: An image rendering method for rendering a pixel at a viewpoint includes, for a first element of a virtual scene, having a predetermined surface at a position within that scene; providing the position and a direction based on the viewpoint to a machine learning system previously trained to predict a factor that, when combined with a distribution function that characterises an interaction of light with the predetermined surface, generates a pixel value corresponding to the first element of the virtual scene as illuminated at the position, combining the predicted factor from the machine learning system with the distribution function to generate the pixel value corresponding to the illuminated first element of the virtual scene at the position, and incorporating the pixel value into a rendered image for display, where the machine learning system was previously trained with a training set based on images comprising multiple lighting conditions.
    Type: Application
    Filed: March 18, 2022
    Publication date: September 29, 2022
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Fabio Cappello, Matthew Sanders, Marina Villanueva Barreiro, Timothy Edward Bradley, Andrew James Bigos
  • Publication number: 20220309745
    Abstract: An image rendering method for rendering a pixel at a viewpoint includes: for a first element of a virtual scene, having a predetermined surface at a position within that scene, evaluating whether to render a pixel corresponding to the first element using a machine learning system having been trained to output a value representative of the lighting of the predetermined surface at the position, or using an alternative rendering approach, and rendering the pixel according to which of the machine learning system and the alternative rendering approach is chosen.
    Type: Application
    Filed: March 21, 2022
    Publication date: September 29, 2022
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Andrew James Bigos, Sahin Serdar Kocdemir
  • Publication number: 20220309742
    Abstract: An image rendering method for rendering a pixel at a viewpoint includes: for a first element of a virtual scene, having a predetermined surface at a position within that scene, evaluating whether to render a pixel corresponding to the first element using at least a first machine learning system having been trained to generate an illuminance output representative of the lighting of the predetermined surface at the position, or using an alternative rendering approach, and rendering the pixel according to which of the at least first machine learning system and the alternative rendering approach is chosen in the evaluating step; where the evaluating step comprises obtaining a confidence value from the at least first machine learning system indicative of the accuracy of the illuminance output, the machine learning system having been trained to generate the confidence value in conjunction with the illuminance output, and the rendering step comprises using the alternative rendering approach if the confidence value does not meet a predetermined threshold.
    Type: Application
    Filed: March 21, 2022
    Publication date: September 29, 2022
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Andrew James Bigos, Sahin Serdar Kocdemir
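The confidence-gated fallback reads naturally as a sketch; the threshold value and the two stub renderers below are assumptions (the abstract only requires that the alternative approach is used when the confidence value is insufficient).

```python
CONFIDENCE_THRESHOLD = 0.75   # assumed cut-off

def ml_illuminance(position, view_dir):
    # Placeholder network output: an illuminance estimate plus the confidence
    # value the system was trained to produce alongside it.
    return 0.62, 0.40

def alternative_render(position, view_dir):
    return 0.58               # stand-in for a conventional rendering approach

def render_pixel(position, view_dir):
    illuminance, confidence = ml_illuminance(position, view_dir)
    if confidence < CONFIDENCE_THRESHOLD:
        return alternative_render(position, view_dir)   # fall back
    return illuminance

print(render_pixel((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```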
  • Publication number: 20220309735
    Abstract: An image rendering method for a virtual scene includes, for a plurality of IDs, generating a respective mask identifying elements of the scene that are associated with a respective ID; for the resulting plurality of masks, dividing a respective mask into a plurality of tiles; and discarding tiles that do not identify any image elements; for the resulting plurality of remaining tiles, selecting a respective trained machine learning model from among a plurality of machine learning models, the respective machine learning model having been trained to generate data contributing to a render of at least a part of an image, based upon elements of the scene associated with the same respective ID as the elements identified in the mask from which the respective tile was divided; and using the respective trained machine learning model to generate data contributing to a render of at least a part of the image based upon input data at least for the identified elements in the respective tile.
    Type: Application
    Filed: March 18, 2022
    Publication date: September 29, 2022
    Applicant: Sony Interactive Entertainment Inc.
    Inventor: Andrew James Bigos
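A toy version of the mask-tile-dispatch pipeline is sketched below: a per-ID mask is split into fixed-size tiles, empty tiles are discarded, and the remaining tiles are sent to the model associated with that ID. The tile size, the ID map layout and the per-ID model table are assumptions.

```python
TILE = 2   # tile size in pixels (illustrative)

def tiles_of(mask, size=TILE):
    height, width = len(mask), len(mask[0])
    for y in range(0, height, size):
        for x in range(0, width, size):
            yield (y, x), [row[x:x + size] for row in mask[y:y + size]]

def render_with_models(id_map, models):
    contributions = []
    ids = {v for row in id_map for v in row if v is not None}
    for element_id in ids:
        # Respective mask identifying the elements associated with this ID.
        mask = [[1 if v == element_id else 0 for v in row] for row in id_map]
        for origin, tile in tiles_of(mask):
            if not any(any(row) for row in tile):
                continue                       # discard tiles with no elements
            contributions.append((element_id, origin, models[element_id](tile)))
    return contributions

id_map = [[1, 1, None, None],
          [1, 1, 2,    2]]
models = {1: lambda tile: "wood_shading", 2: lambda tile: "metal_shading"}
print(render_with_models(id_map, models))
```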
  • Publication number: 20220309744
    Abstract: An image rendering method for rendering a pixel at a viewpoint includes: for a first element of a virtual scene, having a predetermined surface at a position within that scene, providing the position and a direction based on the viewpoint to a machine learning system previously trained to predict a factor that, when combined with a distribution function that characterises an interaction of light with the predetermined surface, generates a pixel value corresponding to the first element of the virtual scene as illuminated at the position, combining the predicted factor from the machine learning system with the distribution function to generate the pixel value corresponding to the illuminated first element of the virtual scene at the position, and incorporating the pixel value into a rendered image for display.
    Type: Application
    Filed: March 21, 2022
    Publication date: September 29, 2022
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Andrew James Bigos, Gilles Christian Rainer
  • Patent number: 11315309
    Abstract: An apparatus includes: an object data storage section that stores polygon identification data for polygons of an object to be displayed; a reference image data storage section that stores data of reference images each representing an image when a space including the object to be displayed is viewed from one of a plurality of prescribed reference viewing points, and further stores polygon identification data corresponding to each reference image; a viewing point information acquisition section that acquires information relating to a viewing point; a projection section that represents on a plane of a display image the position and shape of an image of the object when the space is viewed from the viewing point; a pixel value determination section that determines the values of pixels constituting the image of the object in the display image, using the values of the pixels representing the same image in one or more of the plurality of reference images; and an output section that outputs the data of the display image.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: April 26, 2022
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Jason Gordon Doig, Andrew James Bigos
  • Patent number: 11097486
    Abstract: A method of 3D print modelling includes: generating a point cloud representation of a target virtual object for 3D printing, generating a voxel representation of the point cloud by adding a respective first voxel when a respective first point from the point cloud representation is located in the notional volume occupied by that first voxel, assigning a first colour associated with the first point to the faces of the first voxel, and if a second point from the point cloud representation is located within the volume of the first voxel, assigning a second colour associated with the second point to a respective face of the new voxel corresponding to the normal of the second point; and thickening the voxel representation of the point cloud by generating a duplicate voxel adjacent to the first voxel along a first axis passing through two opposing faces of the first voxel respectively having the first and second colours.
    Type: Grant
    Filed: July 4, 2018
    Date of Patent: August 24, 2021
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Andrew James Bigos
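A reduced sketch of the voxelisation and per-face colouring is given below (the final thickening/duplication step is omitted for brevity). The voxel size, face naming and dominant-axis normal test are assumptions.

```python
from collections import defaultdict

VOXEL_SIZE = 1.0
FACES = ["+x", "-x", "+y", "-y", "+z", "-z"]

def voxel_of(point):
    return tuple(int(c // VOXEL_SIZE) for c in point["pos"])

def dominant_face(normal):
    axis = max(range(3), key=lambda i: abs(normal[i]))
    return ("+" if normal[axis] >= 0 else "-") + "xyz"[axis]

def voxelise(points):
    voxels = defaultdict(dict)   # voxel coordinate -> {face: colour}
    for p in points:
        v = voxel_of(p)
        if not voxels[v]:
            # First point in this voxel: its colour is assigned to every face.
            voxels[v] = {face: p["colour"] for face in FACES}
        else:
            # A later point in the same voxel recolours only the face that its
            # normal points towards.
            voxels[v][dominant_face(p["normal"])] = p["colour"]
    return dict(voxels)

points = [
    {"pos": (0.2, 0.2, 0.2), "colour": "red",  "normal": (0.0, 0.0, 1.0)},
    {"pos": (0.8, 0.4, 0.6), "colour": "blue", "normal": (0.0, 0.0, -1.0)},
]
print(voxelise(points))
```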
  • Patent number: 11080922
    Abstract: A method of generating a model for 3D printing includes selecting a target object within a virtual environment; sampling the target object to form a point cloud, the point cloud comprising points corresponding to an outer surface of the target object and also one or more internal features of the target object; rendering the point cloud from a plurality of viewpoints using voxels in place of the points in the point cloud; detecting which voxels and hence which points of the cloud were rendered over the plurality of renders; forming a surface-only point cloud comprising those points of the cloud that were rendered; and generating a model for 3D printing based on the surface-only point cloud.
    Type: Grant
    Filed: November 7, 2018
    Date of Patent: August 3, 2021
    Assignees: Sony Interactive Entertainment Inc., Sony Interactive Entertainment Europe Limited
    Inventor: Andrew James Bigos
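A very rough stand-in for the render-and-detect step is sketched below: instead of actually rasterising voxels, each axis-aligned "view" keeps only the nearest point per screen cell, and points never kept from any view (internal features) are dropped. The cell size, the fixed set of views and the tuple-based point cloud are all assumptions.

```python
import math

def visible_points(points,
                   view_axes=((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0))):
    seen = set()
    for axis in view_axes:
        nearest = {}   # screen cell -> (depth, point index)
        for idx, p in enumerate(points):
            depth = -sum(a * c for a, c in zip(axis, p))
            cell = tuple(math.floor(c) for i, c in enumerate(p) if axis[i] == 0)
            if cell not in nearest or depth < nearest[cell][0]:
                nearest[cell] = (depth, idx)   # this point was "rendered"
        seen.update(idx for _, idx in nearest.values())
    return [points[i] for i in sorted(seen)]   # the surface-only point cloud

corners = [(x, y, z) for x in (0.0, 1.0) for y in (0.0, 1.0) for z in (0.0, 1.0)]
cloud = corners + [(0.5, 0.5, 0.5)]    # the centre point is an internal feature
print(len(visible_points(cloud)))      # 8: the internal point was never rendered
```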
  • Patent number: 10974459
    Abstract: A flood-fill method for parallel implementation includes the steps of receiving an image including first elements having a flood fill source colour to be replaced with a flood fill target colour, and second elements not having the flood fill source colour; dividing the image according to a hierarchical space-partitioning scheme to form cells at a plurality of levels in the hierarchy; detecting the occupancy of each cell by second elements; for any cell having no occupancy by second elements, setting a depth elevation value for child cells of that cell to be one greater than the value for that cell, indicating that the flood fill shall navigate levels towards a root cell by the depth elevation amount of a cell whilst only occupying nodes without any second elements, thereby reaching the largest parent cell which can be flood-filled; and filling the flood fill source colour of cells having no occupancy by second elements with the flood fill target colour, thereby modifying the received image.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: April 13, 2021
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Robert Keith John Withey, Andrew James Bigos
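A stripped-down quadtree version of the whole-cell fill is sketched below; it keeps only the idea that a cell containing no second elements can be filled in one operation, while the depth-elevation bookkeeping and seed connectivity of the full method are omitted.

```python
SOURCE, TARGET, OTHER = "s", "t", "x"

def fill_region(image, x, y, size):
    # A cell with no second (non-source) elements can be filled wholesale;
    # otherwise recurse into its four child cells of the space partitioning.
    cell = [image[j][x:x + size] for j in range(y, y + size)]
    if all(v == SOURCE for row in cell for v in row):
        for j in range(y, y + size):
            for i in range(x, x + size):
                image[j][i] = TARGET
    elif size > 1:
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                fill_region(image, x + dx, y + dy, half)

image = [list(row) for row in ("sssx", "ssss", "ssxs", "ssss")]
fill_region(image, 0, 0, 4)
print(*["".join(row) for row in image], sep="\n")
```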
  • Patent number: 10848733
    Abstract: An image generating device includes a rendering section that renders test images each representing an image when a space including an object to be displayed is viewed from one of a plurality of candidate reference viewing points; a candidate reference viewing point evaluation section that evaluates an importance value for the candidate reference viewing points as a function of at least their comparative coverage of points in the space; and an update section that changes the position of one or more candidate reference viewing points that have a low importance value, obtains a re-evaluation from the candidate reference viewing point evaluation section, and does not revert the position of a candidate reference viewing point if its comparative coverage of points in the space has increased.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: November 24, 2020
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Andrew James Bigos, Jason Gordon Doig
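The evaluate-move-re-evaluate loop could be sketched as follows; the coverage measure (sample points within a fixed radius), the random repositioning and the number of rounds are assumptions used only to show the accept-if-improved update.

```python
import random

random.seed(1)

SAMPLE_POINTS = [(random.random(), random.random(), random.random())
                 for _ in range(200)]

def coverage(viewpoint, points):
    # Placeholder importance value: fraction of sample points within a radius.
    return sum(1 for p in points
               if sum((a - b) ** 2 for a, b in zip(viewpoint, p)) < 0.25) / len(points)

def refine_viewpoints(candidates, rounds=5):
    scores = [coverage(v, SAMPLE_POINTS) for v in candidates]
    for _ in range(rounds):
        worst = min(range(len(candidates)), key=lambda i: scores[i])
        moved = tuple(random.random() for _ in range(3))   # propose a new position
        new_score = coverage(moved, SAMPLE_POINTS)
        if new_score > scores[worst]:   # do not revert if coverage increased
            candidates[worst], scores[worst] = moved, new_score
    return candidates, scores

views = [tuple(random.random() for _ in range(3)) for _ in range(4)]
_, scores = refine_viewpoints(views)
print([round(s, 2) for s in scores])
```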