Patents by Inventor Edward Bradley

Edward Bradley has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240178563
    Abstract: Multi-band phased array antennas include a backplane, a vertical array of low-band radiating elements that form a first antenna beam, first and second vertical arrays of high-band radiating elements that form respective second and third antenna beams, and a vertical array of RF lenses. The first, second and third antenna beams point in different directions. A respective one of the second radiating elements and a respective one of the third radiating elements are positioned between the backplane and each RF lens, and at least some of the first radiating elements are positioned between the RF lenses.
    Type: Application
    Filed: February 6, 2024
    Publication date: May 30, 2024
    Inventors: Scott MICHAELIS, Igor TIMOFEEV, Edward BRADLEY
  • Patent number: 11988708
    Abstract: The disclosed technology generally relates to integrated circuit devices with wear out monitoring capability. An integrated circuit device includes a wear-out monitor device configured to record an indication of wear-out of a core circuit separated from the wear-out monitor device, wherein the indication is associated with localized diffusion of a diffusant within the wear-out monitor device in response to a wear-out stress that causes the wear-out of the core circuit.
    Type: Grant
    Filed: May 16, 2023
    Date of Patent: May 21, 2024
    Assignee: Analog Devices International Unlimited Company
    Inventors: Edward John Coyne, Alan J. O'Donnell, Shaun Bradley, David Aherne, David Boland, Thomas G. O'Dwyer, Colm Patrick Heffernan, Kevin B. Manning, Mark Forde, David J. Clarke, Michael A. Looby
  • Patent number: 11989971
    Abstract: Techniques are disclosed for capturing facial appearance properties. In some examples, a facial capture system includes light source(s) that produce linearly polarized light, at least one camera that is cross-polarized with respect to the polarization of light produced by the light source(s), and at least one other camera that is not cross-polarized with respect to the polarization of the light produced by the light source(s). Images captured by the cross-polarized camera(s) are used to determine facial appearance properties other than specular intensity, such as diffuse albedo, while images captured by the camera(s) that are not cross-polarized are used to determine facial appearance properties including specular intensity. In addition, a coarse-to-fine optimization procedure is disclosed for determining appearance and detailed geometry maps based on images captured by the cross-polarized camera(s) and the camera(s) that are not cross-polarized.
    Type: Grant
    Filed: December 2, 2021
    Date of Patent: May 21, 2024
    Assignee: Disney Enterprises, Inc.
    Inventors: Jeremy Riviere, Paulo Fabiano Urnau Gotardo, Abhijeet Ghosh, Derek Edward Bradley, Dominik Thabo Beeler
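For the capture setup in this entry, the core idea is that a cross-polarized view suppresses specular reflection while a non-cross-polarized view retains it. Below is a minimal, illustrative sketch of that separation principle only, not the patented pipeline or its coarse-to-fine optimization; the image arrays and variable names are assumptions.

```python
# Illustrative sketch (not the patented pipeline): separating diffuse and
# specular components from a cross-polarized / non-cross-polarized image pair.
import numpy as np

def separate_components(cross_img: np.ndarray, parallel_img: np.ndarray):
    """cross_img suppresses specular reflection, so it approximates the diffuse
    component; the positive residual in parallel_img approximates specular intensity."""
    diffuse = cross_img.astype(np.float64)
    specular = np.clip(parallel_img.astype(np.float64) - diffuse, 0.0, None)
    return diffuse, specular

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cross = rng.random((4, 4, 3))                   # stand-in for a cross-polarized capture
    parallel = cross + 0.2 * rng.random((4, 4, 1))  # same view plus a specular-like residue
    diffuse, specular = separate_components(cross, parallel)
    print(diffuse.shape, specular.shape)
```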
  • Publication number: 20240161540
    Abstract: One or more embodiments comprise a computer-implemented method that includes receiving an input image including one or more facial representations and a set of points on a 3D canonical shape, wherein the set of points are selectable at runtime, extracting a set of features from the input image that represent at least one facial representation included in the one or more facial representations, and determining a set of landmarks on the at least one facial representation based on the set of features and the set of points, wherein each landmark in the set of landmarks is associated with at least one point in the set of points.
    Type: Application
    Filed: November 8, 2023
    Publication date: May 16, 2024
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Gaspard ZOSS
  • Publication number: 20240161391
    Abstract: The present invention sets forth a technique for generating two-dimensional (2D) renderings of a three-dimensional (3D) scene from an arbitrary camera position under arbitrary lighting conditions. This technique includes determining, based on a plurality of 2D representations of a 3D scene, a radiance field function for a neural radiance field (NeRF) model. This technique further includes determining, based on a plurality of 2D representations of a 3D scene, a radiance field function for a “one light at a time” (OLAT) model. The technique further includes rendering a 2D representation of the scene based on a given camera position and illumination data. The technique further includes computing a rendering loss based on the difference between the rendered 2D representation and an associated one of the plurality of 2D representations of the scene. The technique further includes modifying at least one of the NeRF and OLAT models based on the rendering loss.
    Type: Application
    Filed: November 8, 2023
    Publication date: May 16, 2024
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Yingyan XU, Gaspard ZOSS
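The abstract above describes jointly fitting a NeRF model and an OLAT relighting model by comparing rendered views against captured views and updating both from the rendering loss. Here is a hedged PyTorch sketch of that loop structure only; the tiny MLPs, the per-ray "render" step, and the tensor shapes are placeholders standing in for the actual radiance field functions and renderer.

```python
import torch
import torch.nn as nn

nerf = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 3))   # (pos, dir) -> radiance
olat = nn.Sequential(nn.Linear(9, 64), nn.ReLU(), nn.Linear(64, 3))   # (pos, dir, light) -> radiance
opt = torch.optim.Adam(list(nerf.parameters()) + list(olat.parameters()), lr=1e-3)

def render(rays: torch.Tensor, light: torch.Tensor) -> torch.Tensor:
    """Toy renderer: per-ray radiance = NeRF term modulated by the OLAT term."""
    base = nerf(rays)                                                    # [N, 3]
    relit = olat(torch.cat([rays, light.expand(rays.shape[0], -1)], dim=-1))
    return base * torch.sigmoid(relit)

rays = torch.rand(1024, 6)     # stand-in for the camera rays of one training view
light = torch.rand(1, 3)       # stand-in for the illumination condition of that view
target = torch.rand(1024, 3)   # stand-in for the captured 2D view (per-ray colors)

for _ in range(10):
    pred = render(rays, light)
    loss = torch.nn.functional.mse_loss(pred, target)   # rendering loss
    opt.zero_grad()
    loss.backward()
    opt.step()                                           # updates both NeRF and OLAT models
```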
  • Publication number: 20240159804
    Abstract: The disclosed technology generally relates to electrical overstress protection devices, and more particularly to electrical overstress monitoring devices for detecting electrical overstress events in semiconductor devices. In one aspect, an electrical overstress monitor and/or protection device includes two different conductive structures configured to electrically arc in response to an EOS event, and a sensing circuit configured to detect a change in a physical property of the two conductive structures caused by the EOS event.
    Type: Application
    Filed: January 22, 2024
    Publication date: May 16, 2024
    Inventors: David J. Clarke, Stephen Denis Heffernan, Nijun Wei, Alan J. O'Donnell, Patrick Martin McGuinness, Shaun Bradley, Edward John Coyne, David Aherne, David M. Boland
  • Patent number: 11981764
    Abstract: There is described an acrylic polyester resin, obtainable by grafting an acrylic polymer with a polyester material. The polyester material is obtainable by polymerizing (i) a polyacid component with (ii) a polyol component including 2,2,4,4-tetraalkylcyclobutane-1,3-diol. One of the polyacid component or the polyol component comprises a functional monomer operable to impart functionality onto the polyester resin, such that an acrylic polymer may be grafted with the polyester material via the use of said functionality. Also provided are an aqueous coating composition comprising the acrylic polyester resin and a metal packaging container coated with the composition.
    Type: Grant
    Filed: May 4, 2021
    Date of Patent: May 14, 2024
    Assignee: PPG Industries Ohio, Inc.
    Inventors: Adam Bradley Powell, William H. Retsch, Jr., Edward R. Millero, Jr., John M. Dudik, Christopher P. Kurtz, Michael Olah, Anand K. Atmuri
  • Publication number: 20240132747
    Abstract: A coated substrate comprising a coating extending over at least a part of the substrate. The coating is obtainable from a coating composition comprising (a) a polymeric binder; and (b) a feathering reducing agent comprising a carboxylic acid reactive functional group. The coated portion of the substrate comprises a pretreatment layer. The pretreatment layer is obtainable from a pretreatment composition that comprises a trivalent chromium compound. The invention extends to a method of coating a pre-treated substrate and a coated package, such as a metal can.
    Type: Application
    Filed: January 28, 2022
    Publication date: April 25, 2024
    Applicant: PPG Industries Ohio, Inc.
    Inventors: Adam Bradley Powell, Jr., Elzen Kurpejovic, Fengshuo Hu, Carl A. Seneker, Wenchao Zhang, Hongying Zhou, Edward R. Millero, Jr., Dennis A. Simpson, Michael G. Olah, Rudolf Baumgarten, Kareem Kaleem, Nigel F Masters, William H. Retsch, Jr.
  • Publication number: 20240101860
    Abstract: A coating composition comprising (a) a polyester binder material; and (b) a feathering reducing agent. The feathering reducing agent is selected from (i) an acrylic feathering reducing agent comprising a functional group selected from hydroxyl, epoxide, phosphatized epoxide and/or acid-functional; (ii) a hydroxy-functional polyester feathering reducing agent; (iii) a feathering reducing agent comprising a functional group selected from amine, amide, imine and/or nitrile; (iv) a phosphatized epoxy feathering reducing agent; (v) a phenolic resin feathering reducing agent; and/or (vi) a feathering reducing agent comprising an oxazolyl functional group. The invention extends to a substrate coated with the coating composition and use of the coating composition to reduce feathering.
    Type: Application
    Filed: January 28, 2022
    Publication date: March 28, 2024
    Applicant: PPG Industries Ohio, Inc.
    Inventors: Adam Bradley Powell, Jr., Elzen Kurpejovic, Fengshuo Hu, Carl A. Seneker, Wenchao Zhang, Hongying Zhou, Edward R. Millero, Jr., Dennis A. Simpson, Michael G. Olah, Rudolf Baumgarten, Kareem Kaleem, Nigel F Masters, William H. Retsch, Jr.
  • Patent number: 11908066
    Abstract: An image rendering method for rendering a pixel of a virtual scene at a viewpoint includes: downloading a machine learning system corresponding to a current or anticipated state of an application determining the virtual scene to be rendered, from among a plurality of machine learning systems corresponding to a plurality of states of the application; providing a position and a direction based on the viewpoint to the machine learning system previously trained to predict a factor; combining the predicted factor from the machine learning system with a distribution function that characterises an interaction of light with a predetermined surface to generate the pixel value corresponding to an illuminated first element of the virtual scene at the position; and incorporating the pixel value into a rendered image for display.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: February 20, 2024
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Marina Villanueva Barreiro, Andrew James Bigos, Gilles Christian Rainer, Fabio Cappello, Timothy Edward Bradley
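As a rough illustration of the flow in this entry's abstract (select a pretrained predictor by application state, query it with a position and direction, then combine the predicted factor with a distribution function), here is a hedged Python sketch; the state names, the stand-in predictors, and the Lambertian BRDF are assumptions, not the patented method.

```python
import numpy as np

models = {
    "forest_level": lambda pos, dirn: 0.8,   # stand-ins for downloaded ML systems,
    "cave_level": lambda pos, dirn: 0.2,     # each returning a predicted shading factor
}

def lambertian_brdf(normal: np.ndarray, light_dir: np.ndarray, albedo: float = 0.5) -> float:
    """A simple distribution function characterising light/surface interaction."""
    return albedo / np.pi * max(float(np.dot(normal, light_dir)), 0.0)

def shade_pixel(state: str, pos, view_dir, normal, light_dir) -> float:
    factor = models[state](pos, view_dir)    # predicted factor from the selected ML system
    return factor * lambertian_brdf(np.asarray(normal), np.asarray(light_dir))

print(shade_pixel("forest_level", (0, 0, 0), (0, 0, 1), (0, 0, 1), (0, 0, 1)))
```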
  • Patent number: 11875441
    Abstract: A modeling engine generates a prediction model that quantifies and predicts secondary dynamics associated with the face of a performer enacting a performance. The modeling engine generates a set of geometric representations that represents the face of the performer enacting different facial expressions under a range of loading conditions. For a given facial expression and specific loading condition, the modeling engine trains a Machine Learning model to predict how soft tissue regions of the face of the performer change in response to external forces applied to the performer during the performance. The modeling engine combines different expression models associated with different facial expressions to generate a prediction model. The prediction model can be used to predict and remove secondary dynamics from a given geometric representation of a performance or to generate and add secondary dynamics to a given geometric representation of a performance.
    Type: Grant
    Filed: October 11, 2022
    Date of Patent: January 16, 2024
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Eftychios Dimitrios Sifakis, Gaspard Zoss
  • Patent number: 11836860
    Abstract: Methods and systems for performing facial retargeting using a patch-based technique are disclosed. One or more three-dimensional (3D) representations of a source character's (e.g., a human actor's) face can be transferred onto one or more corresponding representations of a target character's (e.g., a cartoon character's) face, enabling filmmakers to transfer a performance by a source character to a target character. The source character's 3D facial shape can be separated into patches. For each patch, a patch combination (representing that patch as a combination of source reference patches) can be determined. The patch combinations and target reference patches can then be used to create target patches corresponding to the target character. The target patches can be combined using an anatomical local model solver to produce a 3D facial shape corresponding to the target character, effectively transferring a facial performance by the source character to the target character.
    Type: Grant
    Filed: January 27, 2022
    Date of Patent: December 5, 2023
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Prashanth Chandran, Loïc Florian Ciccone, Derek Edward Bradley
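A minimal sketch of the patch-combination step described above, assuming each source patch is expressed as a least-squares combination of flattened source reference patches whose weights are then reused on the target reference patches; the anatomical local model solver is omitted and all data here are synthetic placeholders.

```python
import numpy as np

def retarget_patch(source_patch, source_refs, target_refs):
    """source_refs/target_refs: [K, P*3] flattened reference patches for K expressions."""
    weights, *_ = np.linalg.lstsq(source_refs.T, source_patch, rcond=None)  # [K] combination weights
    return target_refs.T @ weights                                          # [P*3] target patch

rng = np.random.default_rng(0)
K, P = 5, 12                                         # reference expressions, vertices per patch
source_refs = rng.normal(size=(K, P * 3))
target_refs = rng.normal(size=(K, P * 3))
source_patch = source_refs.T @ rng.normal(size=K)    # a patch that is exactly a combination
target_patch = retarget_patch(source_patch, source_refs, target_refs)
print(target_patch.shape)                            # (36,)
```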
  • Patent number: 11759701
    Abstract: A system for generating video game inputs is provided. The system comprises an input unit operable to obtain images of a passive non-luminous object being held by a user as a video games controller. The system also comprises an object detector and object pose detector for detecting the object and its respective pose in the obtained images. The pose detector is configured to detect the pose of the object based on at least one of (i) a contour detection operation and (ii) the output of a machine learning model that has been trained to detect the poses of passive non-luminous objects in images. A user input generator is configured to generate user inputs based on the detected changes in pose of the passive non-luminous object and to transmit these to a video game unit at which a video game is being executed. A corresponding method is also provided.
    Type: Grant
    Filed: July 15, 2020
    Date of Patent: September 19, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Timothy Edward Bradley, David Erwan Damien Uberti
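The sketch below illustrates only the contour-detection branch mentioned in the abstract: estimate the in-plane rotation of the held object from its largest contour and map the change in angle to an axis-style input. The OpenCV calls are standard, but the thresholding choices and the steering mapping are assumptions rather than the patented method.

```python
import cv2
import numpy as np

def object_angle(frame_bgr: np.ndarray) -> float | None:
    """Return the in-plane angle of the largest bright blob, or None if none found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)          # assume the object dominates the view
    (_, _), (_, _), angle = cv2.minAreaRect(largest)
    return float(angle)

def steering_input(prev_angle: float, curr_angle: float) -> float:
    """Map the change in object angle to a [-1, 1] axis value (assumed mapping)."""
    return float(np.clip((curr_angle - prev_angle) / 45.0, -1.0, 1.0))

frame = np.zeros((64, 64, 3), dtype=np.uint8)
cv2.rectangle(frame, (10, 20), (50, 40), (255, 255, 255), -1)   # synthetic "held object"
print(object_angle(frame), steering_input(0.0, 15.0))
```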
  • Patent number: 11748072
    Abstract: A data processing apparatus adapted to output recommendation information for modifying source code includes: compiler circuitry to compile the source code and to output compiled code for the source code, processing circuitry to execute the compiled code, profile circuitry to monitor the execution of the compiled code by the processing circuitry and to generate profile information for the execution of the compiled code, the profile information including one or more statistical properties for the execution of the compiled code, and recommendation circuitry to output the recommendation information for the source code, the recommendation circuitry including a machine learning model to receive at least a portion of the profile information and trained to output the recommendation information for the source code in dependence upon one or more of the statistical properties, in which the recommendation information is indicative of one or more editing instructions for modifying the source code.
    Type: Grant
    Filed: December 8, 2020
    Date of Patent: September 5, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Fabio Cappello, Gregory James Bedwell, Daryl Cooper, Timothy Edward Bradley, Guy Moss
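As an illustration of the recommendation stage described above, this hedged sketch feeds per-function profile statistics to a small classifier standing in for the trained model and maps its prediction to an editing suggestion; the feature set, class labels, and toy training data are all assumptions.

```python
from sklearn.tree import DecisionTreeClassifier

# features: [cache_miss_rate, branch_mispredict_rate, mean_call_time_ms]
X_train = [[0.30, 0.02, 1.0], [0.02, 0.25, 0.5], [0.01, 0.01, 9.0]]
y_train = ["restructure_data_layout", "simplify_branching", "memoize_hot_call"]
model = DecisionTreeClassifier().fit(X_train, y_train)   # stand-in for the trained ML model

profile_info = {"cache_miss_rate": 0.28, "branch_mispredict_rate": 0.03, "mean_call_time_ms": 1.2}
features = [[profile_info["cache_miss_rate"],
             profile_info["branch_mispredict_rate"],
             profile_info["mean_call_time_ms"]]]
print("recommendation:", model.predict(features)[0])     # suggested source-code edit
```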
  • Publication number: 20230260186
    Abstract: Embodiments of the present disclosure are directed to methods and systems for generating three-dimensional (3D) models and facial hair models representative of subjects (e.g., actors or actresses) using facial scanning technology. Methods according to embodiments may be useful for performing facial capture on subjects with dense facial hair. Initial subject facial data, including facial frames and facial performance frames (e.g., images of the subject collected from a capture system), can be used to accurately predict the structure of the subject's face underneath their facial hair to produce a reference 3D facial shape of the subject. Likewise, image processing techniques can be used to identify facial hairs and generate a reference facial hair model. The reference 3D facial shape and reference facial hair model can subsequently be used to generate performance 3D facial shapes and a performance facial hair model corresponding to a performance by the subject (e.g., reciting dialog).
    Type: Application
    Filed: January 27, 2023
    Publication date: August 17, 2023
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Sebastian Winberg, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss, Derek Edward Bradley
  • Publication number: 20230252714
    Abstract: One embodiment of the present invention sets forth a technique for performing shape and appearance reconstruction. The technique includes generating a first set of renderings associated with an object based on a set of parameters that represent a reconstruction of the object in a first target image. The technique also includes producing, via a neural network, a first set of corrections associated with at least a portion of the set of parameters based on the first target image and the first set of renderings. The technique further includes generating an updated reconstruction of the object based on the first set of corrections.
    Type: Application
    Filed: February 10, 2022
    Publication date: August 10, 2023
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Christopher Andreas OTTO, Agon SERIFI, Gaspard ZOSS
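A hedged sketch of the correction loop described above: render from the current reconstruction parameters, let a learned corrector propose parameter updates from the target image and the rendering, and apply them. The linear "renderer" and the gradient-like "corrector" below are stand-ins for the actual differentiable renderer and neural network.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(64, 8))                  # toy renderer: image = A @ params

def render(params: np.ndarray) -> np.ndarray:
    return A @ params

def corrector(target: np.ndarray, rendering: np.ndarray) -> np.ndarray:
    """Stand-in for the neural network: a small gradient-like correction."""
    return 0.005 * A.T @ (target - rendering)

true_params = rng.normal(size=8)
target = render(true_params)                  # stand-in for the target image
params = np.zeros(8)                          # initial reconstruction parameters
for _ in range(500):
    params += corrector(target, render(params))   # iterative refinement of the reconstruction
print("residual:", float(np.linalg.norm(render(params) - target)))
```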
  • Publication number: 20230237739
    Abstract: Methods and systems for performing facial retargeting using a patch-based technique are disclosed. One or more three-dimensional (3D) representations of a source character's (e.g., a human actor's) face can be transferred onto one or more corresponding representations of a target character's (e.g., a cartoon character's) face, enabling filmmakers to transfer a performance by a source character to a target character. The source character's 3D facial shape can be separated into patches. For each patch, a patch combination (representing that patch as a combination of source reference patches) can be determined. The patch combinations and target reference patches can then be used to create target patches corresponding to the target character. The target patches can be combined using an anatomical local model solver to produce a 3D facial shape corresponding to the target character, effectively transferring a facial performance by the source character to the target character.
    Type: Application
    Filed: January 27, 2022
    Publication date: July 27, 2023
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Prashanth Chandran, Loïc Florian Ciccone, Derek Edward Bradley
  • Publication number: 20230237753
    Abstract: Embodiments of the present disclosure are directed to methods and systems for generating three-dimensional (3D) models and facial hair models representative of subjects (e.g., actors or actresses) using facial scanning technology. Methods according to embodiments may be useful for performing facial capture on subjects with dense facial hair. Initial subject facial data, including facial frames and facial performance frames (e.g., images of the subject collected from a capture system), can be used to accurately predict the structure of the subject's face underneath their facial hair to produce a reference 3D facial shape of the subject. Likewise, image processing techniques can be used to identify facial hairs and generate a reference facial hair model. The reference 3D facial shape and reference facial hair model can subsequently be used to generate performance 3D facial shapes and a performance facial hair model corresponding to a performance by the subject (e.g., reciting dialog).
    Type: Application
    Filed: January 27, 2023
    Publication date: July 27, 2023
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Derek Edward Bradley, Paulo Fabiano Urnau Gotardo, Gaspard Zoss, Prashanth Chandran, Sebastian Winberg
  • Patent number: 11688127
    Abstract: A data processing apparatus includes input circuitry to receive viewpoint data indicative of respective viewpoints for a plurality of spectators of a virtual environment, detection circuitry to detect a portion of the virtual environment viewed by each of the respective viewpoints in dependence upon the viewpoint data, selection circuitry to select one or more regions of the virtual environment in dependence upon at least some of the detected portions, and output circuitry to output data indicative of one or more of the selected regions.
    Type: Grant
    Filed: September 9, 2021
    Date of Patent: June 27, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Maria Chiara Monti, Fabio Cappello, Matthew Sanders, Timothy Edward Bradley, Oliver Hume
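For the selection step in this entry, a minimal sketch under assumed inputs: bucket each spectator's viewed point into a coarse grid over the virtual environment and output the most-watched cells. The grid resolution and the form of the viewpoint data are assumptions, not the patented circuitry.

```python
from collections import Counter

def select_regions(viewed_points, cell_size=10.0, top_k=3):
    """viewed_points: (x, y, z) points each spectator is currently looking at."""
    counts = Counter(
        (int(x // cell_size), int(z // cell_size)) for x, _, z in viewed_points
    )
    return [cell for cell, _ in counts.most_common(top_k)]   # most-viewed grid cells

viewed = [(12.0, 1.7, 8.0), (14.5, 1.6, 9.0), (55.0, 1.8, 40.0)]
print(select_regions(viewed))   # e.g. [(1, 0), (5, 4)]
```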
  • Publication number: 20230196664
    Abstract: Various embodiments include a system for rendering an object, such as human skin or a human head, from captured appearance data. The system includes a processor executing a near field lighting reconstruction module. The system determines at least one of a three-dimensional (3D) position or a 3D orientation of a lighting unit based on a plurality of captured images of a mirror sphere. For each point light source in a plurality of point light sources included in the lighting unit, the system determines an intensity associated with the point light source. The system captures appearance data of the object, where the object is illuminated by the lighting unit. The system renders an image of the object based on the appearance data and the intensities associated with each point light source in the plurality of point light sources.
    Type: Application
    Filed: December 14, 2022
    Publication date: June 22, 2023
    Inventors: Paulo Fabiano URNAU GOTARDO, Derek Edward BRADLEY, Gaspard ZOSS, Jeremy RIVIERE, Prashanth CHANDRAN, Yingyan XU
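One geometric ingredient of mirror-sphere light calibration, sketched under assumed inputs: reflect the viewing ray about the sphere normal at an observed highlight to obtain a ray toward the point light; intersecting such rays from several sphere placements would localize the light in 3D. The camera, highlight, and sphere coordinates below are placeholders, and this is not the patented reconstruction module.

```python
import numpy as np

def light_ray_from_highlight(camera_pos, highlight_point, sphere_center):
    """Return (origin, direction) of a ray from the sphere surface toward the light."""
    view = highlight_point - camera_pos
    view = view / np.linalg.norm(view)
    normal = highlight_point - sphere_center
    normal = normal / np.linalg.norm(normal)
    reflected = view - 2.0 * np.dot(view, normal) * normal   # mirror reflection of the view ray
    return highlight_point, reflected

origin, direction = light_ray_from_highlight(
    np.array([0.0, 0.0, 0.0]),      # camera at the origin (assumed calibrated)
    np.array([0.0, 0.1, 1.0]),      # highlight observed on the sphere surface
    np.array([0.0, 0.0, 1.05]),     # sphere center
)
print(origin, direction)
```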