Patents by Inventor Jose Ignacio Echevarria Vallespi

Jose Ignacio Echevarria Vallespi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200357146
    Abstract: In some embodiments, a computing system generates a color gradient for data visualizations by displaying a color selection design interface. The computing system receives a user input identifying a start point and an end point of a color map path. The computing system draws the color map path within a color space element between the start point and the end point, constrained to traverse colors having uniform transitions in one or more of lightness, chroma, and hue. The computing system selects a color gradient having a first color corresponding to the start point of the color map path, a second color corresponding to the end point of the color map path, and additional colors corresponding to additional points along the color map path. The computing system generates a color map for visually representing a range of data values. (A toy code sketch of such a gradient appears after this listing.)
    Type: Application
    Filed: May 9, 2019
    Publication date: November 12, 2020
    Inventors: Jose Ignacio Echevarria Vallespi, Stephen DiVerdi, Hema Susmita Padala, Bernard Kerr, Dmitry Baranovskiy
  • Patent number: 10685479
    Abstract: Methods and systems for reconstructing surfaces of an object using regional level sets (RLS) are disclosed. A scanning system scans an object and generates a point cloud. An RLS is iteratively determined as a solution to a differential equation constrained by the point cloud. The RLS is a 2-tuple, where the first component corresponds to a region identification and the second component corresponds to the solution of the differential equation. The space around the point cloud is iteratively segmented into a plurality of regions. A single solution to the differential equation is applied, encompassing all the regions. The solution in regions of the space corresponding to the finer structures within the point cloud is modeled similarly to the solution in the coarser regions. The solution in a particular region is updated iteratively based on the solution in the neighboring regions. An RLS thus enables reconstruction of thinner or smaller structures or surfaces of the object. (A toy code sketch of this idea appears after this listing.)
    Type: Grant
    Filed: November 26, 2018
    Date of Patent: June 16, 2020
    Assignee: Adobe Inc.
    Inventors: Jose Ignacio Echevarria Vallespi, Byungmoon Kim
  • Publication number: 20200160567
    Abstract: In some embodiments, a computing system computes a palette-based color harmony and applies palette-based image recoloration by determining a color palette for an electronic image that includes a first image color at a first position on a color space and a second image color at a second position on the color space. The computing system applies a harmonic template using a combination of a global rotation angle and a secondary rotation angle, such that the harmonic template, as applied, minimizes an aggregate of hue distances. The computing system modifies the color palette by moving at least one of (i) the first image color from the first position toward a position along a first axis of the harmonic template or (ii) the second image color from the second position toward a modified position along a second axis of the harmonic template. The computing system updates an editing interface accordingly. (A toy code sketch of template fitting appears after this listing.)
    Type: Application
    Filed: November 21, 2018
    Publication date: May 21, 2020
    Inventor: Jose Ignacio Echevarria Vallespi
  • Patent number: 10607065
    Abstract: Generation of parameterized avatars is described. An avatar generation system uses a trained machine-learning model to generate a parameterized avatar, from which digital visual content (e.g., images, videos, augmented and/or virtual reality (AR/VR) content) can be generated. The machine-learning model is trained to identify cartoon features of a particular style—from a library of these cartoon features—that correspond to features of a person depicted in a digital photograph. The parameterized avatar is data (e.g., a feature vector) that indicates the cartoon features identified from the library by the trained machine-learning model for the depicted person. This parameterization enables the avatar to be animated. The parameterization also enables the avatar generation system to generate avatars in non-photorealistic (relatively cartoony) styles such that, despite the style, the avatars preserve identities and expressions of persons depicted in input digital photographs. (A toy code sketch of this parameterization appears after this listing.)
    Type: Grant
    Filed: May 3, 2018
    Date of Patent: March 31, 2020
    Assignee: Adobe Inc.
    Inventors: Rebecca Ilene Milman, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shechtman, Duygu Ceylan Aksit, David P. Simons
  • Patent number: 10559116
    Abstract: Caricature generation techniques and systems are described that are configured to preserve the individual identity of the subject of the caricature and thus overcome inaccuracies and failures of conventional techniques. In one example, a caricature generation system is employed by a computing device to determine deviations of facial features of a subject captured by a digital image from reference values. Caricature-specific blend shapes are then employed by the caricature generation system to generate a digital image caricature from a digital image based on these deviations. In one example, blend shapes include interaction rules that define an interplay of distortions that are jointly applied to at least two of the facial features of a subject in the digital image. (A toy code sketch of deviation-driven blend shapes appears after this listing.)
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: February 11, 2020
    Assignee: Adobe Inc.
    Inventor: Jose Ignacio Echevarria Vallespi
  • Patent number: 10546429
    Abstract: An augmented reality (AR) mirror system is described. In an example, the AR mirror system includes a sensor, a display device, a semi-reflecting surface, a processing system, and computer-readable storage media having instructions stored thereon. The instructions are executable by the processing system to cause display of augmented reality (AR) digital content by the display device to be simultaneously viewable with a reflection of a physical object.
    Type: Grant
    Filed: February 14, 2018
    Date of Patent: January 28, 2020
    Assignee: Adobe Inc.
    Inventors: Tenell Glen Rhodes, Jr., Jose Ignacio Echevarria Vallespi, Gavin Stuart Peter Miller
  • Patent number: 10515456
    Abstract: Certain embodiments involve synthesizing image content depicting facial hair or other hair features based on orientation data obtained using guidance inputs or other user-provided guidance data. For instance, a graphic manipulation application accesses guidance data identifying a desired hair feature and an appearance exemplar having image data with color information for the desired hair feature. The graphic manipulation application transforms the guidance data into an input orientation map. The graphic manipulation application matches the input orientation map to an exemplar orientation map having a higher resolution than the input orientation map. The graphic manipulation application generates the desired hair feature by applying the color information from the appearance exemplar to the exemplar orientation map. The graphic manipulation application outputs the desired hair feature at a presentation device. (A toy code sketch of orientation-map matching appears after this listing.)
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: December 24, 2019
    Assignee: Adobe Inc.
    Inventors: Duygu Ceylan Aksit, Zhili Chen, Jose Ignacio Echevarria Vallespi, Kyle Olszewski
  • Publication number: 20190340419
    Abstract: Generation of parameterized avatars is described. An avatar generation system uses a trained machine-learning model to generate a parameterized avatar, from which digital visual content (e.g., images, videos, augmented and/or virtual reality (AR/VR) content) can be generated. The machine-learning model is trained to identify cartoon features of a particular style—from a library of these cartoon features—that correspond to features of a person depicted in a digital photograph. The parameterized avatar is data (e.g., a feature vector) that indicates the cartoon features identified from the library by the trained machine-learning model for the depicted person. This parameterization enables the avatar to be animated. The parameterization also enables the avatar generation system to generate avatars in non-photorealistic (relatively cartoony) styles such that, despite the style, the avatars preserve identities and expressions of persons depicted in input digital photographs.
    Type: Application
    Filed: May 3, 2018
    Publication date: November 7, 2019
    Applicant: Adobe Inc.
    Inventors: Rebecca Ilene Milman, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shechtman, Duygu Ceylan Aksit, David P. Simons
  • Patent number: 10467822
    Abstract: Embodiments involve reducing collision-based defects in motion-stylizations. For example, a device obtains facial landmark data from video data. The facial landmark data includes a first trajectory traveled by a first point tracking one or more facial features, and a second trajectory traveled by a second point tracking one or more facial features. The device applies a motion-stylization to the facial landmark data that causes a first change to one or more of the first trajectory and the second trajectory. The device also identifies a new collision between the first and second points that is introduced by the first change. The device applies a modified stylization to the facial landmark data that causes a second change to one or more of the first trajectory and the second trajectory. If the new collision is removed by the second change, the device outputs the facial landmark data with the modified stylization applied. (A toy code sketch of collision-aware stylization appears after this listing.)
    Type: Grant
    Filed: February 20, 2018
    Date of Patent: November 5, 2019
    Assignee: Adobe Inc.
    Inventors: Rinat Abdrashitov, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shechtman, Duygu Ceylan Aksit, David Simons
  • Publication number: 20190295272
    Abstract: Certain embodiments involve synthesizing image content depicting facial hair or other hair features based on orientation data obtained using guidance inputs or other user-provided guidance data. For instance, a graphic manipulation application accesses guidance data identifying a desired hair feature and an appearance exemplar having image data with color information for the desired hair feature. The graphic manipulation application transforms the guidance data into an input orientation map. The graphic manipulation application matches the input orientation map to an exemplar orientation map having a higher resolution than the input orientation map. The graphic manipulation application generates the desired hair feature by applying the color information from the appearance exemplar to the exemplar orientation map. The graphic manipulation application outputs the desired hair feature at a presentation device.
    Type: Application
    Filed: March 22, 2018
    Publication date: September 26, 2019
    Inventors: Duygu Ceylan Aksit, Zhili Chen, Jose Ignacio Echevarria Vallespi, Kyle Olszewski
  • Publication number: 20190272668
    Abstract: Caricature generation techniques and systems are described that are configured to preserve the individual identity of the subject of the caricature and thus overcome inaccuracies and failures of conventional techniques. In one example, a caricature generation system is employed by a computing device to determine deviations of facial features of a subject captured by a digital image from reference values. Caricature-specific blend shapes are then employed by the caricature generation system to generate a digital image caricature from a digital image based on these deviations. In one example, blend shapes include interaction rules that define an interplay of distortions that are jointly applied to at least two of the facial features of a subject in the digital image.
    Type: Application
    Filed: March 5, 2018
    Publication date: September 5, 2019
    Applicant: Adobe Inc.
    Inventor: Jose Ignacio Echevarria Vallespi
  • Publication number: 20190259214
    Abstract: Embodiments involve reducing collision-based defects in motion-stylizations. For example, a device obtains facial landmark data from video data. The facial landmark data includes a first trajectory traveled by a first point tracking one or more facial features, and a second trajectory traveled by a second point tracking one or more facial features. The device applies a motion-stylization to the facial landmark data that causes a first change to one or more of the first trajectory and the second trajectory. The device also identifies a new collision between the first and second points that is introduced by the first change. The device applies a modified stylization to the facial landmark data that causes a second change to one or more of the first trajectory and the second trajectory. If the new collision is removed by the second change, the device outputs the facial landmark data with the modified stylization applied.
    Type: Application
    Filed: February 20, 2018
    Publication date: August 22, 2019
    Inventors: Rinat Abdrashitov, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shechtman, Duygu Ceylan Aksit, David Simons
  • Publication number: 20190251749
    Abstract: An augmented reality (AR) mirror system is described. In an example, the AR mirror system includes a sensor, a display device, a semi-reflecting surface, a processing system, and computer-readable storage media having instructions stored thereon. The instructions are executable by the processing system to cause display of augmented reality (AR) digital content by the display device to be simultaneously viewable with a reflection of a physical object.
    Type: Application
    Filed: February 14, 2018
    Publication date: August 15, 2019
    Applicant: Adobe Inc.
    Inventors: Tenell Glen Rhodes, Jr., Jose Ignacio Echevarria Vallespi, Gavin Stuart Peter Miller
  • Publication number: 20190096124
    Abstract: Methods and systems for reconstructing surfaces of an object using regional level sets (RLS) are disclosed. A scanning system scans an object and generates a point cloud. An RLS is iteratively determined as a solution to a differential equation constrained by the point cloud. The RLS is a 2-tuple, where the first component corresponds to a region identification and the second component corresponds to the solution of the differential equation. The space around the point cloud is iteratively segmented into a plurality of regions. A single solution to the differential equation is applied, encompassing all the regions. The solution in regions of the space corresponding to the finer structures within the point cloud is modeled similarly to the solution in the coarser regions. The solution in a particular region is updated iteratively based on the solution in the neighboring regions. An RLS thus enables reconstruction of thinner or smaller structures or surfaces of the object.
    Type: Application
    Filed: November 26, 2018
    Publication date: March 28, 2019
    Inventors: Jose Ignacio Echevarria Vallespi, Byungmoon Kim
  • Patent number: 10169912
    Abstract: Methods and systems for reconstructing surfaces of an object using regional level sets (RLS) are disclosed. A scanning system scans an object and generates a point cloud. An RLS is iteratively determined as a solution to a differential equation constrained by the point cloud. The RLS is a 2-tuple, where the first component corresponds to a region identification and the second component corresponds to the solution of the differential equation. The space around the point cloud is iteratively segmented into a plurality of regions. A single solution to the differential equation is applied, encompassing all the regions. The solution in regions of the space corresponding to the finer structures within the point cloud is modeled similarly to the solution in the coarser regions. The solution in a particular region is updated iteratively based on the solution in the neighboring regions. An RLS thus enables reconstruction of thinner or smaller structures or surfaces of the object.
    Type: Grant
    Filed: October 25, 2016
    Date of Patent: January 1, 2019
    Assignee: Adobe Systems Incorporated
    Inventors: Jose Ignacio Echevarria Vallespi, Byungmoon Kim
  • Publication number: 20180114362
    Abstract: Methods and systems for reconstructing surfaces of an object using regional level sets (RLS) are disclosed. A scanning system scans an object and generates a point cloud. An RLS is iteratively determined as a solution to a differential equation constrained by the point cloud. The RLS is a 2-tuple, where the first component corresponds to a region identification and the second component corresponds to the solution of the differential equation. The space around the point cloud is iteratively segmented into a plurality of regions. A single solution to the differential equation is applied, encompassing all the regions. The solution in regions of the space corresponding to the finer structures within the point cloud is modeled similarly to the solution in the coarser regions. The solution in a particular region is updated iteratively based on the solution in the neighboring regions. An RLS thus enables reconstruction of thinner or smaller structures or surfaces of the object.
    Type: Application
    Filed: October 25, 2016
    Publication date: April 26, 2018
    Inventors: Jose Ignacio Echevarria Vallespi, Byungmoon Kim
  • Patent number: 9176662
    Abstract: Systems and methods for simulating liquid-on-lens effects may provide an interface through which users can add and/or manipulate fluids on a virtual camera lens. A physically based fluid simulation may simulate the behavior of the fluid as it is deposited on and/or manipulated on the virtual lens, and determine the distribution of the fluid across the lens. A ray tracing technique may be employed to determine how light is refracted through the virtual lens and the fluid, and to render a distorted output image as seen through the lens and the fluid. As the fluid is manipulated, corresponding changes in the image may be displayed in real time. The input image may be an existing single image or a direct camera feed (e.g., of a tablet type device). The user may select a fluid type and/or various fluid properties for the image editing operation. (A toy code sketch of fluid-driven refraction appears after this listing.)
    Type: Grant
    Filed: April 16, 2012
    Date of Patent: November 3, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Gregg D. Wilensky, Aravind Krishnaswamy, Jose Ignacio Echevarria Vallespi
  • Publication number: 20130120386
    Abstract: Systems and methods for simulating liquid-on-lens effects may provide an interface through which users can add and/or manipulate fluids on a virtual camera lens. A physically based fluid simulation may simulate the behavior of the fluid as it is deposited on and/or manipulated on the virtual lens, and determine the distribution of the fluid across the lens. A ray tracing technique may be employed to determine how light is refracted through the virtual lens and the fluid, and to render a distorted output image as seen through the lens and the fluid. As the fluid is manipulated, corresponding changes in the image may be displayed in real time. The input image may be an existing single image or a direct camera feed (e.g., of a tablet type device). The user may select a fluid type and/or various fluid properties for the image editing operation.
    Type: Application
    Filed: April 16, 2012
    Publication date: May 16, 2013
    Inventors: Gregg D. Wilensky, Aravind Krishnaswamy, Jose Ignacio Echevarria Vallespi
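
Code sketches referenced in the abstracts above (illustrative only, not the claimed implementations)

For the color-gradient selection in publication 20200357146, the following is a minimal sketch of sampling a color map path with uniform steps in lightness, chroma, and hue between a user-chosen start point and end point. The standard-library HLS space stands in for a perceptually uniform space such as CIE LCh, and the function names, step count, and example endpoints are illustrative assumptions.

```python
# A minimal sketch (not the patented method): sample a color map path with
# uniform steps in lightness, "chroma" (here saturation), and hue between a
# start point and an end point.  colorsys.hls_to_rgb stands in for a
# perceptually uniform space such as CIE LCh, which a real tool would use.
import colorsys

def interpolate_hue(h0, h1, t):
    """Interpolate hue along the shorter arc of the hue circle (values in [0, 1))."""
    d = (h1 - h0) % 1.0
    if d > 0.5:
        d -= 1.0
    return (h0 + t * d) % 1.0

def color_map_path(start, end, steps=9):
    """start/end are (hue, lightness, saturation) tuples; returns RGB samples."""
    h0, l0, s0 = start
    h1, l1, s1 = end
    gradient = []
    for i in range(steps):
        t = i / (steps - 1)
        h = interpolate_hue(h0, h1, t)
        l = l0 + t * (l1 - l0)          # uniform lightness transition
        s = s0 + t * (s1 - s0)          # uniform "chroma" transition
        gradient.append(colorsys.hls_to_rgb(h, l, s))
    return gradient

# Example: a blue-to-orange gradient for mapping a range of data values.
for r, g, b in color_map_path((0.60, 0.35, 0.9), (0.08, 0.65, 0.9), steps=5):
    print(f"#{int(r*255):02x}{int(g*255):02x}{int(b*255):02x}")
```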
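
For the regional level sets of patent 10685479 (and the related publications above), this sketch borrows only the data layout: each grid cell stores a 2-tuple (region_id, value), cells covered by point-cloud samples are pinned, and every other value is relaxed from its neighbors. Plain Jacobi relaxation stands in for the differential equation the abstract describes, and the grid size, toy point cloud, and threshold are assumptions.

```python
# A toy sketch (not Adobe's RLS formulation): a 2-D grid where each cell holds a
# 2-tuple (region_id, value).  Cells covered by point-cloud samples are pinned to
# 0, the border is pinned to 1 ("empty space"), and everything else is relaxed
# from its neighbors, mimicking "the solution in a particular region is updated
# iteratively based on the solution in the neighboring regions."
GRID = 16
points = {(4, 4), (4, 5), (5, 4), (11, 10), (12, 10), (12, 11)}   # toy "point cloud"

# A trivial left/right split stands in for the iterative region segmentation.
rls = [[(0 if x < GRID // 2 else 1, 1.0) for x in range(GRID)] for y in range(GRID)]

def relax(field, iterations=300):
    for _ in range(iterations):
        nxt = [row[:] for row in field]
        for y in range(GRID):
            for x in range(GRID):
                region, _ = field[y][x]
                if (x, y) in points:
                    nxt[y][x] = (region, 0.0)    # constrained by the point cloud
                elif x in (0, GRID - 1) or y in (0, GRID - 1):
                    nxt[y][x] = (region, 1.0)    # "empty space" boundary condition
                else:
                    nbrs = [field[y][x - 1][1], field[y][x + 1][1],
                            field[y - 1][x][1], field[y + 1][x][1]]
                    nxt[y][x] = (region, sum(nbrs) / 4.0)
        field = nxt
    return field

rls = relax(rls)
# Cells whose value falls below a threshold trace the reconstructed structure.
for row in rls:
    print("".join("#" if value < 0.35 else "." for _, value in row))
```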
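
For the palette-based color harmony of publication 20200160567, this sketch brute-forces a global rotation angle that minimizes the aggregate hue distance between a palette and a harmonic template, then nudges each hue toward its nearest template axis. The secondary rotation angle mentioned in the abstract is omitted, and the template, strength parameter, and example hues are assumptions.

```python
# A minimal sketch (assumptions, not the patented algorithm): fit a harmonic
# template of hue axes to a palette by searching for a global rotation angle
# that minimizes the aggregate hue distance, then move each palette hue part of
# the way toward its nearest template axis.  Hues are degrees in [0, 360).

def hue_distance(a, b):
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def fit_template(palette_hues, template_axes, step=1.0):
    """Return (best_rotation, aggregate_distance) over sampled global rotations."""
    best = (0.0, float("inf"))
    angle = 0.0
    while angle < 360.0:
        axes = [(t + angle) % 360.0 for t in template_axes]
        cost = sum(min(hue_distance(h, ax) for ax in axes) for h in palette_hues)
        if cost < best[1]:
            best = (angle, cost)
        angle += step
    return best

def harmonize(palette_hues, template_axes, strength=0.5):
    """Move each hue part-way toward the nearest axis of the fitted template."""
    rotation, _ = fit_template(palette_hues, template_axes)
    axes = [(t + rotation) % 360.0 for t in template_axes]
    out = []
    for h in palette_hues:
        ax = min(axes, key=lambda a: hue_distance(h, a))
        signed = ((ax - h + 180.0) % 360.0) - 180.0     # shortest signed arc
        out.append((h + strength * signed) % 360.0)
    return out

# Example: harmonize three hues against a complementary template (two axes 180 degrees apart).
print(harmonize([20.0, 75.0, 210.0], template_axes=[0.0, 180.0]))
```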
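
For the parameterized avatars of patent 10607065, this sketch shows only the data structure the abstract describes: a feature vector of indices into a library of cartoon features, one per category, chosen from scores that a trained model would produce. The library, category names, and mock scores are invented for illustration; no actual model is involved.

```python
# A toy sketch of the *data structure* in the abstract (assumed names, not
# Adobe's model): the parameterized avatar is a feature vector of indices into
# a library of cartoon features, chosen per category from scores that a trained
# model would predict for a photograph.

LIBRARY = {
    "hair":  ["short", "curly", "long", "bald"],
    "eyes":  ["round", "narrow", "wide"],
    "mouth": ["smile", "neutral", "open"],
}

def parameterize(scores):
    """scores[category] is a list of per-library-entry scores from a model."""
    return {cat: max(range(len(s)), key=s.__getitem__) for cat, s in scores.items()}

def describe(avatar_params):
    return {cat: LIBRARY[cat][idx] for cat, idx in avatar_params.items()}

# Stand-in for model output on one photo.
mock_scores = {
    "hair":  [0.1, 0.7, 0.15, 0.05],
    "eyes":  [0.2, 0.1, 0.7],
    "mouth": [0.8, 0.1, 0.1],
}
avatar = parameterize(mock_scores)    # e.g. {"hair": 1, "eyes": 2, "mouth": 0}
print(avatar, describe(avatar))
```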
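
For the caricature generation of patent 10559116, this sketch turns deviations of measured facial features from reference values into exaggerated blend-shape weights and applies one made-up interaction rule coupling two features. The feature names, exaggeration factor, and rule are assumptions, not the patented blend shapes.

```python
# A toy sketch (not Adobe's implementation): deviations of measured facial
# features from reference values drive caricature blend-shape weights, and a
# simple "interaction rule" couples two features so their distortions are
# applied jointly rather than independently.

REFERENCE = {"nose_length": 1.0, "eye_spacing": 1.0, "jaw_width": 1.0}

def deviations(measured):
    return {k: measured[k] - REFERENCE[k] for k in REFERENCE}

def blend_shape_weights(dev, exaggeration=2.5):
    # Base rule: each blend shape is weighted by the exaggerated deviation.
    weights = {k: exaggeration * d for k, d in dev.items()}
    # Interaction rule (illustrative): if the nose and the jaw are exaggerated in
    # the same direction, damp the jaw so the two distortions stay plausible together.
    if weights["nose_length"] * weights["jaw_width"] > 0:
        weights["jaw_width"] *= 0.5
    return weights

measured = {"nose_length": 1.15, "eye_spacing": 0.92, "jaw_width": 1.10}
print(blend_shape_weights(deviations(measured)))
```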
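
For the hair synthesis of patent 10515456, this sketch matches a coarse guidance orientation map against downsampled exemplar orientation maps, picks the best exemplar, and applies a flat appearance color modulated by orientation. The library of exemplars, the matching cost, and the shading trick are simplifying assumptions.

```python
# A toy sketch (assumptions throughout, not the patented pipeline): guidance
# strokes give a coarse orientation map; we pick the high-resolution exemplar
# orientation map from a tiny "library" whose downsampled orientations match
# best, then shade it with the flat color taken from an appearance exemplar.
import math

def angular_diff(a, b):
    """Difference between two orientations in radians (orientations wrap at pi)."""
    d = abs(a - b) % math.pi
    return min(d, math.pi - d)

def downsample(orient, factor):
    h, w = len(orient), len(orient[0])
    return [[orient[y * factor][x * factor] for x in range(w // factor)]
            for y in range(h // factor)]

def match_exemplar(input_map, exemplars, factor):
    def cost(ex):
        coarse = downsample(ex, factor)
        return sum(angular_diff(a, b)
                   for row_a, row_b in zip(input_map, coarse)
                   for a, b in zip(row_a, row_b))
    return min(exemplars, key=cost)

def colorize(orientation_map, base_rgb):
    # Darken pixels slightly with orientation as a crude stand-in for shading.
    return [[tuple(c * (0.8 + 0.2 * math.cos(2 * o)) for c in base_rgb)
             for o in row] for row in orientation_map]

# A 2x2 coarse guidance map and two 4x4 exemplar orientation maps (radians).
guidance = [[0.0, 0.0], [1.2, 1.2]]
flat     = [[0.0] * 4 for _ in range(4)]
swept    = [[0.0, 0.0, 0.1, 0.1], [0.0, 0.1, 0.1, 0.2],
            [1.1, 1.2, 1.2, 1.3], [1.2, 1.2, 1.3, 1.3]]
best = match_exemplar(guidance, [flat, swept], factor=2)
print("chose swept exemplar:", best is swept)
print(colorize(best, (0.45, 0.30, 0.15))[0][0])
```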
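
For the collision-aware motion stylization of patent 10467822, this sketch exaggerates two landmark trajectories and, if the exaggeration introduces a new collision, weakens the stylization until the new collision disappears. The 1-D trajectories, gap threshold, and gain schedule are assumptions standing in for the patented stylization.

```python
# A toy sketch (assumptions, not the patented method): two facial-landmark
# trajectories are exaggerated by a stylization gain; if the exaggeration
# introduces a new collision (the points get closer than a threshold), the gain
# is reduced until the new collision disappears.

def stylize(trajectory, mean, gain):
    """Exaggerate motion about its mean position by `gain` (1.0 = unchanged)."""
    return [mean + gain * (x - mean) for x in trajectory]

def collides(traj_a, traj_b, min_gap=0.5):
    return any(abs(a - b) < min_gap for a, b in zip(traj_a, traj_b))

def stylize_without_new_collisions(traj_a, traj_b, gain=2.0, step=0.1):
    mean_a = sum(traj_a) / len(traj_a)
    mean_b = sum(traj_b) / len(traj_b)
    already_colliding = collides(traj_a, traj_b)
    while gain > 1.0:
        a, b = stylize(traj_a, mean_a, gain), stylize(traj_b, mean_b, gain)
        if already_colliding or not collides(a, b):
            return a, b, gain          # no *new* collision introduced
        gain -= step                   # modified (weaker) stylization
    return traj_a, traj_b, 1.0

# Example: upper- and lower-lip landmarks (1-D vertical positions over time).
upper = [3.0, 2.7, 2.4, 2.7, 3.0]
lower = [1.0, 1.3, 1.6, 1.3, 1.0]
_, _, used_gain = stylize_without_new_collisions(upper, lower)
print("stylization gain actually applied:", round(used_gain, 2))
```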
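
For the liquid-on-lens effects of patent 9176662, this sketch warps a tiny grayscale image by offsetting each pixel's sample position along the gradient of a fluid-thickness height field, a cheap stand-in for the physically based fluid simulation and ray tracing the abstract describes. The image, height field, and strength constant are assumptions.

```python
# A toy sketch (not the patented simulator): a fluid-thickness height field over
# a virtual lens warps the image by offsetting each pixel's sample position
# along the local thickness gradient, as a cheap stand-in for the physically
# based fluid simulation and ray tracing described in the abstract.

def gradient(height, x, y):
    h, w = len(height), len(height[0])
    gx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * 0.5
    gy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * 0.5
    return gx, gy

def refract_image(image, height, strength=3.0):
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx, gy = gradient(height, x, y)
            sx = min(max(int(round(x + strength * gx)), 0), w - 1)
            sy = min(max(int(round(y + strength * gy)), 0), h - 1)
            out[y][x] = image[sy][sx]          # sample the displaced position
    return out

# A 5x5 grayscale "image" and a fluid droplet bulging in the center.
image  = [[10 * (x + y) for x in range(5)] for y in range(5)]
height = [[0, 0, 0, 0, 0],
          [0, 1, 2, 1, 0],
          [0, 2, 4, 2, 0],
          [0, 1, 2, 1, 0],
          [0, 0, 0, 0, 0]]
for row in refract_image(image, height):
    print(row)
```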