Patents Assigned to Adobe Systems Incorporated
  • Patent number: 10163269
    Abstract: Certain embodiments involve enhancing personalization of a virtual-commerce environment by identifying an augmented-reality visual of the virtual-commerce environment. For example, a system obtains a data set that indicates a plurality of augmented-reality visuals generated in a virtual-commerce environment and provided for view by a user. The system obtains data indicating a triggering user input that corresponds to a predetermined user input providable by the user as the user views an augmented-reality visual of the plurality of augmented-reality visuals. The system obtains data indicating a user input provided by the user. The system compares the user input to the triggering user input to determine a correspondence (e.g., a similarity) between the user input and the triggering user input. The system identifies a particular augmented-reality visual of the plurality of augmented-reality visuals that is viewed by the user based on the correspondence and stores the identified augmented-reality visual.
    Type: Grant
    Filed: February 15, 2017
    Date of Patent: December 25, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Gaurush Hiranandani, Chinnaobireddy Varsha, Sai Varun Reddy Maram, Kumar Ayush, Atanu Ranjan Sinha
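    Illustrative sketch: a minimal Python sketch of the input-matching step above, assuming user inputs are fixed-length numeric gesture vectors, each stored visual carries a timestamp, and correspondence is measured with a cosine-similarity threshold; the function names, data layout, and threshold are hypothetical, not taken from the patent.

      import numpy as np

      def identify_viewed_visual(visuals, user_input, triggering_input, threshold=0.9):
          """Return the AR visual viewed when the user's input corresponds to the
          predetermined triggering input, or None if there is no correspondence."""
          a = np.asarray(user_input, dtype=float)
          b = np.asarray(triggering_input, dtype=float)
          similarity = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
          if similarity < threshold:
              return None  # user input does not correspond to the triggering input
          # Take the most recently generated visual as the one being viewed.
          return max(visuals, key=lambda v: v["timestamp"])

      visuals = [{"id": "sofa-red", "timestamp": 10.2}, {"id": "sofa-blue", "timestamp": 14.7}]
      print(identify_viewed_visual(visuals, [0.9, 0.1, 0.0], [1.0, 0.0, 0.0]))  # sofa-blue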
  • Patent number: 10163003
    Abstract: Certain embodiments involve recognizing combinations of body shape, pose, and clothing in three-dimensional input images. For example, synthetic training images are generated based on user inputs. These synthetic training images depict different training figures with respective combinations of a body pose, a body shape, and a clothing item. A machine learning algorithm is trained to recognize the pose-shape-clothing combinations in the synthetic training images and to generate feature descriptors describing the pose-shape-clothing combinations. The trained machine learning algorithm is outputted for use by an image manipulation application. In one example, an image manipulation application uses a feature descriptor, which is generated by the machine learning algorithm, to match an input figure in an input image to an example image based on a correspondence between a pose-shape-clothing combination of the input figure and a pose-shape-clothing combination of an example figure in the example image.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: December 25, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Zhili Chen, Duygu Ceylan, Byungmoon Kim, Liwen Hu, Jimei Yang
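    Illustrative sketch: a rough Python sketch of the matching described in the last sentence of the abstract, reduced to a nearest-neighbor lookup over feature descriptors; the toy descriptors and the Euclidean metric are assumptions, since the patent does not specify them here.

      import numpy as np

      def match_example(input_descriptor, example_descriptors):
          """Index of the example figure whose pose-shape-clothing descriptor is
          closest to the descriptor of the input figure."""
          query = np.asarray(input_descriptor, dtype=float)
          examples = np.asarray(example_descriptors, dtype=float)  # shape (N, D)
          distances = np.linalg.norm(examples - query, axis=1)
          return int(np.argmin(distances))

      examples = [[0.0, 1.0, 0.5], [0.2, 0.9, 0.4], [1.0, 0.0, 0.0]]
      print(match_example([0.25, 0.85, 0.45], examples))  # -> 1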
  • Patent number: 10162829
    Abstract: Adaptive parallel data processing techniques are described. In one or more embodiments, a request is received to process a data file. The data file is split into multiple portions and sent to multiple nodes, where each node is configured to process a respective portion of the data file. Responsive to an amount of processing of the data file being completed, at least one of the multiple portions of the data file is dynamically split into multiple sub-portions. The sub-portions are submitted to one or more of the multiple nodes for processing of the sub-portions.
    Type: Grant
    Filed: September 3, 2013
    Date of Patent: December 25, 2018
    Assignee: Adobe Systems Incorporated
    Inventor: Peter S. MacLeod
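    Illustrative sketch: a simplified Python sketch of the dynamic re-splitting idea, using an in-memory byte range in place of a real data file and a thread pool in place of separate processing nodes; holding one portion back and the moment chosen to re-split it are illustrative assumptions.

      from concurrent.futures import ThreadPoolExecutor, as_completed

      def process(portion):
          """Stand-in for real work on bytes [start, end) of the data file."""
          start, end = portion
          return sum(range(start, end))

      def split(portion, parts):
          """Split a (start, end) byte range into roughly equal sub-ranges."""
          start, end = portion
          step = max(1, (end - start) // parts)
          return [(s, min(s + step, end)) for s in range(start, end, step)]

      def run(file_size=1_000_000, nodes=4):
          portions = split((0, file_size), nodes)
          held_back, submitted = portions[-1], portions[:-1]
          results = []
          with ThreadPoolExecutor(max_workers=nodes) as pool:
              for future in as_completed([pool.submit(process, p) for p in submitted]):
                  results.append(future.result())
              # An amount of processing has completed: dynamically split the
              # remaining portion into sub-portions and submit those instead.
              for future in as_completed([pool.submit(process, p) for p in split(held_back, nodes)]):
                  results.append(future.result())
          return sum(results)

      print(run() == sum(range(1_000_000)))  # True: every offset processed exactly once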
  • Patent number: 10163184
    Abstract: Techniques for providing enhanced graphics in a user interface by efficiently using enhanced graphics resources. A computing device displays the enhanced graphics in an upper view of the user interface and the enhanced graphics resources identify a visual region in which the enhanced graphics are positioned. The computing device accesses the enhanced graphics resources to identify and store a hit test region based on the visual region. The hit test region is stored separately from the enhanced graphics resources for hit testing. When a hit is received in the user interface, the computing device determines whether the upper view or lower view will respond to the hit based on the hit test region that is stored separately from the enhanced graphics resources.
    Type: Grant
    Filed: August 17, 2016
    Date of Patent: December 25, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: John Fitzgerald, Jesper Storm Bache
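    Illustrative sketch: a small Python sketch of the dispatch decision in the last sentence of the abstract, with the separately stored hit test region represented as a plain list of rectangles; that representation is an assumption for illustration.

      def in_rect(point, rect):
          """point is (x, y); rect is (x, y, width, height)."""
          x, y = point
          rx, ry, rw, rh = rect
          return rx <= x < rx + rw and ry <= y < ry + rh

      def route_hit(point, hit_test_region):
          """Decide which view responds to a hit in the user interface."""
          if any(in_rect(point, rect) for rect in hit_test_region):
              return "upper view"  # the enhanced graphics handle the hit
          return "lower view"      # the hit falls through to the view beneath

      hit_test_region = [(100, 50, 200, 80)]  # derived once from the visual region
      print(route_hit((150, 70), hit_test_region))  # -> upper view
      print(route_hit((10, 10), hit_test_region))   # -> lower view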
  • Patent number: 10165259
    Abstract: Embodiments are directed towards providing a target view, from a target viewpoint, of a 3D object. A source image, from a source viewpoint and including a common portion of the object, is encoded in 2D data. An intermediate image that includes an intermediate view of the object is generated based on the data. The intermediate view is from the target viewpoint and includes the common portion of the object and a disoccluded portion of the object not visible in the source image. The intermediate image includes a common region and a disoccluded region corresponding to the disoccluded portion of the object. The disoccluded region is updated to include a visual representation of a prediction of the disoccluded portion of the object. The prediction is based on a trained image completion model. The target view is based on the common region and the updated disoccluded region of the intermediate image.
    Type: Grant
    Filed: February 15, 2017
    Date of Patent: December 25, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Jimei Yang, Duygu Ceylan Aksit, Mehmet Ersin Yumer, Eunbyung Park
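    Illustrative sketch: a schematic Python sketch of the final compositing step, in which the disoccluded region of the intermediate image is replaced by a prediction and combined with the common region; the mean-color fill below is only a placeholder for the trained image completion model.

      import numpy as np

      def predict_disoccluded(intermediate, mask):
          """Placeholder for the trained image completion model: fill the
          disoccluded pixels with the mean color of the visible pixels."""
          filled = intermediate.copy()
          filled[mask] = intermediate[~mask].mean(axis=0)
          return filled

      def synthesize_target_view(intermediate, disoccluded_mask):
          """Combine the common region with the predicted disoccluded region."""
          prediction = predict_disoccluded(intermediate, disoccluded_mask)
          return np.where(disoccluded_mask[..., None], prediction, intermediate)

      intermediate = np.full((4, 4, 3), 0.5)   # toy intermediate image
      mask = np.zeros((4, 4), dtype=bool)
      mask[1:3, 1:3] = True                    # 2x2 disoccluded region
      print(synthesize_target_view(intermediate, mask).shape)  # (4, 4, 3)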
  • Patent number: 10162498
    Abstract: In some embodiments, a processor accesses electronic content that includes multiple selectable objects that are renderable in a graphical interface. The processor generates multiple selection areas respectively associated with the selectable objects. An input to the graphical interface received within each selection area selects an associated selectable object. Generating the selection areas includes generating a boundary around at least one of the selectable objects. Any point within the boundary is closer to the associated selectable object than any other selectable object. Generating the selection areas also includes clipping the boundary to define the selection area for the selectable object. The processor adds the selection areas to a document object model associated with the electronic content. The document object model is usable for rendering the graphical interface with the selectable objects and identifying the selection areas.
    Type: Grant
    Filed: February 21, 2017
    Date of Patent: December 25, 2018
    Assignee: Adobe Systems Incorporated
    Inventor: Nathan Carl Ross
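    Illustrative sketch: a minimal Python sketch of the selection rule the boundaries encode: an input point selects the nearest selectable object (a Voronoi-style assignment), clipped to the region in which selection areas exist. Representing each object by a single anchor point is an assumption; the patent works with the objects' rendered geometry.

      import math

      def nearest_object(point, objects, bounds):
          """Id of the selectable object whose anchor is closest to the input
          point, or None if the point lies outside the clipped selection areas.
          objects: {object_id: (x, y)}; bounds: (x, y, width, height)."""
          x, y = point
          bx, by, bw, bh = bounds
          if not (bx <= x < bx + bw and by <= y < by + bh):
              return None
          return min(objects, key=lambda k: math.dist(point, objects[k]))

      objects = {"node-a": (10, 10), "node-b": (60, 12), "node-c": (35, 80)}
      print(nearest_object((50, 20), objects, (0, 0, 100, 100)))  # -> node-b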
  • Patent number: 10163088
    Abstract: Data structures, methods, program products and systems for creating and executing an executable file for the Binary Runtime Environment for Wireless (BREW) where the file is capable of causing presentation of a document embedded in the file on a BREW system.
    Type: Grant
    Filed: December 7, 2016
    Date of Patent: December 25, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Rupen Chanda, Pruthvish Shankarappa
  • Patent number: 10163116
    Abstract: Embodiments of the present invention relate to a determination of a user's exclusivity toward a particular brand. User-specific entities are extracted from social media content associated with a user. At least a portion of the user-specific entities are brand-related entities that are specifically relevant to a particular brand. These brand-related entities are analyzed with respect to the user-specific entities extracted from the social media content to determine a level of exclusivity of the user to the brand.
    Type: Grant
    Filed: August 1, 2014
    Date of Patent: December 25, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Niyati Chhaya, Kokil Jaidka
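    Illustrative sketch: a toy Python sketch of one way a level of exclusivity could be quantified, as the share of a user's extracted entities that relate to the brand of interest; the entity lists and this simple ratio are illustrative assumptions, not the patent's actual analysis.

      def exclusivity_level(user_entities, brand_entities):
          """Fraction of the user's extracted entities that are specifically
          relevant to the particular brand (0.0 = none, 1.0 = exclusive)."""
          entities = [e.lower() for e in user_entities]
          brand_related = [e for e in entities if e in brand_entities]
          return len(brand_related) / len(entities) if entities else 0.0

      brand_entities = {"photoshop", "lightroom", "creative cloud"}
      user_entities = ["Photoshop", "hiking", "Lightroom", "coffee", "Photoshop"]
      print(exclusivity_level(user_entities, brand_entities))  # 0.6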
  • Publication number: 20180365856
    Abstract: Techniques and systems for digital image generation and capture hint data are described. In one example, a request is formed by an image capture device for capture hint data. The request describes a characteristic of an image scene that is to be a subject of a digital image. A communication is received via a network by the image capture device in response to the request. The communication includes capture hint data that is based at least in part on the characteristic. The capture hint data is displayed by a display device of the image capture device. The digital image of the image scene is then captured by the image capture device subsequent to the display of the capture hint data.
    Type: Application
    Filed: June 20, 2017
    Publication date: December 20, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Abhay Vinayak Parasnis, Oliver I. Goldman
  • Publication number: 20180364873
    Abstract: Inter-context coordination to facilitate synchronized presentation of image content is described. In example embodiments, an application includes multiple execution contexts that coordinate handling user interaction with a coordination policy established using an inter-context communication mechanism. The application produces first and second execution contexts that are responsible for user interaction with first and second image content, respectively. Generally, the second execution context provides a stipulation for the coordination policy to indicate which execution context is to handle a response to a given user input event. With an event routing policy, an event routing rule informs the first execution context whether a user input event should be routed to the second execution context.
    Type: Application
    Filed: August 22, 2018
    Publication date: December 20, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Ian A. Wehrman, John N. Fitzgerald, Joel R. Brandt, Jesper Storm Bache, David A. Tristram, Barkin Aygun
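    Illustrative sketch: a bare-bones Python sketch of an event routing policy in which the second execution context contributes a stipulation (here, a set of event types it wants to handle) and the first context consults the resulting rule for each user input event; the event names and dictionary-style rule are assumptions for illustration.

      class CoordinationPolicy:
          """Event routing rules agreed between the two execution contexts."""
          def __init__(self):
              self.routed_to_second = set()

          def stipulate(self, event_types):
              # Called by the second execution context over the inter-context
              # communication mechanism to claim certain user input events.
              self.routed_to_second.update(event_types)

          def route(self, event_type):
              return "second" if event_type in self.routed_to_second else "first"

      policy = CoordinationPolicy()
      policy.stipulate({"pinch", "two_finger_pan"})  # second context's stipulation
      for event in ["tap", "pinch", "drag", "two_finger_pan"]:
          print(event, "->", policy.route(event), "execution context")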
  • Publication number: 20180365906
    Abstract: Image compensation for an occluding direct-view augmented reality system is described. In one or more embodiments, an augmented reality apparatus includes an emissive display layer for presenting emissive graphics to an eye of a user and an attenuation display layer for presenting attenuation graphics between the emissive display layer and a real-world scene to block light of the real-world scene from the emissive graphics. A light region compensation module dilates an attenuation graphic based on an attribute of an eye of a viewer, such as size of a pupil, to produce an expanded attenuation graphic that blocks additional light to compensate for an unintended light region. A dark region compensation module camouflages an unintended dark region with a replica graphic in the emissive display layer that reproduces an appearance of the real-world scene in the unintended dark region. A camera provides the light data used to generate the replica graphic.
    Type: Application
    Filed: August 27, 2018
    Publication date: December 20, 2018
    Applicant: Adobe Systems Incorporated
    Inventor: Gavin Stuart Peter Miller
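    Illustrative sketch: a small Python sketch of the light-region compensation step, dilating the attenuation graphic (a binary mask) by an amount that grows with the viewer's pupil size; the linear pupil-to-iterations mapping is a made-up stand-in for whatever relationship the patent uses.

      import numpy as np
      from scipy.ndimage import binary_dilation

      def expand_attenuation(mask, pupil_diameter_mm):
          """Dilate the attenuation mask so it blocks the additional light that
          would otherwise leak around the emissive graphic for a large pupil."""
          # Assumption: one extra pixel of dilation per millimeter of pupil.
          iterations = max(1, int(round(pupil_diameter_mm)))
          return binary_dilation(mask, iterations=iterations)

      mask = np.zeros((9, 9), dtype=bool)
      mask[4, 4] = True                              # original attenuation graphic
      expanded = expand_attenuation(mask, pupil_diameter_mm=3)
      print(mask.sum(), "->", expanded.sum())        # 1 -> 25 pixels blocked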
  • Publication number: 20180367729
    Abstract: Digital image generation through use of capture support data is described. In one example, an image capture device includes a pre-capture system configured to obtain capture support data from an imaging support system via a network. The capture support data is configured for use by a digital image processor, along with raw image data received from an image sensor, to generate a digital image, e.g., a digital image that is configured for rendering.
    Type: Application
    Filed: June 20, 2017
    Publication date: December 20, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Abhay Vinayak Parasnis, Oliver I. Goldman
  • Publication number: 20180365709
    Abstract: Techniques are disclosed for generating personalized creator recommendations to viewers interested in viewing and interacting with creative works, in the context of a creative platform for publishing and viewing creative works. For each creator, a vector is generated indicating that creator's creative output with respect to a set of one or more creative fields. For each viewer, a vector is generated indicating that viewer's affinity with respect to the same set of creative fields. For a given viewer, a score is calculated for each creator based upon the vector associated with the viewer and the vector associated with that creator (e.g., based on a vector similarity computation). The creators are then ranked for the given viewer using their respective scores, and a set of one or more personalized recommendations is provided to the viewer based upon the ranking.
    Type: Application
    Filed: June 16, 2017
    Publication date: December 20, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Natwar Modani, Palak Agarwal, Gaurav Kumar Gupta, Deepali Jain, Ujjawal Soni
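    Illustrative sketch: a compact Python sketch of the scoring and ranking described above, using cosine similarity between a viewer's field-affinity vector and each creator's field-output vector; the creative fields and vectors are toy data.

      import numpy as np

      def cosine(a, b):
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      def recommend(viewer_vec, creator_vecs, top_k=2):
          """Rank creators for one viewer by vector similarity and return the
          names of the top-k creators as personalized recommendations."""
          scores = {name: cosine(viewer_vec, vec) for name, vec in creator_vecs.items()}
          return sorted(scores, key=scores.get, reverse=True)[:top_k]

      # Creative fields: [illustration, photography, typography]
      creators = {
          "ana":  np.array([0.9, 0.1, 0.0]),   # mostly illustration
          "ben":  np.array([0.1, 0.8, 0.1]),   # mostly photography
          "cleo": np.array([0.3, 0.3, 0.4]),   # mixed output
      }
      viewer = np.array([0.7, 0.1, 0.2])       # affinity: mostly illustration
      print(recommend(viewer, creators))       # ['ana', 'cleo']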
  • Publication number: 20180365874
    Abstract: Techniques are disclosed for performing manipulation of facial images using an artificial neural network. A facial rendering and generation network and method learns one or more compact, meaningful manifolds of facial appearance, by disentanglement of a facial image into intrinsic facial properties, and enables facial edits by traversing paths of such manifold(s). The facial rendering and generation network is able to handle a much wider range of manipulations including changes to, for example, viewpoint, lighting, expression, and even higher-level attributes like facial hair and age—aspects that cannot be represented using previous models.
    Type: Application
    Filed: June 14, 2017
    Publication date: December 20, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Sunil Hadap, Elya Shechtman, Zhixin Shu, Kalyan Sunkavalli, Mehmet Yumer
  • Patent number: 10157471
    Abstract: A computer-implemented method for visually aligning an object includes calculating a weighted distribution of the brightness of an object, determining a center point of the object using the weighted distribution of the brightness of the object, and automatically aligning the object using the center point of the object.
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: December 18, 2018
    Assignee: Adobe Systems Incorporated
    Inventor: Fabin Rasheed
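    Illustrative sketch: a short Python sketch of the method in the abstract, computing the brightness-weighted centroid of an object and using it as the center point for alignment; grayscale input and centering within the canvas are assumptions.

      import numpy as np

      def weighted_center(brightness):
          """Center point of the object as the centroid of pixel brightness."""
          brightness = np.asarray(brightness, dtype=float)
          ys, xs = np.indices(brightness.shape)
          total = brightness.sum()
          return (float((xs * brightness).sum() / total),
                  float((ys * brightness).sum() / total))

      def alignment_offset(brightness):
          """Translation that moves the brightness-weighted center point onto
          the geometric center of the object's canvas."""
          cx, cy = weighted_center(brightness)
          h, w = np.asarray(brightness).shape
          return ((w - 1) / 2 - cx, (h - 1) / 2 - cy)

      img = np.zeros((5, 5))
      img[0, 0] = 1.0   # a bright pixel in the top-left corner
      img[0, 1] = 3.0   # a brighter neighbor pulls the center point right
      print(weighted_center(img))   # (0.75, 0.0)
      print(alignment_offset(img))  # (1.25, 2.0)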
  • Patent number: 10158682
    Abstract: Techniques for influencing power consumption of a client while streaming multimedia content from a server over a network are described. For example, a server push strategy is used to push a number of media segments of the multimedia content from the server to the client in response to a single request identifying one of the media segments. Thus, instead of using multiple requests, the media segments are provided to the client by using a single request. Reducing the number of requests influences (e.g., reduces) the power consumption of the client. To optimize the power consumption given current client, server, and/or network conditions, the number of media segments to be pushed is computed based on parameters associated with these conditions.
    Type: Grant
    Filed: September 23, 2015
    Date of Patent: December 18, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Sheng Wei, Viswanathan Swaminathan
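    Illustrative sketch: a toy Python sketch of computing how many media segments to push per request; the heuristic below (cover a target radio-idle interval, capped by remaining buffer space) is purely illustrative and not the patent's actual computation from client, server, and network parameters.

      def segments_to_push(segment_duration_s, target_idle_interval_s,
                           buffer_capacity_s, buffered_s):
          """Number of segments the server pushes in response to one request."""
          # Enough segments for the client's radio to idle for the target
          # interval, but never more than the client's buffer can absorb.
          wanted = max(1, round(target_idle_interval_s / segment_duration_s))
          room = int((buffer_capacity_s - buffered_s) // segment_duration_s)
          return max(1, min(wanted, room))

      # 2 s segments, ~10 s of desired idle time, 30 s buffer with 6 s filled.
      print(segments_to_push(2.0, 10.0, 30.0, 6.0))  # -> 5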
  • Publication number: 20180357245
    Abstract: Generating animated seek previews for panoramic videos is described. In one or more implementations, a video frame associated with a seek point of a panoramic video is received. The video frame is reverse projected to generate a 3D projection. Portions of the 3D projection are then formed that are centered on and span an equatorial axis, and each portion is projected to a 2D plane to generate 2D projections of the portions. Animation frames are generated based on the projected portions, and the animation frames are compiled into an animation for consumption by a user as an animated seek preview of the video frame corresponding to the seek point.
    Type: Application
    Filed: June 13, 2017
    Publication date: December 13, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Tulika Garg, Neeraj Goel
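    Illustrative sketch: a much-simplified Python sketch of building the animated seek preview from one frame, sweeping a horizontal window around the equatorial band of the equirectangular frame and using each window as an animation frame; this crops the 2D frame directly rather than performing the reverse projection to 3D and re-projection to 2D planes described above.

      import numpy as np

      def seek_preview_frames(equirect_frame, num_frames=8, fov_fraction=0.25):
          """equirect_frame: H x W x 3 array for the frame at the seek point.
          Returns crops sweeping around the equator, one per yaw step."""
          h, w, _ = equirect_frame.shape
          win_w = int(w * fov_fraction)              # horizontal window size
          band = equirect_frame[h // 4: 3 * h // 4]  # band around the equator
          frames = []
          for i in range(num_frames):
              x = (i * w) // num_frames
              rolled = np.roll(band, -x, axis=1)     # wrap the 360-degree seam
              frames.append(rolled[:, :win_w])
          return frames

      frame = np.random.rand(256, 512, 3)
      preview = seek_preview_frames(frame)
      print(len(preview), preview[0].shape)          # 8 (128, 128, 3)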
  • Publication number: 20180357519
    Abstract: A combined structure and style network is described. Initially, a large set of training images, having a variety of different styles, is obtained. Each of these training images is associated with one of multiple different predetermined style categories indicating the image's style and one of multiple different predetermined semantic categories indicating objects depicted in the image. Groups of these images are formed, such that each group includes an anchor image having one of the styles, a positive-style example image having the same style as the anchor image, and a negative-style example image having a different style. Based on those groups, an image style network is generated to identify images having desired styling by recognizing visual characteristics of the different styles. The image style network is further combined, according to a unifying training technique, with an image structure network configured to recognize desired objects in images irrespective of image style.
    Type: Application
    Filed: June 7, 2017
    Publication date: December 13, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Hailin Jin, John Philip Collomosse
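    Illustrative sketch: a minimal numpy sketch of the triplet objective implied by the anchor / positive-style / negative-style grouping, pulling same-style embeddings together and pushing different-style embeddings apart by a margin; the toy embeddings and margin value are placeholders for the convolutional networks actually trained.

      import numpy as np

      def triplet_loss(anchor, positive, negative, margin=0.2):
          """Hinge loss on style embeddings: the anchor should sit closer to the
          positive-style example than to the negative-style example by `margin`."""
          d_pos = np.linalg.norm(anchor - positive)
          d_neg = np.linalg.norm(anchor - negative)
          return max(0.0, d_pos - d_neg + margin)

      anchor   = np.array([0.1, 0.9, 0.0, 0.2])  # watercolor anchor image
      positive = np.array([0.2, 0.8, 0.1, 0.2])  # another watercolor image
      negative = np.array([0.9, 0.1, 0.7, 0.0])  # a line-drawing image
      print(triplet_loss(anchor, positive, negative))  # 0.0 (already well separated)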
  • Publication number: 20180359414
    Abstract: A technique for modifying digital video includes receiving a plurality of digital video frames recorded by a camera. Each frame has a spherical field of view and a viewing angle associated therewith, where the viewing angle is with respect to a fixed reference frame. A motion of the camera relative to the fixed reference frame is calculated across at least some of the digital video frames. The viewing angle associated with each digital video frame is reoriented during post-processing of the digital video frames based at least in part on the calculated motion of the camera and at least one constraint to produce a digitally modified video such that the viewing angle associated with at least one of the reoriented digital video frames is different than the viewing angle associated with the same digital video frame before reorientation.
    Type: Application
    Filed: June 12, 2017
    Publication date: December 13, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Oliver Wang, Chengzhou Tang
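    Illustrative sketch: a stripped-down, yaw-only Python sketch of the post-processing step, cancelling the measured camera rotation in each frame and then constraining how fast the resulting viewing angle may change; real footage requires full 3D rotations, and the per-frame yaw values and smoothness constraint here are illustrative.

      import numpy as np

      def reorient_yaw(camera_yaw_deg, max_change_per_frame=2.0):
          """camera_yaw_deg: per-frame camera yaw relative to the fixed reference
          frame. Returns a reoriented viewing angle for each frame."""
          desired = -np.asarray(camera_yaw_deg, dtype=float)  # cancel camera motion
          out = [desired[0]]
          for target in desired[1:]:
              step = np.clip(target - out[-1],
                             -max_change_per_frame, max_change_per_frame)
              out.append(out[-1] + step)                      # smoothness constraint
          return np.array(out)

      camera_yaw = [0, 1, 5, 12, 12, 11]       # shaky pan measured across frames
      print(reorient_yaw(camera_yaw))          # [ 0. -1. -3. -5. -7. -9.]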
  • Publication number: 20180357259
    Abstract: Sketch and style based image retrieval in a digital medium environment is described. Initially, a user sketches an object (e.g., with a stylus) to be searched in connection with an image search. Styled images are selected to indicate a desired style of images to be returned by the search. A search request is generated based on the sketch and selected images. Responsive to the request, an image repository is searched to identify images having the desired object and styling. To search the image repository, a neural network is utilized that is capable of recognizing the desired object in images based on visual characteristics of the sketch and independently recognizing the desired styling in images based on visual characteristics of the selected images. This independent recognition allows desired styling to be specified by selecting images having the style but not the desired object. Images having the desired object and styling are returned.
    Type: Application
    Filed: June 9, 2017
    Publication date: December 13, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Hailin Jin, John Philip Collomosse
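    Illustrative sketch: a skeletal Python sketch of the retrieval scoring implied above, in which each repository image receives an object score against the sketch embedding and an independent style score against the selected style images, and the two are combined; the embeddings, weights, and cosine scoring stand in for the neural network the abstract describes.

      import numpy as np

      def cosine(a, b):
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      def search(sketch_obj_vec, style_vecs, repository, w_obj=0.5, w_style=0.5, top_k=2):
          """repository: {image_id: (object_embedding, style_embedding)}.
          Object and style are scored independently, then combined."""
          style_target = np.mean(style_vecs, axis=0)  # desired styling
          scores = {
              image_id: w_obj * cosine(sketch_obj_vec, obj) +
                        w_style * cosine(style_target, sty)
              for image_id, (obj, sty) in repository.items()
          }
          return sorted(scores, key=scores.get, reverse=True)[:top_k]

      repository = {
          "img1": (np.array([1.0, 0.0]), np.array([0.9, 0.1])),  # object and style match
          "img2": (np.array([1.0, 0.1]), np.array([0.1, 0.9])),  # object matches only
          "img3": (np.array([0.0, 1.0]), np.array([0.9, 0.2])),  # style matches only
      }
      print(search(np.array([1.0, 0.0]), [np.array([1.0, 0.0])], repository))  # ['img1', 'img2']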