Animation Patents (Class 345/473)
-
Patent number: 11164474
Abstract: Disclosed herein are methods and systems for user-interface-assisted composition construction. In an embodiment, a plurality of input fields, each having an argument-element type, is presented via a client-side user interface, initially arranged according to a predefined sequence. Textual inputs are received via the input fields, and corresponding argument-building-block elements are responsively presented via the client-side user interface according to a current arrangement on the client-side user interface of the input fields. Each presented argument-building-block element has the argument-element type of, and the received textual input of, the corresponding user-interface input field.
Type: Grant
Filed: February 3, 2017
Date of Patent: November 2, 2021
Assignee: ThinkCERCA.com, Inc.
Inventors: Eileen Murphy, Joshua Tolman
-
Patent number: 11161246
Abstract: The present disclosure provides a robot path planning method as well as an apparatus and a robot using the same. The method includes: obtaining a grid map and obtaining the positions of obstacles and tracks in the grid map; determining a cost of the grids of the grid map based on the positions of the obstacles and tracks; generating a grid cost map based on the cost of the grids and the grid map; and planning a global path of the robot from a current position to a destination position based on the grid cost map. In this manner, the method effectively integrates free navigation and track navigation, thereby improving the flexibility of obstacle avoidance and ensuring the safety of obstacle avoidance of the robot.
Type: Grant
Filed: January 5, 2020
Date of Patent: November 2, 2021
Assignee: UBTECH ROBOTICS CORP LTD
Inventors: Hongjian Liu, Zhichao Liu, Jian Zhang, Simin Zhang, Yun Zhao, Youjun Xiong, Jianxin Pang
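The cost-map-and-search approach this abstract describes can be illustrated with a short sketch. The cost scheme below (impassable obstacles, cheap track cells, moderate free cells) and the use of Dijkstra's algorithm are assumptions for illustration, not the patented method:

```python
import heapq

def plan_path(grid_w, grid_h, obstacles, tracks, start, goal):
    """Plan a global path on a grid cost map. Hypothetical costs:
    obstacle cells are impassable, track cells are cheap (preferred),
    free cells cost more, so the path hugs tracks when possible."""
    def cell_cost(cell):
        if cell in obstacles:
            return None              # impassable
        return 1 if cell in tracks else 5

    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, cur = heapq.heappop(pq)
        if cur == goal:
            break
        if d > dist.get(cur, float("inf")):
            continue
        x, y = cur
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nb[0] < grid_w and 0 <= nb[1] < grid_h):
                continue
            c = cell_cost(nb)
            if c is None:
                continue
            nd = d + c
            if nd < dist.get(nb, float("inf")):
                dist[nb], prev[nb] = nd, cur
                heapq.heappush(pq, (nd, nb))
    if goal not in dist:
        return None                  # goal unreachable
    path, cur = [goal], goal
    while cur != start:
        cur = prev[cur]
        path.append(cur)
    return path[::-1]
```

Any uniform-cost search works here; the point is that obstacle and track information is folded into per-cell costs before planning, which is what lets one planner handle both free navigation and track-following.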
-
Patent number: 11160319
Abstract: A smart mask includes a first material layer, at least one display, a first sensor, and a control module. The first material layer is configured to cover a portion of a face of a person. The at least one display is connected to the first material layer and configured to display images over a mouth of the person. The first sensor is configured to detect movement of the mouth of the person and generate a signal indicative of the movement of the mouth. The control module is configured to receive the signal and display the images on the display based on the movement of the mouth.
Type: Grant
Filed: August 11, 2020
Date of Patent: November 2, 2021
Assignee: NANTWORKS, LLC
Inventors: Nicholas James Witchey, Patrick Soon-Shiong
-
Patent number: 11157137
Abstract: Systems, methods, and computer program products for providing a dynamic interactive seat map are disclosed. A computer-implemented method may include receiving a base map illustrating locations of sections within an event venue, receiving polygon coordinates for a section depicted in the base map, determining a plurality of characteristics comprising a fill color, a stroke color, and a transparency for the section, and displaying an interactive seat map having the determined characteristics applied to the section of the base map.
Type: Grant
Filed: May 19, 2016
Date of Patent: October 26, 2021
Assignee: StubHub, Inc.
Inventor: Benjamin Salles
-
Patent number: 11158108
Abstract: Systems and methods for providing a mixed-reality pass-through experience include acts of obtaining a texture map of a real-world environment, obtaining a depth map of the real-world environment, obtaining an updated texture map of the real-world environment subsequent to the obtaining of the depth map and the texture map, and rendering a virtual representation of the real-world environment utilizing both the depth map and the updated texture map that was obtained subsequent to the depth map. The texture map and the depth map may be based on a same image pair obtained from a pair of stereo cameras, the depth map being obtained by performing stereo matching on the same image pair. Additionally, the acts may further include detecting a predicted pose of a user and reprojecting a portion of the depth map to conform to a user perspective associated with the predicted pose.
Type: Grant
Filed: December 4, 2019
Date of Patent: October 26, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Bleyer, Christopher Douglas Edmonds, Donald John Patrick O'Neil, Raymond Kirk Price
-
Patent number: 11157745
Abstract: Automated discovery of the relative positioning of a network of cameras that view a physical environment. The automated discovery is based on comparing TimeLines for the cameras. The TimeLines are time-stamped data relating to the camera's view, for example a sequence of time stamps and corresponding images captured by a camera at those time stamps. In one approach, the relative positioning is represented by a proximity graph of nodes connected by edges. The nodes represent spaces in the physical environment, and each edge between two nodes represents a pathway between the spaces represented by the two nodes.
Type: Grant
Filed: February 20, 2018
Date of Patent: October 26, 2021
Assignee: Scenera, Inc.
Inventors: David D. Lee, Andrew Augustine Wajs, Seungoh Ryu, Chien Lim
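One way to picture the TimeLine comparison is a co-occurrence heuristic: if activity at camera B repeatedly follows activity at camera A within a short window, infer a pathway between the spaces they view. The window size, the counting rule, and the threshold below are all illustrative assumptions, not the patented algorithm:

```python
from collections import defaultdict

def build_proximity_graph(timelines, max_gap=5.0, min_count=2):
    """timelines: {camera_id: sorted timestamps at which that camera saw
    activity}. If activity at camera b follows activity at camera a
    within max_gap seconds at least min_count times, add an edge
    (pathway) between the two cameras' spaces."""
    counts = defaultdict(int)
    cams = list(timelines)
    for a in cams:
        for b in cams:
            if a == b:
                continue
            for ta in timelines[a]:
                # any sighting at b shortly after this sighting at a?
                if any(0 < tb - ta <= max_gap for tb in timelines[b]):
                    counts[(a, b)] += 1
    # undirected edges: a pathway works in either direction
    return {frozenset(pair) for pair, n in counts.items() if n >= min_count}
```

A production system would also have to match which object moved (not just that something moved), but the graph-building step has this shape.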
-
Patent number: 11151794
Abstract: A method of generating an augmented reality lens comprises: causing to display a list of lens categories on a display screen of a client device; receiving a user choice from the displayed list; causing to prepopulate a lens features display on the display device based on the user choice, wherein each lens feature comprises image transformation data configured to modify or overlay video or image data; receiving a user selection of a lens feature from the prepopulated lens display; receiving a trigger selection that activates the lens feature to complete the lens; and saving the completed lens to a memory of a computer device.
Type: Grant
Filed: August 16, 2019
Date of Patent: October 19, 2021
Assignee: Snap Inc.
Inventors: Oleksandr Chepizhenko, Jean Luo, Bogdan Maksymchuk, Vincent Sung, Ashley Michelle Wayne
-
Patent number: 11148063
Abstract: The present specification describes systems and methods that enable non-players to participate as spectators in online video games and, through a collective voting mechanism, determine the occurrence of certain events or contents of the gameplay in real time. Game event options are generated and presented to non-players. A specific one of the game event options is then selected based on a collective vote of the non-players. Once selected, the specific one or more of the game event options are then generated as actual gaming events and incorporated into a video game stream that is transmitted to the players as part of the gameplay session. In this manner, non-players may be able to directly affect the course of gameplay.
Type: Grant
Filed: February 11, 2020
Date of Patent: October 19, 2021
Assignee: Activision Publishing, Inc.
Inventors: Josiah Eatedali, Jon Estanislao, Etienne Pouliot, Dave Bergeron, Maxime Babin, Mario Beckman Notaro
-
Patent number: 11151979
Abstract: A method and apparatus include receiving a text input that includes a sequence of text components. Respective temporal durations of the text components are determined using a duration model. A spectrogram frame is generated based on the duration model. An audio waveform is generated based on the spectrogram frame. Video information is generated based on the audio waveform. The audio waveform is provided as an output along with a corresponding video.
Type: Grant
Filed: August 23, 2019
Date of Patent: October 19, 2021
Assignee: TENCENT AMERICA LLC
Inventors: Heng Lu, Chengzhu Yu, Dong Yu
-
Patent number: 11151768
Abstract: There is provided an information processing apparatus including an operation unit acquiring an input operation for a message composed of at least one of text and an image, a recording control unit recording the message in accordance with the acquired input operation, and a reproduction control unit reproducing the recorded message to display the message on a display unit.
Type: Grant
Filed: April 30, 2020
Date of Patent: October 19, 2021
Assignee: SONY CORPORATION
Inventors: Takurou Noda, Yasushi Okumura
-
Patent number: 11150793
Abstract: A method and system for indicating a priority of a first linked node and a second linked node within a plurality of linked nodes associated with an electronically interactive social relations service. The method may include assigning a weight to the first linked node within the plurality of linked nodes. The method may also include assigning a weight to the second linked node within the plurality of linked nodes. Additionally, the method may include determining an adjustment of the weight of the first linked node within the plurality of linked nodes. The method may further include adjusting the weight of the second linked node based on the determining of the adjustment of the weight of the first linked node, the adjusting of the weight of the second linked node corresponding to a link strength of the second linked node relative to the first linked node.
Type: Grant
Filed: April 30, 2019
Date of Patent: October 19, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Elizabeth M Daly, Michael Muller
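The propagation rule the abstract describes, where an adjustment to one node's weight cascades to a linked node in proportion to their link strength, can be sketched briefly. The proportional formula is an assumed scheme for illustration:

```python
def adjust_weights(weights, strengths, node_a, delta):
    """Adjust node_a's weight by delta, then propagate a proportional
    adjustment to each node linked to node_a, scaled by the link
    strength of that node relative to node_a.
    weights:   {node: current weight}
    strengths: {node: {linked node: link strength in [0, 1]}}"""
    weights = dict(weights)          # leave the caller's dict untouched
    weights[node_a] += delta
    for node_b, s in strengths.get(node_a, {}).items():
        weights[node_b] += delta * s
    return weights
```

With a strength of 0.5, a +2 adjustment to the first node yields a +1 adjustment to the second, which is the "corresponding to a link strength" behavior in the claim language.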
-
Patent number: 11145122
Abstract: The various embodiments of the disclosure disclose a system and method for enhancing augmented reality experience on one or more connected user equipment (UEs) using in-device contents. The method comprises performing, by a connected User Equipment (UE), an automatic registration of one or more model objects, performing, by the connected UE, at least one of a user interest based content analysis, semantic based content analysis, and context based content analysis of the in-device contents, identifying the one or more registered model objects to associate with the analyzed content, and associating, by the connected UE, the in-device contents with the one or more registered model objects to enhance the augmented reality experience with the in-device contents.
Type: Grant
Filed: March 9, 2018
Date of Patent: October 12, 2021
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Dipin Kollencheri Puthenveettil
-
Patent number: 11138207
Abstract: Various embodiments relate generally to a system, a device and a method for expression-based retrieval of expressive media content. A request may be received to search for content items in a media content management system. Media content items may be procured from different content sources through application programming interfaces, user devices, and/or web servers. Media content items may be analyzed to determine one or more metadata attributes, including an expression. Metadata attributes may be stored as one or more content associations. The media content items may be stored and categorized based on the content associations. A search router rules engine may determine search intent based on the search query, which may include a pictorial representation of an expression, such as an emoji. A dynamic interface may be integrated in a device operating system through various access points, including a button, a trigger key, a keyword trigger, and an overlay button.
Type: Grant
Filed: February 14, 2016
Date of Patent: October 5, 2021
Assignee: Google LLC
Inventors: David McIntosh, Erick Hachenburg, Bryan Hart, Kyler Blue, Jeff Sinckler, Steven Dobek
-
Patent number: 11134908
Abstract: Technologies for determining the spatial orientation of input imagery to produce a three-dimensional model include a device having circuitry to obtain two-dimensional images of an anatomical object (e.g., a bone of a human joint), to determine candidate values indicative of translation and rotation of the anatomical object in the two-dimensional images, and to produce, as a function of the obtained two-dimensional images and the candidate values, a candidate three-dimensional model of the anatomical object. The circuitry is also to determine a score indicative of an accuracy of the candidate three-dimensional model, to determine whether the score satisfies a threshold, and to produce, in response to a determination that the score satisfies the threshold, data indicating that the candidate three-dimensional model is an accurate representation of the anatomical object.
Type: Grant
Filed: September 27, 2019
Date of Patent: October 5, 2021
Assignee: DePuy Synthes Products, Inc.
Inventors: Shawnoah S. Pollock, R. Patrick Courtis
-
Patent number: 11132913
Abstract: Systems and methods are provided for acquiring physical-world data indicative of interactions of a subject with an avatar for evaluation. An interactive avatar is provided for interaction with the subject. Speech from the subject to the avatar is captured, and automatic speech recognition is performed to determine content of the subject speech. Motion data from the subject interacting with the avatar is captured. A next action of the interactive avatar is determined based on the content of the subject speech or the motion data. The next action of the avatar is implemented, and a score for the subject is determined based on the content of the subject speech and the motion data.
Type: Grant
Filed: April 20, 2016
Date of Patent: September 28, 2021
Assignee: Educational Testing Service
Inventors: Vikram Ramanarayanan, Mark Katz, Eric Steinhauer, Ravindran Ramaswamy, David Suendermann-Oeft
-
Patent number: 11132606
Abstract: A method for training an animation character, including mapping first animation data defining a first motion sequence to a first subset of bones of a trained character, and mapping second animation data defining a second motion sequence to a second subset of bones. A bone hierarchy includes the first subset of bones and second subset of bones. Reinforcement learning is applied iteratively for training the first subset of bones using the first animation data and for training the second subset of bones using the second animation data. Training of each subset of bones is performed concurrently at each iteration. Training includes adjusting orientations of bones. The first subset of bones is composited with the second subset of bones at each iteration by applying physics parameters of a simulation environment to the adjusted orientations of bones in the first and second subset of bones.
Type: Grant
Filed: March 15, 2019
Date of Patent: September 28, 2021
Assignee: Sony Interactive Entertainment Inc.
Inventor: Michael Taylor
-
Patent number: 11133005
Abstract: Systems and methods are described herein for disambiguating a voice search query that contains a command keyword by determining whether the user spoke a quotation from a content item and whether the user mimicked or approximated the way the quotation is spoken in the content item. The voice search query is transcribed into a string, and an audio signature of the voice search query is identified. Metadata of a quotation matching the string is retrieved from a database that includes audio signature information for the string as spoken within the content item. The audio signature of the voice search query is compared with the audio signature information in the metadata to determine whether the audio signature matches the audio signature information in the quotation metadata. If a match is detected, then a search result comprising an identifier of the content item from which the quotation comes is generated.
Type: Grant
Filed: April 29, 2019
Date of Patent: September 28, 2021
Assignee: Rovi Guides, Inc.
Inventors: Ankur Aher, Sindhuja Chonat Sri, Aman Puniyani, Nishchit Mahajan
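The two-stage match the abstract describes (text lookup, then audio-signature comparison) can be sketched with toy vector signatures and cosine similarity. The signature representation, the similarity measure, and the threshold are all assumptions for illustration; real audio signatures are far richer than a short feature vector:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    num = sum(x * y for x, y in zip(u, v))
    den = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return num / den if den else 0.0

def match_quotation(query_text, query_sig, quotation_db, threshold=0.9):
    """quotation_db maps a transcribed quotation string to
    (reference signature, content id). Return the content id only when
    both the text matches and the query was spoken similarly enough to
    the way the quotation is spoken in the content item."""
    entry = quotation_db.get(query_text.lower())
    if entry is None:
        return None                       # no quotation with this text
    ref_sig, content_id = entry
    return content_id if cosine(query_sig, ref_sig) >= threshold else None
```

The signature check is what separates "play I'll be back" as a command from the user doing the line as an impression, which is the disambiguation the patent is after.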
-
Patent number: 11134115
Abstract: Embodiments of the invention provide for live encoding systems that can replicate a current encoded frame instead of re-encoding said current frame, and then adjust the replicated frame to different bitrates, resolutions, and/or contexts as necessary for the several different adaptive bitrate streams. In addition, various embodiments of the invention can extend a duration of a current frame being repackaged and/or re-encoded. Utilizing these and other techniques, live encoding systems in accordance with embodiments of the invention can more efficiently handle gaps in received data, slower feeding of data, and/or heavy loads on server hardware.
Type: Grant
Filed: July 10, 2020
Date of Patent: September 28, 2021
Assignee: DIVX, LLC
Inventors: Yuri Bulava, Pavel Potapov
-
Patent number: 11127375
Abstract: The embodiments described herein provide devices and methods for image processing. Specifically, the embodiments described herein provide techniques for blending graphical layers together into an image for display. In general, these techniques utilize multiple display control units to blend together more layers than could be achieved using a single display control unit. This blending of additional layers can provide improved image quality compared to traditional techniques that use only the blending capability of a single display control unit.
Type: Grant
Filed: November 17, 2015
Date of Patent: September 21, 2021
Assignee: NXP USA, INC.
Inventors: Cristian Corneliu Tomescu, Dragos Papava
-
Patent number: 11123636
Abstract: A method of runtime animation substitution may include detecting, by a processing device of a video game console, an interaction scenario in an instance of an interactive video game, wherein the interaction scenario comprises a target animation associated with a game character. The method may further include identifying, by the processing device, a valid transitional animation. The method may further include causing, by the processing device, the valid transitional animation to be performed by the game character in the instance of the interactive video game.
Type: Grant
Filed: June 7, 2019
Date of Patent: September 21, 2021
Assignee: Electronic Arts Inc.
Inventors: Simon Sherr, Brett Peake
-
Patent number: 11127225
Abstract: A method of fitting a three dimensional (3D) model to input data is described. Input data comprises a 3D scan and associated appearance information. The 3D scan depicts a composite object having elements from at least two classes. A texture model is available which, given an input vector, computes, for each of the classes, a texture and a mask. A joint optimization is computed to find values of the input vector and values of parameters of the 3D model, where the optimization enforces that the 3D model, instantiated by the values of the parameters, gives a simulated texture which agrees with the input data in a region specified by the mask associated with the 3D model; such that the 3D model is fitted to the input data.
Type: Grant
Filed: June 1, 2020
Date of Patent: September 21, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Marek Adam Kowalski, Virginia Estellers Casas, Thomas Joseph Cashman, Charles Thomas Hewitt, Matthew Alastair Johnson, Tadas Baltru{hacek over (s)}aitis
-
Patent number: 11120599
Abstract: A method, system, and computer program product for detecting, by measuring a signal indicative of a movement of a facial muscle, a motion pattern; deriving, from the motion pattern, a derived motion pattern, wherein the motion pattern and the derived motion pattern each corresponds to different emotional responses of a class of emotional responses; creating an emotional model for the class of emotional responses based on the motion pattern and the derived motion pattern; and reconfiguring the derived motion pattern to a new motion pattern by (i) comparing the new motion pattern and the derived motion pattern and (ii) associating, based on the comparison, the new motion pattern with the class of emotional responses, wherein the derived motion pattern and the new motion pattern fail to be detected via muscle movement prior to the motion pattern.
Type: Grant
Filed: November 8, 2018
Date of Patent: September 14, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Sarbajit K. Rakshit, Martin G. Keen, John M. Ganci, Jr., James E. Bostick
-
Patent number: 11120897
Abstract: The present system is configured to track informal observations by multiple caregivers about a care recipient and provide actionable feedback to the multiple caregivers for managing health of the care recipient based on the informal observations. Informal caregivers are constantly observing the health and/or wellness of care recipients they provide care for. Within families, for example, multiple informal caregivers coordinate the care they provide for a care recipient amongst each other to balance the workload. These caregivers observe the same care recipient often on different occasions, from different perspectives, and with varying levels of subjectivity. Abnormal behavior by the care recipient, changes in the care recipient's capabilities, and potential disease progression, for example, are pieces of data caregivers commonly observe in an informal, rarely structured way.
Type: Grant
Filed: May 11, 2017
Date of Patent: September 14, 2021
Assignee: Lifeline Systems Company
Inventors: Portia E. Singh, Mladen Milosevic
-
Patent number: 11112934
Abstract: Methods, systems, computer-readable media, and apparatuses for generating an Augmented Reality (AR) object are presented. The method may include capturing an image of one or more target objects, wherein the one or more target objects are positioned on a pre-defined background. The method may also include segmenting the image into one or more areas corresponding to the one or more target objects and one or more areas corresponding to the pre-defined background. The method may additionally include converting the one or more areas corresponding to the one or more target objects to a digital image. The method may further include generating one or more AR objects corresponding to the one or more target objects, based at least in part on the digital image.
Type: Grant
Filed: November 18, 2019
Date of Patent: September 7, 2021
Assignee: QUALCOMM Incorporated
Inventors: Raphael Grasset, Hartmut Seichter
-
Patent number: 11113859
Abstract: Disclosed herein includes a system, a method, and a non-transitory computer readable medium for rendering a three-dimensional (3D) model of an avatar according to an audio stream including a vocal output of a person and image data capturing a face of the person. In one aspect, phonemes of the vocal output are predicted according to the audio stream, and the predicted phonemes of the vocal output are translated into visemes. In one aspect, a plurality of blendshapes and corresponding weights are determined, according to the corresponding image data of the face, to form the 3D model of the avatar of the person. The visemes may be combined with the 3D model of the avatar to form a 3D representation of the avatar, by synchronizing the visemes with the 3D model of the avatar in time.
Type: Grant
Filed: July 10, 2019
Date of Patent: September 7, 2021
Assignee: Facebook Technologies, LLC
Inventors: Tong Xiao, Sidi Fu, Mengqian Liu, Peihong Guo, Shu Liang, Evgeny Zatepyakin
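The phoneme-to-viseme translation step is a many-to-one mapping over time. The tiny table below is a hypothetical subset (real systems use a full phoneme inventory, e.g. ARPAbet-style symbols, mapped to a dozen or more mouth shapes), and the collapsing of repeated visemes is an assumed simplification:

```python
# Hypothetical many-to-one phoneme-to-viseme table. Lips-together sounds
# (p/b/m) share one mouth shape, lip-teeth sounds (f/v) share another, etc.
PHONEME_TO_VISEME = {
    "p": "bmp", "b": "bmp", "m": "bmp",
    "f": "fv", "v": "fv",
    "aa": "open", "ae": "open",
    "iy": "wide", "uw": "round",
}

def phonemes_to_visemes(timed_phonemes):
    """timed_phonemes: list of (phoneme, start_time) pairs predicted from
    the audio stream. Returns (viseme, start_time) pairs, collapsing
    consecutive identical visemes so the avatar's mouth shape only
    changes when a different shape is actually needed."""
    out = []
    for ph, t in timed_phonemes:
        vis = PHONEME_TO_VISEME.get(ph, "rest")   # unknown -> neutral mouth
        if not out or out[-1][0] != vis:
            out.append((vis, t))
    return out
```

The timestamps carried through here are what allow the viseme track to be synchronized with the blendshape-based 3D model in time, as the abstract describes.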
-
Patent number: 11113585
Abstract: Aspects of the disclosure generally relate to computing devices and/or systems, and may be generally directed to devices, systems, methods, and/or applications for learning operation of an application or an object of an application in various visual surroundings, storing this knowledge in a knowledgebase (e.g., a neural network, graph, sequences, etc.), and enabling autonomous operation of the application or the object of the application.
Type: Grant
Filed: January 1, 2019
Date of Patent: September 7, 2021
Inventor: Jasmin Cosic
-
Patent number: 11107241
Abstract: A non-transitory computer readable medium embodies instructions that cause one or more processors to perform a method. The method includes selecting a 3D model corresponding to an object. The method further includes generating domain-adapted images of the 3D model, the domain-adapted images representing the 3D model at corresponding poses. The method further includes acquiring 2D projections of 3D points on a 3D bounding box defined around the 3D model at the corresponding poses. The method further includes training an algorithm model to learn correspondences between the generated images and the corresponding 2D projections. The method further includes storing, in a memory, parameters representing the algorithm model.
Type: Grant
Filed: December 11, 2019
Date of Patent: August 31, 2021
Assignee: SEIKO EPSON CORPORATION
Inventors: Dibyendu Mukherjee, Bowen Chen, Juhan Bae
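The "2D projections of 3D points on a 3D bounding box" step amounts to transforming the box's eight corners by the pose and projecting them through a camera model. This sketch assumes a rotation about a single axis and an ideal pinhole camera; it covers only the projection step, not the training pipeline:

```python
import itertools
import math

def project_bbox_corners(half_extents, rotation_z, translation, focal, center):
    """Project the 8 corners of a 3D bounding box into the image.
    half_extents: (hx, hy, hz) half-sizes of the box
    rotation_z:   pose rotation about the Z axis (radians), an
                  illustrative simplification of a full 3D rotation
    translation:  (tx, ty, tz) pose translation in camera coordinates
    focal, center: pinhole intrinsics (focal length, principal point)"""
    hx, hy, hz = half_extents
    cos_t, sin_t = math.cos(rotation_z), math.sin(rotation_z)
    points = []
    for sx, sy, sz in itertools.product((-1, 1), repeat=3):
        x, y, z = sx * hx, sy * hy, sz * hz
        # rotate about Z, then translate into camera coordinates
        xr, yr = cos_t * x - sin_t * y, sin_t * x + cos_t * y
        xc, yc, zc = xr + translation[0], yr + translation[1], z + translation[2]
        # pinhole projection onto the image plane
        points.append((focal * xc / zc + center[0],
                       focal * yc / zc + center[1]))
    return points
```

During training, these eight 2D points serve as the regression targets paired with each rendered (domain-adapted) image of the model at that pose.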
-
Patent number: 11094101
Abstract: A system provides the ability to import large engineering 3D models from a primary 3D rendering software into a secondary 3D rendering software that does not have the tools or the resources to render the large 3D model on its own. The system uses a plugin to combine 3D data from the two software sources, and then return the combined 3D data to the secondary 3D rendering software. Components of the system can be remote or cloud based, and the system facilitates video streaming of 3D rendered models that can be manipulated on any computer capable of supporting a video stream.
Type: Grant
Filed: September 10, 2019
Date of Patent: August 17, 2021
Assignee: AVEVA Software, LLC
Inventors: David Matthew Stevenson, Chase Laurendine, Paul Antony Burton
-
Patent number: 11094216
Abstract: A musical system may include one or more storage media, at least one processor, one or more sensors corresponding to keys of a musical instrument, and a display. The storage media may be configured to store a set of instructions for modifying a music score for a user based on a performance level of the user. The at least one processor may be configured to communicate with the one or more storage media. When executing the set of instructions, the processor is directed to determine a user performance level of a user and provide a modified music score for the user based on the user performance level. The sensors may be configured to sense a motion of at least one key of the keys and generate a key signal accordingly. The display may be configured to display the modified music score.
Type: Grant
Filed: July 16, 2019
Date of Patent: August 17, 2021
Assignee: SUNLAND INFORMATION TECHNOLOGY CO., LTD.
Inventor: Bin Yan
-
Patent number: 11093195
Abstract: Embodiments of the present disclosure provide a method, device and computer program product for updating a user interface. According to example implementations of the present disclosure, an element sequence including a plurality of elements in the user interface is obtained, each element in the element sequence being associated with each of a plurality of actions being performed by a user in the user interface, the plurality of elements in the element sequence being sorted in an order of the plurality of actions being performed by the user; a natural language processing model is trained using the element sequence, the natural language processing model being used for modeling and feature-learning of a natural language; and the user interface is enabled to be updated based on the trained natural language processing model. Therefore, software developers can have deeper insight into users' needs and develop a more user-friendly user interface.
Type: Grant
Filed: June 14, 2019
Date of Patent: August 17, 2021
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Felix Peng, Zhongyi Zhou
-
Patent number: 11093815
Abstract: An embodiment is an avatar or avatar environment to visualize data within an athletic performance system or service and/or a social network system or service, for example as part of the Internet. The avatar may further evolve or alter its appearance, animation, or other visual or audio characteristics in response to the data or other input. In particular, the avatar of an embodiment may respond to and provide visualization of athletic or sport performance data. According to one or more aspects, an avatar may be placed on other network sites and updated based on athletic performance data. The avatar may be awarded for goals achieved by a user. The awards or gifts may further include non-avatar related items such as apparel, gift cards and the like.
Type: Grant
Filed: September 5, 2018
Date of Patent: August 17, 2021
Assignee: NIKE, Inc.
Inventors: Jason Nims, Roberto Tagliabue, Danielle Quatrochi
-
Patent number: 11094099
Abstract: Systems and methods are described for applying a unifying visual effect, such as posterization, to all or most of the visual elements in a film. In one implementation, a posterization standard includes a line work standard, a color palette, a plurality of color blocks characterized by one or more hard edges, and a gradient transition associated with each of the hard edges. The visual elements, including live actors and set pieces, are prepared in accordance with the posterization standard. The actors are filmed performing live among the set pieces. The live-action segments can be composited with digital elements. The result is a combination of both real and stylized elements, captured simultaneously, to produce an enhanced hybrid of live action and animation.
Type: Grant
Filed: November 8, 2019
Date of Patent: August 17, 2021
Assignee: Trioscope Studios, LLC
Inventors: Grzegorz Jonkajtys, L. Chad Crowley
-
Patent number: 11094100
Abstract: An online system presents a content item to users and receives selections of reaction icons from the users. The online system generates a background animation with the selected reaction icons and a foreground animation to be layered on top of the background animation. The online system sends the background and foreground animations to a client device to be cached. Further, the online system presents the content item to a viewing user associated with the client device and receives a selection of a reaction icon from the viewing user. The online system selects a subset of the users based on the viewing user's affinity to the users, retrieves images of the selected users, and sends the images to the client device. The client device customizes the background and foreground animations based on the images and the viewing user's reaction icon to generate a compound animation for display to the viewing user.
Type: Grant
Filed: April 3, 2020
Date of Patent: August 17, 2021
Assignee: Facebook, Inc.
Inventors: Robert Benson Walton, Zachary W. Stubenvoll, Julia Harter Toffey, Skyler Bock, Silvia Chyou, Jordan Richard Honnette, Wei-Sheng Su, Jerod Wanner, Stefan Casey Parker, Renyu Liu, Rajat Bhardwaj
-
Patent number: 11087520
Abstract: An avatar facial expression generating system and a method of avatar facial expression generation are provided. In the method, user data relating to the sensing result of a user is obtained. First and second emotional configurations are determined, which are maintained during a first and a second duration, respectively. A transition emotional configuration is determined based on the first emotional configuration and the second emotional configuration, in which the transition emotional configuration is maintained during a third duration. Facial expressions of an avatar are generated based on the first emotional configuration, the transition emotional configuration, and the second emotional configuration, respectively. The third duration exists between the first duration and the second duration. Accordingly, a natural facial expression is presented on the avatar during the transition between emotions.
Type: Grant
Filed: October 17, 2019
Date of Patent: August 10, 2021
Assignee: XRSPACE CO., LTD.
Inventors: Wei-Zhe Hong, Ming-Yang Kung, Ting-Chieh Lin, Feng-Seng Chu
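The transition configuration held during the third duration can be pictured as blending between the two emotional configurations. Representing a configuration as a dict of expression weights and using linear interpolation are both assumptions for illustration, not the patented scheme:

```python
def transition_configuration(first, second, steps=3):
    """Given two emotional configurations as dicts of expression weights
    (e.g. {"smile": 1.0}), produce the intermediate configurations held
    during the transition duration by linear interpolation. Missing keys
    are treated as weight 0, so expressions fade in and out smoothly."""
    keys = set(first) | set(second)
    frames = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)              # strictly between 0 and 1
        frames.append({k: (1 - t) * first.get(k, 0.0) + t * second.get(k, 0.0)
                       for k in keys})
    return frames
```

Holding intermediate frames like these between the first and second durations is what avoids the abrupt snap from one expression to the next that the abstract is addressing.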
-
Patent number: 11087020
Abstract: Examples described herein include systems and methods for providing privacy information to a user of a user device. An example method can include detecting, at a management server, access of the private data by an entity other than the user, such as an administrator who is authorized to access the management server. The method further includes generating an event reflecting the access of the private data. The generated event can be stored as part of an event log in a database. The method further includes providing the event to the user device for display to the user. The event displayed on the user device can include information such as an identity of the accessing entity, a description of the private data that was accessed, and when the access occurred. The user can select a displayed event at the user device and request further information on the event from an administrator.
Type: Grant
Filed: November 28, 2018
Date of Patent: August 10, 2021
Assignee: VMWare, Inc.
Inventors: Ramana Malladi, Achyut Bukkapattanam, Chris Wigley, Nidhi Aggarwal, Sai Kiran Vudutala
-
Patent number: 11086586Abstract: A method for the generation and selective display of musical information on one or more devices capable of displaying musical information can include generating a plurality of visual blocks, each block among the plurality having a first dimension and a second dimension corresponding to musical information visible with each block. The method can include selectively displaying, via a first GUI and/or a second GUI, particular blocks among the plurality of visual blocks. The musical information contained in a quantity of the respective subsets of the particular blocks displayed on the second GUI can include at least a portion of the respective subsets of the particular blocks displayed on the first GUI.Type: GrantFiled: September 15, 2020Date of Patent: August 10, 2021Assignee: Auryn, LLCInventor: Jeffrey R. Bernett
-
Patent number: 11087514Abstract: Techniques for automatically synchronizing poses of objects in an image or between multiple images. An automatic pose synchronization functionality is provided by an image editor. The image editor identifies or enables a user to select objects (e.g., people) whose poses are to be synchronized and the image editor then performs processing to automatically synchronize the poses of the identified objects. For two objects whose poses are to be synchronized, a reference object is identified as one whose associated pose is to be used as a reference pose. A target object is identified as one whose associated pose is to be modified to match the reference pose of the reference object. An output image is generated by the image editor in which the position of a part of the target object is modified such that the pose associated with the target object matches the reference pose of the reference object.Type: GrantFiled: June 11, 2019Date of Patent: August 10, 2021Assignee: Adobe Inc.Inventors: Sankalp Shukla, Sourabh Gupta, Angad Kumar Gupta
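The reference/target relationship described above can be sketched with a toy pose representation: a pose as a dictionary of joint angles, where synchronizing copies the reference object's angles onto the target object for the selected parts. The joint names and angle representation are invented for the example and are not taken from the patent.

```python
def synchronize_pose(target_pose, reference_pose, parts):
    """Return a copy of `target_pose` with the listed parts' joint
    angles replaced by the corresponding angles of `reference_pose`."""
    synced = dict(target_pose)
    for part in parts:
        if part in reference_pose:
            synced[part] = reference_pose[part]
    return synced

# The reference person's arm poses are imposed on the target person;
# parts not listed (here, the head) keep the target's original pose.
reference = {"left_arm": 90.0, "right_arm": 45.0, "head": 0.0}
target = {"left_arm": 10.0, "right_arm": 20.0, "head": 5.0}
synced = synchronize_pose(target, reference, ["left_arm", "right_arm"])
```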
-
Patent number: 11087517Abstract: In particular embodiments, a 2D representation of an object may be provided. A first method may comprise: receiving sketch input identifying a target position for a specified portion of the object; computing a deformation for the object within the context of a character rig specification for the object; and displaying an updated version of the object. A second method may comprise detecting sketch input; classifying the sketch input, based on the 2D representation, as an instantiation of the object; instantiating the object using a 3D model of the object; and displaying a 3D visual representation of the object.Type: GrantFiled: June 2, 2016Date of Patent: August 10, 2021Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)Inventors: Robert Walker Sumner, Maurizio Nitti, Stelian Coros, Bernhard Thomaszewski, Fabian Andreas Hahn, Markus Gross, Frederik Rudolf Mutzel
-
Patent number: 11082380Abstract: Systems and methods presented herein provide an in-application messaging system that includes an audio generation system configured to encode at least one audio watermark within an audio signal, and to broadcast the audio signal into a physical environment. The in-application messaging system also includes a portable electronic device configured to receive the audio signal from the physical environment, to identify the at least one audio watermark encoded within the audio signal, and to display in-application messaging information via an application running on the portable electronic device. The in-application messaging information is based at least in part on the at least one audio watermark.Type: GrantFiled: August 13, 2019Date of Patent: August 3, 2021Assignee: Universal City Studios LLCInventors: Nicholas Anthony Linguanti, Kimberly Anne Humphreys, Humberto Augusto Kam
-
Patent number: 11080918Abstract: There is provided a computer implemented method for predicting garment or accessory attributes using deep learning techniques, comprising the steps of: (i) receiving and storing one or more digital image datasets including images of garments or accessories; (ii) training a deep model for garment or accessory attribute identification, using the stored one or more digital image datasets, by configuring a deep neural network model to predict (a) multiple-class discrete attributes; (b) binary discrete attributes, and (c) continuous attributes, (iii) receiving one or more digital images of a garment or an accessory, and (iv) extracting attributes of the garment or the accessory from the one or more received digital images using the trained deep model for garment or accessory attribute identification. A related system is also provided.Type: GrantFiled: May 25, 2017Date of Patent: August 3, 2021Assignee: METAIL LIMITEDInventors: Yu Chen, Sukrit Shankar, Jim Downing, Joe Townsend, Duncan Robertson, Tom Adeyoola
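The three output families named in the abstract (multi-class discrete, binary discrete, continuous attributes) map naturally onto a shared feature vector feeding three prediction heads. The toy weights, attribute names, and plain-Python linear heads below are illustrative assumptions standing in for a trained deep network.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_attributes(features, w_multi, w_binary, w_cont):
    """Map one shared feature vector to the three attribute families."""
    dot = lambda w: sum(wi * fi for wi, fi in zip(w, features))
    return {
        "multi_class": softmax([dot(w) for w in w_multi]),  # e.g. sleeve type
        "binary": sigmoid(dot(w_binary)),                    # e.g. has collar
        "continuous": dot(w_cont),                           # e.g. hem length
    }

out = predict_attributes([1.0, 0.5],
                         w_multi=[[1.0, 0.0], [0.0, 1.0]],
                         w_binary=[2.0, -1.0],
                         w_cont=[0.5, 0.5])
```

The design choice this mirrors is a single backbone with one loss per head, so the three attribute types can be trained jointly from the same images.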
-
Patent number: 11080377Abstract: The invention provides a system and method for virtual world biometric analytics through the use of a multimodal biometric analytic wallet. The system includes a virtual biometric wallet comprising a pervasive repository for storing biometric data, the pervasive repository including at least one of a biometric layer, a genomic layer, a health layer, a privacy layer, and a processing layer. The virtual biometric wallet further comprises an analytic environment configured to combine the biometric data from at least one of the biometric layer, the genomic layer, the health layer, the privacy layer, and the processing layer. The virtual biometric wallet also comprises a biometric analytic interface configured to communicate the biometric data to one or more devices within a virtual universe.Type: GrantFiled: November 30, 2017Date of Patent: August 3, 2021Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATIONInventors: Aaron K. Baughman, Christopher J. Dawson, Barry M. Graham, David J. Kamalsky
-
Patent number: 11076469Abstract: The present invention is directed to a user-operated spotlight system and method for lighting a performer on a stage or performance space. The user-operated spotlight system comprises a screen which displays an image of the stage and a cursor, a screen cursor positioner adapted to move the cursor on the screen, a processor connected to the screen, and a plurality of controllable spotlights which are connected to the processor and which can be moved by a user moving the cursor on the screen. The advantage of providing such a user-operated spotlight system is that a single user can operate a plurality of spotlights.Type: GrantFiled: February 22, 2020Date of Patent: July 27, 2021Inventor: Liam Feeney
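One cursor driving several lights can be sketched as a coordinate mapping: the cursor position is projected onto stage coordinates, and each fixed light computes its own aim angle toward that point. The 2-D plan-view geometry and the specific light positions are simplifying assumptions for illustration.

```python
import math

def aim_spotlights(cursor_xy, screen_size, stage_size, lights):
    """Convert an on-screen cursor position into a stage target point
    and return a pan angle (degrees) for each fixed light position."""
    sx, sy = screen_size
    gx, gy = stage_size
    tx = cursor_xy[0] / sx * gx  # cursor -> stage coordinates
    ty = cursor_xy[1] / sy * gy
    return [math.degrees(math.atan2(ty - ly, tx - lx)) for lx, ly in lights]

# A cursor at screen center targets stage point (5, 4); two lights in
# opposite corners each receive their own pan angle toward it.
angles = aim_spotlights((400, 300), (800, 600), (10.0, 8.0),
                        lights=[(0.0, 0.0), (10.0, 0.0)])
```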
-
Patent number: 11074739Abstract: Methods, devices, media, and other embodiments are described for managing and configuring a pseudorandom animation system and associated computer animation models. One embodiment involves generating image modification data with a computer animation model configured to modify frames of a video image to insert and animate the computer animation model within the frames of the video image, where the computer animation model of the image modification data comprises one or more control points. Motion patterns and speed harmonics are automatically associated with the control points, and motion states are generated based on the associated motions and harmonics. A probability value is then assigned to each motion state. The motion state probabilities can then be used when generating a pseudorandom animation.Type: GrantFiled: September 30, 2019Date of Patent: July 27, 2021Assignee: Snap Inc.Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
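The probability-weighted motion states can be sketched directly: each state pairs a motion pattern with a speed harmonic and carries an assigned probability, and the pseudorandom animation draws the next state by weighted choice. The state names and probability values below are invented for the example.

```python
import random

# Hypothetical motion states for one control point: (pattern, harmonic)
# pairs, each assigned a probability; the probabilities sum to 1.
motion_states = [
    {"pattern": "sway", "harmonic": 1, "p": 0.5},
    {"pattern": "sway", "harmonic": 2, "p": 0.3},
    {"pattern": "bounce", "harmonic": 1, "p": 0.2},
]

def pick_motion_state(states, rng=random):
    """Draw one motion state according to its assigned probability."""
    r = rng.random()
    cumulative = 0.0
    for state in states:
        cumulative += state["p"]
        if r < cumulative:
            return state
    return states[-1]  # guard against floating-point shortfall

# A seeded generator makes the pseudorandom sequence reproducible.
state = pick_motion_state(motion_states, random.Random(0))
```

Assigning probabilities per state, rather than animating a fixed loop, is what lets the animation vary while still favoring the more likely motions.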
-
Patent number: 11073908Abstract: A method for mapping an input device to a virtual object in virtual space displayed on a display device is disclosed. The method may include determining, via an eye tracking device, a gaze direction of a user. The method may also include, based at least in part on the gaze direction being directed to a virtual object in virtual space displayed on a display device, modifying an action to be taken by one or more processors in response to receiving a first input from an input device. The method may further include, thereafter, in response to receiving the first input from the input device, causing the action to occur, wherein the action correlates the first input to an interaction with the virtual object.Type: GrantFiled: February 20, 2020Date of Patent: July 27, 2021Assignee: Tobii ABInventors: Simon Gustafsson, Alexey Bezugly, Anders Kingbäck, Anders Clausen
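The gaze-modified mapping can be sketched as a lookup with an override: when the gaze rests on a virtual object that defines its own bindings, those bindings take precedence over the defaults for the same input. The object names, buttons, and actions here are assumptions for illustration only.

```python
def resolve_action(gaze_target, button, object_bindings, default_bindings):
    """Return the action for `button`: bindings of the gazed-at virtual
    object override the default bindings; otherwise fall through."""
    if gaze_target is not None and gaze_target in object_bindings:
        overrides = object_bindings[gaze_target]
        return overrides.get(button, default_bindings.get(button))
    return default_bindings.get(button)

object_bindings = {"door": {"trigger": "open"}}
default_bindings = {"trigger": "fire"}

# Looking at the door remaps the trigger; looking elsewhere keeps it.
action_on_door = resolve_action("door", "trigger",
                                object_bindings, default_bindings)
action_elsewhere = resolve_action(None, "trigger",
                                  object_bindings, default_bindings)
```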
-
Patent number: 11065540Abstract: According to one aspect of the present invention, the game execution device includes a receiver, an event controller, a selector, and a generator. The receiver receives a request for generating an event, based on a request for executing an intervention in the game that an auxiliary server receives from a terminal of the audience of a real-time play video of the game. The event controller controls events that occur in the game, at least in response to requests for generating events. The selector selects, as events subject to notification, up to a predetermined number of the first events that occur in the game during the first period. The generator generates notification data to inform a game player of the occurrence of the events subject to notification.Type: GrantFiled: February 26, 2019Date of Patent: July 20, 2021Assignee: DWANGO, Co., Ltd.Inventor: Yuji Chino
-
Patent number: 11069114Abstract: An in-vehicle avatar processing apparatus and a method of controlling the same generate and output an avatar in consideration of a driving situation as well as an external appearance of a vehicle. The method may include generating first avatar data including at least a vehicle external image by a transmission side device included in a first vehicle, generating second avatar data based on information pertaining to an inside or an outside of a vehicle, by the transmission side device, generating avatar animation by combining the first avatar data and the second avatar data, and outputting the generated avatar animation through an output unit of a reception side device.Type: GrantFiled: December 2, 2019Date of Patent: July 20, 2021Assignees: Hyundai Motor Company, Kia CorporationInventor: Jeong Seok Han
-
Patent number: 11069113Abstract: A method for creating a computer simulation of a crowd by animating a plurality of virtual actors simultaneously with each virtual actor set to a different setting of parameters and therefore expressing a unique individual body posture. An apparatus for creating a computer simulation of an actor by depicting the actor based on a plurality of postural parameters of the actor. A method for creating a computer simulation of an actor by having the actor move subject to a first idiomatic behavior for a first set of a plurality of successive key frames, and transitioning the actor to move to a second idiomatic behavior for a second set of a plurality of successive key frames.Type: GrantFiled: May 17, 2018Date of Patent: July 20, 2021Inventor: Kenneth Perlin
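The per-actor parameterization can be sketched by sampling each actor's postural parameters independently from per-parameter ranges, so every actor in the crowd expresses its own posture. The parameter names and ranges below are invented for the example; a real system would drive a skeleton from these values.

```python
import random

def generate_crowd(n, parameter_ranges, seed=42):
    """Give each of `n` actors its own setting of postural parameters,
    sampled uniformly from per-parameter (low, high) ranges."""
    rng = random.Random(seed)  # seeded for reproducible crowds
    return [{name: rng.uniform(lo, hi)
             for name, (lo, hi) in parameter_ranges.items()}
            for _ in range(n)]

crowd = generate_crowd(5, {"slouch": (0.0, 1.0),
                           "arm_swing": (-0.5, 0.5)})
```

Because the parameters vary continuously per actor, no two crowd members share exactly the same body posture, which is the effect the abstract describes.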
-
Patent number: 11068156Abstract: Embodiments of the present application relate to a method, device, and system for processing data. The method includes displaying, by one or more processors associated with a terminal, a system control, the system control comprising one or more interface elements associated with a current context of the terminal, wherein a screen of the terminal displays the system control and at least part of an interface for a current application; receiving, by the one or more processors associated with the terminal, an input to the system control; determining an operating instruction based at least in part on the input to the system control; and performing, by the one or more processors associated with the terminal, one or more operations based at least in part on the operating instruction.Type: GrantFiled: June 5, 2018Date of Patent: July 20, 2021Assignee: BANMA ZHIXING NETWORK (HONGKONG) CO., LIMITEDInventors: Ying Xian, Huan Zeng, Kezheng Liao
-
Patent number: 11069135Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes receiving a plate with an image of the subject's facial expression, a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters, a model of a camera rig used to capture the plate, and a virtual lighting model that estimates lighting conditions when the image on the plate was captured.Type: GrantFiled: November 12, 2019Date of Patent: July 20, 2021Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Stéphane Grabli, Michael Bao, Per Karefelt, Adam Ferrall-Nunge, Jeffery Yost, Ronald Fedkiw, Cary Phillips, Pablo Helman, Leandro Estebecorena
-
Patent number: 11062264Abstract: A work support system suitable for reducing communication and processing loads while improving certainty and versatility is provided. An AI manual server 100 generates a rule describing a determination condition of a work situation based on a manual, and transmits the rule to a smart device 300. A work situation determination apparatus 220 comprises a storage section that stores work situation information indicating a work situation in association with equipment signal information. The work situation determination apparatus 220 inputs an equipment signal from a PLC of object equipment 210, reads out the work situation information corresponding to the input equipment signal from the storage section, and transmits the readout work situation information to the smart device 300. The smart device 300 receives the rule, and stores it in the storage section 58.Type: GrantFiled: March 8, 2019Date of Patent: July 13, 2021Assignee: GRACE TECHNOLOGY, INC.Inventor: Yukiharu Matsumura
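The read-out step of the determination apparatus can be sketched as a lookup from equipment signals to stored work-situation information. The signal names and situations below are invented for the example; in the patent these come from a PLC and a manual-derived rule, respectively.

```python
# Hypothetical stored association of equipment signals with work
# situations, standing in for the apparatus's storage section.
WORK_SITUATIONS = {
    "spindle_on": "machining in progress",
    "door_open": "operator loading workpiece",
    "alarm_021": "tool-change fault",
}

def determine_work_situation(equipment_signal, table=WORK_SITUATIONS):
    """Read out the work-situation information associated with an
    incoming equipment signal; unknown signals get a fallback value."""
    return table.get(equipment_signal, "unknown situation")

situation = determine_work_situation("door_open")
```

Keeping the signal-to-situation table on the apparatus, and shipping only compact rules to the smart device, is consistent with the abstract's goal of reducing communication and processing loads.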