Abstract: A system and a method for analyzing files using visual cues in the presentation of the file are provided. These visual cues may be extracted using a convolutional neural network, classified, and used in conjunction with file metadata to determine whether a provided document is likely to be malicious. This methodology may be extended to detect a variety of social engineering-related attacks including phishing sites or malicious emails. A method for analyzing a received file to determine whether the received file comprises malicious code begins with generating an image that would be displayed if the received file were opened by the native software program. Then the image is analyzed, and output is generated. Metadata is also extracted from the received file. Then, a maliciousness score is generated based on the output, the metadata, and a reference dataset.
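The scoring step described in the abstract above could be sketched as follows. The cue names, metadata fields, and weights are all illustrative assumptions, not details taken from the patent; the real system would use a trained model rather than a fixed weighted sum.

```python
def maliciousness_score(visual_output, metadata, reference):
    """Return a score in [0, 1]; higher means more likely malicious."""
    # Visual component: fraction of detected cues that appear in the
    # reference dataset's known-bad cue list.
    bad = sum(1 for cue in visual_output if cue in reference["bad_cues"])
    visual_score = bad / max(len(visual_output), 1)

    # Metadata component: flag a macro-bearing file posing as a plain document.
    meta_score = 1.0 if metadata.get("has_macro") else 0.0

    # Weighted combination; the weights are illustrative assumptions.
    return 0.7 * visual_score + 0.3 * meta_score

reference = {"bad_cues": {"fake_invoice_logo", "blurred_text_lure"}}
score = maliciousness_score(
    ["fake_invoice_logo", "company_header"],    # CNN-classified visual cues
    {"has_macro": True, "extension": ".docx"},  # extracted file metadata
    reference,
)
```

In practice the visual and metadata components would each come from learned classifiers; the fixed weights here only stand in for that combination step.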
Abstract: This application discloses an information interaction method and apparatus, a storage medium, and an electronic apparatus. The method includes obtaining a first target operation instruction in a VR scenario; selecting and displaying a first virtual operation panel corresponding to the first target operation instruction from a plurality of virtual operation panels in the VR scenario, wherein the plurality of virtual operation panels are displayed mutually independently, and are respectively configured to display different interactive objects. The method may further include obtaining an interactive operation instruction generated by an interactive operation performed by an interactive device on a target interactive object displayed in the first virtual operation panel, the interactive device being associated with the VR scenario; and, in response, executing a target event corresponding to the target interactive object in the VR scenario.
Type:
Grant
Filed:
December 9, 2021
Date of Patent:
November 7, 2023
Assignee:
Tencent Technology (Shenzhen) Company Limited
Abstract: This application provides a touch control method and an apparatus, and relates to the field of communications technologies. The method includes: (S601) obtaining, by a terminal, a first touch operation entered by a user on a touchscreen; (S602, S603, and S604) and mapping, by the terminal when the first touch operation is performed on a first preset area in a target interface, the first touch operation to a second touch operation, so that a target application responds to the second touch operation, where the target interface is any interface that is presented by the target application and that covers the first preset area, and the target application is running in the foreground.
Abstract: Methods, systems, and computer-readable media for determining the efficacy of an advertisement are described herein. A computing device may receive an advertisement from an advertisement server. The computing device may determine advertisement information associated with the presentation of the advertisement. The advertisement information may be sent to the advertisement server.
Abstract: Techniques for training an artificial intelligence (AI)/machine learning (ML) model to recognize applications, screens, and UI elements using computer vision (CV) and to recognize user interactions with the applications, screens, and UI elements. Optical character recognition (OCR) may also be used to assist in training the AI/ML model. Training of the AI/ML model may be performed without other system inputs such as system-level information (e.g., key presses, mouse clicks, locations, operating system operations, etc.) or application-level information (e.g., information from an application programming interface (API) from a software application executing on a computing system), or the training of the AI/ML model may be supplemented by other information, such as browser history, heat maps, file information, currently running applications and locations, system level and/or application-level information, etc.
Abstract: A method for displaying a covering on a panoramic map image includes: acquiring covering information to be displayed, in which the covering information includes a plurality of coverings; acquiring a set of vertices of a maximum circumscribed polyhedron for each covering in a three-dimensional coordinate system of a panoramic scene; acquiring a mapped set of vertices of the maximum circumscribed polyhedron of each covering by mapping the set of vertices to a sphere; determining an overlapping result among the plurality of coverings based on the mapped set of vertices of each covering; and determining a target covering from the plurality of coverings based on the overlapping result, and displaying the target covering on the panoramic map image.
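The sphere-mapping and overlap test described above could be approximated as follows. Mapping vertices to the unit sphere by normalization and testing overlap via bounding spherical caps are simplifying assumptions on my part; the patent's actual mapping and overlap determination may differ.

```python
import math

def to_sphere(vertices):
    """Map 3-D vertices onto the unit sphere by normalising each vector."""
    mapped = []
    for x, y, z in vertices:
        n = math.sqrt(x * x + y * y + z * z) or 1.0
        mapped.append((x / n, y / n, z / n))
    return mapped

def bounding_cap(points):
    """Spherical cap (centre direction, angular radius) enclosing the points."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    cz = sum(p[2] for p in points) / len(points)
    n = math.sqrt(cx * cx + cy * cy + cz * cz) or 1.0
    centre = (cx / n, cy / n, cz / n)
    radius = max(
        math.acos(max(-1.0, min(1.0, sum(a * b for a, b in zip(centre, p)))))
        for p in points
    )
    return centre, radius

def caps_overlap(cap_a, cap_b):
    """Two caps overlap when the angle between centres is below the radius sum."""
    (ca, ra), (cb, rb) = cap_a, cap_b
    angle = math.acos(max(-1.0, min(1.0, sum(a * b for a, b in zip(ca, cb)))))
    return angle < ra + rb
```

A covering whose cap overlaps a higher-priority covering's cap would then be suppressed, leaving the target covering to be displayed.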
Abstract: A method for controlling vehicle operations based on user orientation and user interaction data is provided. The method includes detecting, using a sensor operating in conjunction with the computing device of the vehicle, an orientation of a part of a user relative to a location on a display that is positioned in an interior of the vehicle, detecting, using an additional sensor, an interaction between the user and a portion of the display positioned in the interior of the vehicle, determining, using the computing device, whether a distance between the location and the portion of the display satisfies a threshold, and controlling, by the computing device, an operation associated with the vehicle responsive to determining that the distance between the location and the portion of the display satisfies the threshold.
Type:
Grant
Filed:
September 9, 2021
Date of Patent:
September 26, 2023
Assignee:
TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC.
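The distance-threshold decision in the Toyota abstract above could be sketched like this. The pixel coordinates, threshold value, and return labels are illustrative assumptions.

```python
import math

def satisfies_threshold(orientation_xy, touch_xy, threshold_px=50.0):
    """True when the user's detected orientation location and the touched
    portion of the display are within the threshold distance."""
    dx = orientation_xy[0] - touch_xy[0]
    dy = orientation_xy[1] - touch_xy[1]
    return math.hypot(dx, dy) <= threshold_px

def maybe_control(orientation_xy, touch_xy):
    """Trigger the vehicle operation only when the threshold is satisfied."""
    return "execute_operation" if satisfies_threshold(orientation_xy, touch_xy) else "ignore"
```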
Abstract: A device is disclosed. The device includes a display, a memory configured to store an artificial intelligence model trained to obtain output layout information of additional information provided by the device, and a processor connected to the display and the memory. The processor is configured to control the device and to obtain, based on an output layout of main information provided by the device being selected, output layout information of the additional information by inputting information related to the output layout of the main information to the artificial intelligence model. The processor is also configured to control the display to output a user interface (UI) screen including time information and additional information based on the output layout of the main information and the output layout information of the additional information.
Abstract: Systems and methods for facilitating user interaction with a simulated object that is associated with a physical location in the real-world environment are herein disclosed. In one aspect, embodiments of the present disclosure include a method, which may be implemented on a system, of identifying the simulated object that is available for access based on location data. The location data can include a location of a device in a time period, the device for use by a user to access the simulated object. One embodiment includes verifying an identity of the user and, in response to determining that the user is authorized to access the simulated object, providing the simulated object for presentation to the user via the device.
Abstract: A computer-implemented method for generating a color of an object displayed on a GUI. The method includes displaying on a graphical user interface a set of icons, each icon of the set being associated with a color, detecting a first user interaction on a first icon of the set, detecting a second user interaction that comprises at least a slide, modifying a value of a parameter of a first color associated with the first icon, the modification of the value being performed with the second user interaction, and computing a first new color that is the first color with the modified value of a parameter.
Type:
Grant
Filed:
May 13, 2022
Date of Patent:
August 22, 2023
Assignee:
DASSAULT SYSTEMES
Inventors:
Christophe Delfino, Amal Plaudet-Hammani
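The slide-driven color modification in the Dassault Systèmes abstract above could be sketched as follows. The choice of lightness as the modified parameter and the mapping from slide distance to delta are assumptions for illustration.

```python
import colorsys

def slide_modify(color_rgb, slide_delta, parameter="lightness"):
    """Return a new colour: the first icon's colour with one parameter
    shifted by the slide distance (parameter choice is an assumption)."""
    r, g, b = (c / 255.0 for c in color_rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    if parameter == "lightness":
        l = min(1.0, max(0.0, l + slide_delta))
    else:  # treat any other choice as saturation
        s = min(1.0, max(0.0, s + slide_delta))
    r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)
    return tuple(round(c * 255) for c in (r2, g2, b2))
```

Sliding right (positive delta) would lighten the first color toward white; sliding left would darken it toward black.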
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating customized recommendations for environmentally-conscious cloud computing frameworks for replacing computing resources of existing datacenters. One of the methods involves receiving, through a user interface presented on a display of a computing device, data regarding a user's existing datacenter deployment and the user's preferences for the new cloud computing framework, generating one or more recommendations for environmentally-conscious cloud computing frameworks based on the received data, and presenting such recommendations through the user interface for the user's review and consideration.
Type:
Grant
Filed:
July 28, 2021
Date of Patent:
August 22, 2023
Assignee:
Accenture Global Solutions Limited
Inventors:
Vibhu Sharma, Vikrant Kaulgud, Mainak Basu, Sanjay Podder, Kishore P. Durg, Sundeep Singh, Rajan Dilavar Mithani, Akshay Kasera, Swati Sharma, Priyavanshi Pathania, Adam Patten Burden, Pavel Valerievich Ponomarev, Peter Michael Lacy, Joshy Ravindran
Abstract: An optical system includes an eye-tracker and a head-mounted display for providing accurate eye-tracking in an interactive virtual environment. The eye-tracker includes a sensor module that captures one or multiple eye images of a user. The head-mounted display includes a processor and a display for presenting the user interface. The processor provides a user interface which includes one or multiple UI elements based on one or multiple gaze points of the user, which are computed based on the one or multiple eye images; acquires the distance between an estimated gaze point of the user and each UI element; acquires the score of each UI element based on the distance between the estimated gaze point and each UI element; and sets a specific UI element with the highest score as the target UI element associated with the estimated gaze point of the user.
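The distance-based scoring step described above could be sketched like this; the inverse-distance scoring formula and the element layout are illustrative assumptions, not the patent's actual scoring function.

```python
import math

def pick_target(gaze_point, ui_elements):
    """Score each UI element by proximity to the estimated gaze point and
    return the element with the highest score."""
    def score(element):
        ex, ey = element["centre"]
        gx, gy = gaze_point
        # Closer elements score higher (assumed inverse-distance form).
        return 1.0 / (1.0 + math.hypot(ex - gx, ey - gy))
    return max(ui_elements, key=score)

elements = [
    {"name": "play", "centre": (100, 100)},
    {"name": "stop", "centre": (300, 100)},
]
```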
Abstract: Techniques for moving about a computer simulated reality (CSR) setting are disclosed. An example technique includes displaying a current view of the CSR setting, the current view depicting a current location of the CSR setting from a first perspective corresponding to a first determined direction. The technique further includes displaying a user interface element, the user interface element depicting a destination location not visible from the current location, and, in response to receiving input representing selection of the user interface element, modifying the display of the current view to display a destination view depicting the destination location, wherein modifying the display of the current view to display the destination view includes enlarging the user interface element.
Abstract: Techniques are described herein for providing search features within a carousel. A request may be received to display a network page (e.g., a user profile page). The carousel may present a subset of items of a set of items (e.g., items associated with the user profile). User input indicating a scrolling action within the carousel can be received. In response, a user interface (UI) element associated with conducting a search may be presented in an expanded form overlaid atop the carousel. After a predefined period of time has elapsed, the UI element may transition to a collapsed form. If the UI element is selected, the user may be navigated to the end of the carousel where a statically-positioned presentation of the UI element is presented. A search may be conducted from the statically-positioned presentation. The search may be performed against the set of items associated with the user profile.
Abstract: An example method includes receiving, at a first computing device, market data related to a plurality of tradeable objects. The example method includes displaying, via an interface, the received market data at the first computing device. The interface is based on an interface object model including a plurality of data components corresponding to the received market data. The example method includes receiving an input selection to share the interface with a second computing device and generating a transfer object model based on the interface object model in response to the receipt of the input selection. The example method includes identifying a first group of the plurality of data components included in the transfer object model and redacting the first group of the plurality of data components corresponding to the received market data. The example method includes transmitting the redacted transfer object model to the second computing device.
Type:
Grant
Filed:
June 29, 2021
Date of Patent:
July 4, 2023
Assignee:
Trading Technologies International, Inc.
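The redaction step in the Trading Technologies abstract above could be sketched as follows. Representing the object model as a flat dictionary and replacing redacted components with a placeholder string are both simplifying assumptions.

```python
def build_transfer_model(interface_model, redaction_group):
    """Copy the interface object model, redacting the identified group of
    data components before the model is shared."""
    transfer = {}
    for key, value in interface_model.items():
        # Components in the redaction group are stripped of their values.
        transfer[key] = "REDACTED" if key in redaction_group else value
    return transfer
```

The original interface object model is left untouched; only the transfer copy sent to the second computing device is redacted.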
Abstract: Apparatus and methods for implementing high-speed Ethernet links using a hybrid PHY (Physical layer) selectively configurable to employ a non-interleaved RS-FEC (Reed-Solomon Forward Error Correction) sublayer or an interleaved RS-FEC sublayer. An adaptive link training protocol is used during link training to determine whether to employ the non-interleaved or interleaved RS-FEC during link DATA mode. Training frames are exchanged between link partners, including control and status fields used to respectively request a non-interleaved or interleaved FEC mode and confirm the requested FEC mode is to be used during link DATA mode. The hybrid PHY includes interleaved RS-FEC and non-interleaved RS-FEC sublayers for transmitter and receiver operations. During link training, a determination is made as to whether a local receiver is likely to see decision feedback equalizer (DFE) burst errors. If so, the interleaved FEC mode is selected; otherwise, the non-interleaved FEC mode is selected or is the default FEC mode.
Abstract: A method and a system for influencing an optical output of image data on an output device in a vehicle. A viewing direction of a driver of the vehicle is determined. If the determined viewing direction is directed towards the output device, the optical output of the image data is faded down with an average fading-out rate DOWN. The fading-out rate DOWN defines a temporal decrease in the optical perceptibility of the output image data by a human. If the determined viewing direction was previously directed towards the output device and is now directed away from the output device, the optical output of the image data is faded up with an average fading-in rate UP. The fading-in rate UP defines a temporal increase in the optical perceptibility of the output image data by a human.
Type:
Grant
Filed:
March 16, 2020
Date of Patent:
June 6, 2023
Assignee:
MERCEDES-BENZ GROUP AG
Inventors:
Thomas Schmid, Holger Enzmann, Christian Patzelt, Gerd Gulde, Nina Hallier
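The fade-down/fade-up behavior in the Mercedes-Benz abstract above could be sketched as a per-timestep update. Treating perceptibility as a single clamped brightness value and the specific rate units are assumptions for illustration.

```python
def fade_step(brightness, looking_at_display, rate_down, rate_up, dt):
    """Advance the output brightness by one time step of length dt seconds:
    fade down while the driver looks at the display, fade up once the
    gaze moves away. Rates are per-second changes (illustrative units)."""
    if looking_at_display:
        brightness -= rate_down * dt   # temporal decrease in perceptibility
    else:
        brightness += rate_up * dt     # temporal increase in perceptibility
    return min(1.0, max(0.0, brightness))
```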
Abstract: An online system receives, from a client device of a user participating in an online meeting, a request to share content with another user participating in the online meeting. The online system identifies content items from applications running on the client device. For each identified content item, the online system determines, e.g., by using a trained model or instructing the client device to use the trained model, a sharing score that indicates a likelihood that the user would select the content item to share with the other user. Candidate content items are selected from the identified content items based on the sharing scores. The candidate content items are provided for display to the user in a user interface running on the client device. The user interface allows the user to select one of the candidate content items to share with the other user.
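The candidate-selection step described above could be sketched as a score-then-filter pass. The threshold, the cap on candidates, and the lookup-table scorer standing in for the trained model are all illustrative assumptions.

```python
def candidate_content_items(items, score_item, threshold=0.5, k=3):
    """Rank content items by sharing score and keep the top k that clear
    the threshold (score_item stands in for the trained model)."""
    scored = sorted(items, key=score_item, reverse=True)
    return [item for item in scored if score_item(item) >= threshold][:k]

# Hypothetical scores a model might assign to items open on the client device.
scores = {"slides.pptx": 0.9, "notes.txt": 0.2, "budget.xlsx": 0.6}
candidates = candidate_content_items(list(scores), scores.get)
```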
Abstract: Method, apparatus, and computer program product embodiments are provided for distributing and installing content and settings on client devices without receiving any user input at the client devices, which limits usage of the client device by a user at the client device to a first set of usage activities. A device controller may remotely configure and control client devices by providing instructions and content for distribution to the client devices. The instructions may cause the client devices to install the content on the client devices without requiring any user input to initiate the installation or during installation of the content. The client device may be further configured to allow management of the client device by the device controller.
Type:
Grant
Filed:
March 22, 2022
Date of Patent:
May 9, 2023
Assignee:
Elo Touch Solutions, Inc.
Inventors:
Kenneth North, Ragini Rajendra Prasad, Michael James Power, Haroun Ansari Mohammed Ansari, Neeraj Pendse
Abstract: A method of obtaining a 3D model of a subject includes capturing a plurality of images of the subject by an image capture device, associating respective ones of the plurality of images with a timestamp and unique identification number, transmitting image capture data comprising the timestamp and the unique identification number to an external server, receiving a transmission comprising the unique identification number, and, responsive to receiving the transmission, transmitting a first image of the plurality of images that is associated with the unique identification number.
Abstract: Briefly, embodiments of methods and/or systems of generating preference indices for contiguous portions of digital images are disclosed. For one embodiment, as an example, parameters of a neural network may be developed to generate object labels for digital images. The developed parameters may be transferred to a neural network utilized to generate signal sample value levels corresponding to preference indices for contiguous portions of digital images.
Type:
Grant
Filed:
March 20, 2017
Date of Patent:
May 9, 2023
Assignee:
Yahoo Assets LLC
Inventors:
Suleyman Cetintas, Kuang-chih Lee, Jia Li
Abstract: Provided is an apparatus comprising: a determination unit configured to determine a first region to be viewed by a user in a display screen; a detection unit configured to detect a user's visual line; and a display control unit configured to change a display mode of the first region in response to the first region not being viewed.
Abstract: A method, apparatus, and computer program product are therefore provided for providing natural guidance using one or more location graphs based on a context of a user. Methods may include: receiving an indication of a location of a user; identifying a location graph of location objects proximate the location of the user; establishing a context of the user; establishing a path among the location objects of the location graph based, at least in part, on the context of the user; generating natural language guidance based on the path among the location objects; and providing natural language guidance to the user. The location of a user may include a location along a route between an origin and a destination, where identifying a location graph of location objects may include identifying a location graph of location objects proximate the route between the origin and the destination.
Abstract: There is provided a control device including a display control unit configured to cause display of a display unit to shift between a first display mode in which a first kind of image having a predetermined relation with an electric-to-electric (EE) image acquired by an imaging unit and a second kind of image different from the first kind of image are displayed and a second display mode in which the first kind of image is not displayed and the second kind of image is displayed, and a power control unit configured to control power supply to the imaging unit in the shift between the first and second display modes.
Abstract: A method, a computer program, or a computerized system for monitoring graphical content of a screen display by the actions of receiving, by a first software program executed by a processor of a computerized device communicatively coupled to a first server via a communication network, at least one parameter characterizing a graphical object; monitoring a stream of data received by a second software program executed by the processor of the computerized device communicatively coupled to a second server via the communication network, to capture the at least one parameter characterizing a graphical object; monitoring a stream of data between the second software program and a screen display of the computerized device to capture at least one graphical object associated with the at least one parameter characterizing the graphical object; and capturing the graphical object.
Abstract: An application management apparatus (103) acquires a use application program from a storage region when either one of an information processing apparatus (1)(101) and an information processing apparatus (2)(102) starts a startup process, the use application program being an application program to be used by a user of a start-up information processing apparatus, that is, the information processing apparatus which starts the startup process. The application management apparatus (103) transmits the acquired use application program to the start-up information processing apparatus. Each of the information processing apparatus (1)(101) and the information processing apparatus (2)(102) receives the use application program transmitted from the application management apparatus (103), starts the received use application program, and completes the startup process.
Abstract: In an information processing device, a controller displays a database image in a first display region on a display at a first display scale. The database image represents at least partial data included in a database. The controller receives a specific operation, and displays a partial enlarged image, in a second display region on the display at a second display scale greater than the first display scale in response to reception of the specific operation. The partial enlarged image corresponds to an extraction image in a partial extraction region in the database image and is an enlarged image of the extraction image so that the partial enlarged image is displayed in the second display region at the second display scale. The second display region overlaps at most a portion of the first display region. The controller generates print data including at least partial data included in the database.
Abstract: Contextual paste target prediction is used to predict one or more target applications for a paste action, and do so based upon a context associated with the content that has previously been selected and copied. The results of the prediction may be used to present to a user one or more user controls to enable the user to activate one or more predicted application, and in some instances, additionally configure a state of a predicted application to use the selected and copied content once activated. As such, upon completing a copy action, a user may, in some instances, be provided with an ability to quickly switch to an application into which the user was intending to paste the content. This can provide a simpler user interface in a device such as phones and tablet computers with limited display size and limited input device facilities. It can result in a paste operation into a different application with fewer steps than is possible conventionally.
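The contextual prediction described above could be sketched with a simple keyword-overlap ranker. The app profiles, keyword sets, and matching rule are stand-ins for whatever contextual model the patent actually contemplates.

```python
def predict_paste_targets(copied_content, app_profiles, top_n=2):
    """Rank candidate applications for a paste action by how well the copied
    content's context matches each app's profile (keyword overlap is a
    stand-in for the contextual model described above)."""
    words = {w.strip(".,").lower() for w in copied_content.split()}
    ranked = sorted(
        app_profiles,
        key=lambda app: len(words & app["keywords"]),
        reverse=True,
    )
    return [app["name"] for app in ranked[:top_n]]

# Hypothetical per-application context profiles.
apps = [
    {"name": "Maps", "keywords": {"street", "ave", "road", "city"}},
    {"name": "Mail", "keywords": {"dear", "regards", "meeting"}},
    {"name": "Browser", "keywords": {"http", "www", "com"}},
]
```

Copying an address would then surface a control for switching directly to the predicted mapping application.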
Abstract: A system for providing context tree based on data model is disclosed. The system comprises an interface, a processor, and a memory. The interface is configured to receive a data model entry point, and to receive one or more context filters. The processor is configured to determine context tree data based on the one or more context filters and the data model entry point from any context tree provider that has appropriate context tree information. The memory is coupled to the processor and is configured to provide the processor with instructions.
Type:
Grant
Filed:
May 21, 2019
Date of Patent:
January 31, 2023
Assignee:
OPEN TEXT CORPORATION
Inventors:
Muthukumarappa Jayakumar, Satyapal P. Reddy, Ravikumar Meenakshisundaram
Abstract: A computing device is provided comprising a processor, a first display device having a first capacitive touch sensor, a second display device having a second capacitive touch sensor, and a hinge positioned between and coupled to each of the first display device and the second display device, the first display device and second display device being rotatable about the hinge and separated by a hinge angle. The processor is configured to detect the hinge angle at a first point in time, determine that the hinge angle at the first point in time is outside a first predetermined range, and upon at least determining that the hinge angle is outside the first predetermined range, perform run-time calibration of at least a plurality of rows of the capacitive touch sensor of the first display device and of the capacitive touch sensor of the second display device.
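The hinge-angle trigger described above reduces to a range check; the bounds used here are illustrative, not values from the patent.

```python
def needs_calibration(hinge_angle_deg, allowed_range=(85.0, 95.0)):
    """True when the detected hinge angle falls outside the first
    predetermined range, triggering run-time calibration of the
    capacitive touch sensor rows (range values are illustrative)."""
    low, high = allowed_range
    return not (low <= hinge_angle_deg <= high)
```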
Abstract: An electronic gaming machine architecture is provided in which a gaming platform application and wagering game applications are executed in separate processes but may share access to common display windows; such display windows may be caused to be generated by the gaming platform application, which may then assign specific display windows to the various wagering game applications, along with window handles usable to direct graphical content thereto. The wagering game applications may then direct graphical content to the display windows, while the gaming platform application may retain control over the size, position, transparency, and/or z-order of the display windows.
Type:
Grant
Filed:
November 16, 2020
Date of Patent:
January 24, 2023
Assignee:
Aristocrat Technologies Australia Pty Limited
Abstract: Various embodiments of the present invention provide a method and a device comprising a memory and a processor, wherein the processor is configured to: identify a context on the basis of at least one of a time, a location, and a use pattern; provide a notification associated with the identified context; detect a user input for selecting the provided notification; and provide recommendation information or configuration information associated with the context. Various embodiments are possible.
Abstract: Provided is an information processing system including: a feature information accepting unit that, by causing a user to specify a position in a feature map in which a plurality of feature images indicating features to be displayed are arranged, accepts entry of feature information associated with the position in the feature map; an image acquisition unit that acquires an image used for display based on the feature information; and a display information generation unit that generates display information used for causing a display device to display the image used for display.
Abstract: An electronic device is provided. The electronic device includes a communication circuit, a display, a memory including a first display driver, a processor functionally connected with the communication circuit, the display, and the memory, and a secure module which is physically separated from the processor, and includes a secure processor and a second display driver, and the secure processor is configured to: when secure data is received from an external server through the communication circuit, disable the first display driver and enable the second display driver, and output a user interface including a first object corresponding to the secure data to the display by using the enabled second display driver.
Type:
Grant
Filed:
April 7, 2021
Date of Patent:
January 10, 2023
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Bumhan Kim, Taehoon Kim, Jonghyeon Lee, Dasom Lee
Abstract: The techniques disclosed herein provide a system that can generate targeted positioning of message content for multi-user communication interfaces. In some configurations a system may generate a user interface that displays a number of video stream renderings, wherein individual video stream renderings, e.g., thumbnail views, show a participant of a communication session. When one of the participants sends the user a private message, the system renders at least a portion of the private message in a semi-transparent format as an overlay on the video rendering of the sender. This allows a traditional video stream interface to also function as an organizer for private messages sent to a particular user. This user interface format allows a user to readily identify a broader perspective of chat activity without requiring a user to enter specific chat user interfaces or provide a number of manual input entries to view the private chat content.
Abstract: A method includes displaying, by an electronic device, a graphical user interface (GUI) of a first application on a touchscreen of the electronic device, detecting, by the electronic device, a screenshot operation from a user, taking, by the electronic device, a screenshot of the GUI in response to the screenshot operation, displaying, on the touchscreen, a first preview image corresponding to an obtained first screenshot, detecting, by the electronic device, a first touch operation on the first preview image, updating the first preview image to a second preview image in response to the first touch operation, and displaying the second preview image on the touchscreen.
Abstract: An information processing apparatus (2000) includes a summarizing unit (2040) and a display control unit (2060). The summarizing unit (2040) obtains a video (30) generated by each of a plurality of cameras (10). Furthermore, the summarizing unit (2040) performs a summarizing process on the video (30) and generates summary information of the video (30). The display control unit (2060) causes a display system (20) to display the video (30). Here, the display control unit (2060) causes the display system (20) to display the summary information of the video (30) in response to a change in a display state of the video (30) on the display system (20) satisfying a predetermined condition.
Abstract: Systems and methods provide for configuring and transferring multiple data files including image data files using a mobile device. A mobile device may acquire multiple data files including image files from disparate sources and transmit them to an enhanced image processing server. The enhanced image processing server may analyze the received image files using various techniques. To aid in analysis, the server may also interface with various internal and external databases storing reference images or other reference data of previously analyzed similar data. Further still, the enhanced image processing server may transmit a result of the analysis back to a mobile device.
Type:
Grant
Filed:
December 9, 2016
Date of Patent:
December 20, 2022
Assignee:
ALLSTATE INSURANCE COMPANY
Inventors:
Jennifer A. Brandmaier, Mark E. Faga, Robert H. Johnson, Daniel Koza, William Loo, Clint J. Marlow, Kurt M. Stricker
Abstract: Systems and methods provide for an automated system for analyzing damage and processing claims associated with an insured item, such as a vehicle. An enhanced claims processing server may analyze damage associated with the insured item using photos/video transmitted to the server from a user device (e.g., a mobile device). The mobile device may receive feedback from the server regarding the acceptability of submitted photos/video, and if the server determines that any of the submitted photos/video is unacceptable, the mobile device may capture additional photos/video until all of the data are deemed acceptable. To aid in damage analysis, the server may also interface with various internal and external databases storing reference images of undamaged items and cost estimate information for repairing previously analyzed damages to similar items. Further still, the server may generate a payment for compensating a claimant for repair of the insured item.
Type:
Grant
Filed:
May 5, 2014
Date of Patent:
December 20, 2022
Assignee:
Allstate Insurance Company
Inventors:
Jennifer A. Brandmaier, Mark E. Faga, Robert H. Johnson, Daniel Koza, William Loo, Clint J. Marlow, Kurt M. Stricker
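The capture-until-acceptable feedback loop this abstract describes between the server and the mobile device can be sketched as below. The function names and callables are hypothetical; the patent does not specify an API:

```python
def collect_acceptable_photos(photos, is_acceptable, capture_replacement):
    """Keep re-capturing any photo/video the server rejects until every item passes.

    is_acceptable: server-side check of a submitted item (assumed interface).
    capture_replacement: mobile-device re-capture step (assumed interface).
    """
    accepted = []
    for photo in photos:
        while not is_acceptable(photo):
            # Server feedback drives another capture attempt on the mobile device.
            photo = capture_replacement(photo)
        accepted.append(photo)
    return accepted
```

Only once every item clears the acceptability check does the damage analysis proceed.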
Abstract: The embodiments of the present disclosure provide an object control method and a terminal device. The method includes: receiving a user's first input on a target manipulation control and a first object in a first screen, where an object in the target manipulation control is an object in a second screen, and the second screen is a screen, among at least two screens, other than the first screen; and executing, on the first screen and in response to the first input, a first action corresponding to the first input on the first object, where the first object is an object in the target manipulation control or an object in a target area, and the target area is an area on the first screen other than an area where the target manipulation control is located. The method may be applied to an object control scenario of a multi-screen terminal device.
Abstract: One variation of a method for assisting execution of manual protocols at production equipment includes: identifying a site occupied by a mobile device based on a geospatial location of the device; identifying a space within the building occupied by the device based on identifiers of a set of wireless access points wirelessly accessible to the device and known locations of wireless access points within the building; loading a protocol associated with an equipment unit in the space; calculating a position of the device within the space based on positions of optical features, detected in a field of view of an optical sensor at the device, relative to reference features represented in a space model of the space; and, when the position of the device falls within a threshold distance of a reference location proximal to the equipment unit defined in a step of the protocol, rendering guidance for the step.
Type:
Grant
Filed:
May 29, 2019
Date of Patent:
December 13, 2022
Assignee:
Apprentice FS, Inc.
Inventors:
Frank Maggiore, Angelo Stracquatanio, Nabil Hajj Chehade
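The coarse-to-fine localization steps in the abstract above (space identification from visible access points, then a distance check against a step's reference location) can be sketched as follows. The AP-to-space table, the voting heuristic, and the 2D distance check are illustrative assumptions; the patent's space model is richer than this:

```python
# Hypothetical mapping of known access-point identifiers to building spaces.
AP_TO_SPACE = {
    "aa:bb:cc:01": "cleanroom-1",
    "aa:bb:cc:02": "cleanroom-1",
    "aa:bb:cc:03": "lab-2",
}

def identify_space(visible_aps):
    """Pick the space whose known access points best match those the device sees."""
    votes = {}
    for ap in visible_aps:
        space = AP_TO_SPACE.get(ap)
        if space:
            votes[space] = votes.get(space, 0) + 1
    return max(votes, key=votes.get) if votes else None

def within_guidance_range(device_pos, reference_pos, threshold):
    """True when the device is within the step's threshold distance of the reference location."""
    dx = device_pos[0] - reference_pos[0]
    dy = device_pos[1] - reference_pos[1]
    return (dx * dx + dy * dy) ** 0.5 <= threshold
```

Guidance for a protocol step would be rendered only when `within_guidance_range` holds for that step's reference location.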
Abstract: This application provides a touch control method and an apparatus, and relates to the field of communications technologies. The method includes: obtaining, by a terminal, a first touch operation entered by a user on a touchscreen; and mapping, by the terminal when the first touch operation is performed on a first preset area in a target interface, the first touch operation to a second touch operation, so that a target application responds to the second touch operation, instead of the first touch operation. The target interface is any interface that is presented by the target application and covers the first preset area, and the target application is an application running in the foreground.
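The preset-area touch remapping in this abstract can be sketched as below. The area bounds, the event dictionary shape, and the particular coordinate remapping are hypothetical; the patent only specifies that a first touch operation in the preset area is mapped to a second operation that the foreground application receives instead:

```python
PRESET_AREA = (0, 0, 200, 200)  # hypothetical first preset area (x0, y0, x1, y1)

def in_area(x, y, area):
    x0, y0, x1, y1 = area
    return x0 <= x < x1 and y0 <= y < y1

def map_touch(event):
    """Map a first touch operation in the preset area to a second touch operation."""
    x, y = event["x"], event["y"]
    if in_area(x, y, PRESET_AREA):
        # Illustrative mapping: deliver the gesture at a remapped target point so
        # the target application responds to the second operation, not the first.
        return {"type": event["type"], "x": x + 400, "y": y}
    return event  # outside the preset area, the original operation passes through
```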
Abstract: An apparatus, computer-readable medium, and computer-implemented method for transforming a hierarchical document object model (DOM) to filter non-rendered elements, including parsing elements in a hierarchical DOM to identify one or more tags, any properties, and any values of the elements, removing invisible elements determined based on properties of each invisible element, each invisible element comprising an element of the DOM that is hidden from a user when the DOM is rendered, removing empty elements based on the tags of each element, each empty element comprising a tag without any associated values, identifying remaining elements of the hierarchical DOM that have parent elements that have been removed from the hierarchical DOM, and re-parenting the remaining elements to new parent elements remaining in the hierarchical DOM based on traversing the hierarchical DOM from each of the remaining elements.
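The DOM-filtering transformation described above (drop invisible elements, drop empty elements, re-parent orphaned survivors) can be sketched recursively. The `Node` class and the specific visibility rules are simplified assumptions; a real implementation would inspect many more properties:

```python
class Node:
    def __init__(self, tag, properties=None, value=None):
        self.tag = tag
        self.properties = properties or {}
        self.value = value
        self.children = []

def is_invisible(node):
    # Illustrative visibility rules; a real system checks many more properties.
    style = node.properties.get("style", "")
    return "display:none" in style or node.properties.get("hidden") == "true"

def filter_dom(node):
    """Return surviving copies of node: invisible and empty elements are dropped,
    and children of dropped elements re-parent to the nearest surviving ancestor."""
    if is_invisible(node):
        return []  # hidden subtrees never reach the rendered page
    survivors = []
    for child in node.children:
        survivors.extend(filter_dom(child))
    if node.value is None:
        # Empty element (a tag without associated values): remove it and hand
        # its surviving children up to this node's parent.
        return survivors
    copy = Node(node.tag, dict(node.properties), node.value)
    copy.children = survivors
    return [copy]
```

Because removal and re-parenting happen in one bottom-up pass, a grandchild whose parent was removed attaches directly to its grandparent in the filtered tree.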
Abstract: A User Interface (UI) object detection system employs an initial dataset comprising a set of images, which may include synthesized images, to train a Machine Learning (ML) engine to generate an initial trained model. A data point generator is employed to generate an updated synthesized image set which is used to further train the ML engine. The data point generator may employ images generated by an application program as a reference by which to generate the updated synthesized image set. The images generated by the application program may be tagged in advance. Alternatively, or in addition, the images generated by the application program may be captured dynamically by a user using the application program.
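The train-then-augment cycle in this abstract can be sketched as a simple loop. The callables and the number of rounds are assumptions for illustration; the patent does not fix a training API:

```python
def train_with_synthesis(initial_images, train, synthesize, rounds=3):
    """Iteratively train a model, then grow the dataset with synthesized images.

    train: trains the ML engine on the current dataset (assumed interface).
    synthesize: data point generator producing new images from the model
                and reference application screenshots (assumed interface).
    """
    dataset = list(initial_images)
    model = train(dataset)
    for _ in range(rounds):
        dataset.extend(synthesize(model))  # data point generator output
        model = train(dataset)
    return model, dataset
```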
Abstract: A method comprises: at a computer device configured with user applications grouped in multiple virtual desktops hosted on and displayed by the computer device: establishing an online meeting with remote computer devices over a network; responsive to user input, selecting one of the multiple virtual desktops to be a shared virtual desktop, such that all other ones of the multiple virtual desktops become unshared virtual desktops; sharing, with the remote computer devices, the shared virtual desktop, including first user applications of the user applications that are grouped in the shared virtual desktop; and not sharing, with the remote computer devices, any of the unshared virtual desktops and second user applications of the user applications that are grouped in the unshared virtual desktops.
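The exclusive-sharing rule above (selecting one virtual desktop makes every other desktop unshared) can be sketched as state on the host device. The class and the desktop-to-apps mapping are illustrative assumptions:

```python
class MeetingShareState:
    """Track which virtual desktop (and its grouped apps) is shared in a meeting."""

    def __init__(self, desktops):
        # desktops: mapping of desktop name -> list of user applications grouped on it
        self.desktops = desktops
        self.shared = None

    def select_shared(self, name):
        if name not in self.desktops:
            raise KeyError(name)
        self.shared = name  # all other desktops implicitly become unshared

    def shared_apps(self):
        """Applications visible to the remote computer devices."""
        return list(self.desktops[self.shared]) if self.shared else []

    def unshared_apps(self):
        """Applications never transmitted to the remote computer devices."""
        return [app for d, apps in self.desktops.items() if d != self.shared for app in apps]
```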
Abstract: A video conferencing system includes a multi-user interaction slate for the execution of applications having a state that is responsive to inputs from multiple attendees of a video conference. The video conferencing system includes a graphical user interface having video slates provided for video streams and multi-user interaction slates for the execution of code that is responsive to inputs provided at multiple client devices. The video conferencing system can determine a current state of a multi-user interaction slate in response to inputs provided by users of the client devices in association with the multi-user interaction slate. The video conferencing system can provide data for rendering the graphical user interface, including video data associated with the video slates and data indicative of the current state of the multi-user interaction slate.
Type:
Grant
Filed:
March 22, 2021
Date of Patent:
November 22, 2022
Assignee:
GOOGLE LLC
Inventors:
Kevin Jonathan Jeyakumar, Carrie Christina Merry Barkema
Abstract: Implementations for providing services to a constrained environment are described. A user interface may be provided for editing files. The files or data structure storing the files may exceed a storage limitation associated with the user interface. The user interface may represent the data structure comprising the files while only storing data indicating modifications to the file or data structure.
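The storage-limited representation this abstract describes (the UI keeps only modification data, not the oversized file itself) can be sketched as an overlay of edits on an on-demand backing store. The `fetch_chunk` interface and string-based chunks are simplifying assumptions:

```python
class DeltaEditor:
    """Represent a file larger than the UI's storage limit by keeping only edits."""

    def __init__(self, fetch_chunk):
        # fetch_chunk(offset, length) reads from the full backing store on demand.
        self.fetch_chunk = fetch_chunk
        self.edits = {}  # offset -> replacement text; the only data stored locally

    def write(self, offset, data):
        self.edits[offset] = data

    def read(self, offset, length):
        """Materialize a view: fetched base content with local edits applied on top."""
        out = list(self.fetch_chunk(offset, length))
        for off, data in self.edits.items():
            for i, ch in enumerate(data):
                pos = off + i - offset
                if 0 <= pos < length:
                    out[pos] = ch
        return "".join(out)
```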
Abstract: A virtual make-up apparatus and method: store cosmetic item information of cosmetic items of different colors; store a different texture component for each stored cosmetic item of a specific color; extract an object portion image of a virtual make-up from a facial image; extract color information from the object portion image; designate an item of the virtual make-up corresponding to a stored cosmetic item and output a color image by applying a color corresponding to the designated item on the object portion image; output a texture image, based on analyzed color information corresponding to a stored cosmetic item, by adding a texture component to a part of the object portion image; and display a virtual make-up image of virtual make-up using the designated item applied on the facial image, by using the color and texture images, and the object portion image of the virtual make-up of the facial image.
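The two-stage compositing in this abstract (a color image from the designated cosmetic item, then an added texture component) can be sketched per pixel. The alpha blend, RGB tuples, and scalar texture values are illustrative assumptions about the image representation:

```python
def apply_virtual_makeup(region_pixels, item_color, texture, alpha=0.5):
    """Blend a cosmetic item's color into the object portion image, then add texture.

    region_pixels: RGB tuples of the extracted object portion (e.g., lips).
    item_color: RGB of the designated cosmetic item.
    texture: per-pixel brightness offsets for the item's texture component.
    """
    out = []
    for pixel, tex in zip(region_pixels, texture):
        blended = tuple(
            int((1 - alpha) * p + alpha * c) for p, c in zip(pixel, item_color)
        )
        # Texture component (e.g., gloss or shimmer) is added on top of the color image.
        out.append(tuple(min(255, b + tex) for b in blended))
    return out
```

The resulting region would then be composited back onto the facial image to form the virtual make-up image.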
Abstract: The present disclosure relates to a method for collecting data from a number of vehicles, in which at least one data collection device of a respective vehicle can be configured to read, store and transmit information recorded by respective vehicle sensors to a server. The at least one data collection device of a respective vehicle can also be configured by means of a control command to generate and transmit to a server a data set using selected values from a number of selected vehicle sensors, and in which the control command is generated by a central control device and is transmitted to the at least one data collection device.
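The control-command mechanism above, where a centrally generated command tells the in-vehicle device which sensor values to include in the transmitted data set, can be sketched as follows. The command format and sensor interface are assumptions for illustration:

```python
class DataCollectionDevice:
    """In-vehicle device that builds a data set per a central control command."""

    def __init__(self, sensors):
        self.sensors = sensors  # sensor name -> callable returning the current value

    def apply_control_command(self, command):
        """Read only the sensors the central control device selected, for transmission."""
        selected = command["sensors"]
        return {name: self.sensors[name]() for name in selected if name in self.sensors}
```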