Abstract: Methods and apparatus for systems using a scene codec are described, where systems are either providers or consumers of multi-way, just-in-time, only-as-needed scene data including subscenes and subscene increments. An example system using a scene codec includes a plenoptic scene database containing one or more digital models of scenes, where representations and organization of representations are distributable across multiple systems such that collectively the multiplicity of systems can represent scenes of almost unlimited detail. The system may further include highly efficient means for the processing of these representations and organizations of representations, providing the just-in-time, only-as-needed subscenes and scene increments necessary for ensuring a maximally continuous user experience enabled by a minimal amount of newly provided scene information, where the highly efficient means include a spatial processing unit.
Type:
Grant
Filed:
May 2, 2019
Date of Patent:
January 16, 2024
Inventors:
David Scott Ackerson, Donald J. Meagher, John K Leffingwell
Abstract: Provided is an image processing apparatus that performs image processing on original image data, the apparatus including a communication unit that performs communication with a terminal, a storage that stores parameter information indicating a relationship between image processing-related information transmitted from the terminal and an image processing parameter, and a processor, in which the processor acquires the original image data, performs, in a case in which an image processing request including the image processing-related information is received from the terminal via the communication unit, the image processing on the original image data using the image processing parameter corresponding to the image processing-related information based on the parameter information, and transmits image data after the image processing to the terminal via the communication unit.
Abstract: A system that includes artificial intelligence (AI) configured to identify text and images within an industrial reference. Example industrial references include electrical drawings and P&IDs. The system includes a method for training an artificial intelligence model to recognize text characters and strings, in addition to industrial images, using a limited sample set. The use of a limited sample set improves computer performance by relying on a smaller dataset to train the model.
Abstract: Atomic position data may be obtained from x-ray diffraction data. The x-ray diffraction data for a sample may be squared and/or otherwise operated on to obtain input data for a neural network. The input data may be input to a trained convolutional neural network. The convolutional neural network may have been trained based on pairs of known atomic structures and corresponding neural network inputs. For the neural network input corresponding to the sample and input to the trained convolutional neural network, the convolutional neural network may obtain an atomic structure corresponding to the sample.
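The preprocessing step this abstract describes, squaring diffraction data before feeding it to a convolutional network, can be sketched minimally. The normalization step and the tiny 1-D convolution below are illustrative assumptions; the abstract only says the data "may be squared and/or otherwise operated on":

```python
import numpy as np

def prepare_cnn_input(diffraction: np.ndarray) -> np.ndarray:
    """Square the measured diffraction amplitudes to obtain intensities,
    then normalize to [0, 1] (a hypothetical preprocessing choice)."""
    squared = diffraction ** 2
    return squared / squared.max()

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution, the basic operation a CNN layer applies."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

# Toy 1-D "diffraction pattern" standing in for real measurement data.
pattern = np.array([0.1, 0.5, 2.0, 0.5, 0.1])
x = prepare_cnn_input(pattern)
features = conv1d(x, np.array([0.25, 0.5, 0.25]))
print(x.max())        # 1.0 after normalization
print(features.shape)  # (3,)
```

In the patented system, such features would feed a network trained on pairs of known atomic structures and their corresponding inputs.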
Abstract: Provided are aspects relating to methods and computing devices for allocating computing resources and selecting hyperparameter configurations during continuous retraining and operation of a machine learning model. In one example, a computing device configured to be located at a network edge between a local network and a cloud service includes a processor and a memory storing instructions executable by the processor to operate a machine learning model. During a retraining window, a selected portion of a video stream is selected for labeling. At least a portion of a labeled retraining data set is selected for profiling a superset of hyperparameter configurations. For each configuration of the superset of hyperparameter configurations, a profiling test is performed. The profiling test is terminated, and a change in inference accuracy that resulted from the profiling test is extrapolated. Based upon the extrapolated inference accuracies, a set of selected hyperparameter configurations is output.
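The profiling loop in this abstract, run a short test per hyperparameter configuration, terminate early, extrapolate the accuracy change, and output a selected set, can be sketched as follows. The linear extrapolation and the `profile_fn` callback are assumptions for illustration; the abstract does not fix a model:

```python
def profile_configs(configs, profile_fn, budget_steps, full_steps, top_k):
    """Run a truncated profiling test for each configuration, extrapolate
    the observed accuracy change to a full retraining window, and return
    the best `top_k` configurations. `profile_fn(cfg, steps)` is a
    hypothetical callback returning the accuracy delta after `steps`."""
    extrapolated = {}
    for cfg in configs:
        delta = profile_fn(cfg, budget_steps)
        # Naive linear extrapolation (an illustrative assumption).
        extrapolated[cfg] = delta * (full_steps / budget_steps)
    return sorted(extrapolated, key=extrapolated.get, reverse=True)[:top_k]

# Hypothetical per-config accuracy gains observed during short tests.
fake_gains = {"lr=0.1": 0.02, "lr=0.01": 0.05, "lr=0.001": 0.01}
best = profile_configs(fake_gains, lambda c, s: fake_gains[c],
                       budget_steps=10, full_steps=100, top_k=2)
print(best)  # ['lr=0.01', 'lr=0.1']
```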
Abstract: A printing apparatus management server manages a plurality of printing apparatuses, obtains, in response to a request from a voice device management server, information of a predetermined printing apparatus registered, from the plurality of printing apparatuses, as a printing apparatus to be used in the printing apparatus management server, transmits, to the voice device management server, the obtained information of the predetermined printing apparatus, obtains remaining amount information transmitted from the predetermined printing apparatus, and transmits the obtained remaining amount information to the consumable item management server. Voice notification to a user is performed, via a voice device configured to communicate with the voice device management server, based on the obtained remaining amount information, and processing for placing an order for the consumable item is performed based on a voice instruction from the user accepted by the voice device.
Abstract: Apparatuses, systems, and techniques to train a generative model based at least in part on a private dataset. In at least one embodiment, the generative model is trained based at least in part on a differentially private Sinkhorn algorithm, for example, using backpropagation with gradient descent to determine a gradient of a set of parameters of the generative model and modifying the set of parameters based at least in part on the gradient.
Abstract: A method for extracting a video clip includes obtaining a video and splitting the video into multiple clips. The multiple clips are input into a pre-trained scoring model to obtain a score of each of the multiple clips. The scoring model is obtained by training on data pairs of first clips and second clips; these data pairs are constructed from clips labeled with target attributes, where the target attributes characterize clips as target clips or non-target clips. A target clip is extracted from the multiple clips based on the score of each of the multiple clips. An apparatus for extracting a video clip and a computer readable storage medium are also disclosed.
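The split-score-extract flow in this abstract can be sketched in a few lines. The lambda scorer below is a stand-in for the pre-trained scoring model, which the abstract leaves unspecified:

```python
def split_into_clips(num_frames, clip_len):
    """Split a video (represented here as a frame count) into
    fixed-length (start, end) clips."""
    return [(s, min(s + clip_len, num_frames))
            for s in range(0, num_frames, clip_len)]

def extract_target_clip(num_frames, clip_len, score_fn):
    """Score every clip with a scoring model and return the
    highest-scoring one, mirroring the extraction step in the abstract."""
    clips = split_into_clips(num_frames, clip_len)
    return max(clips, key=score_fn)

# Hypothetical scorer: pretend frames near the middle are most interesting.
scorer = lambda clip: -abs((clip[0] + clip[1]) / 2 - 50)
print(extract_target_clip(100, 20, scorer))  # (40, 60)
```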
Abstract: According to various embodiments, a system and method for processing an image of a biological specimen stained for a presence of at least one lymphocyte biomarker is disclosed. The system and method are configured to detect lymphocytes in the image and compute a foreground segmentation mask based on the lymphocytes detected within the image. Outlines of the detected lymphocytes are identified in the image by filtering the image with the computed foreground segmentation mask. A shape metric may be derived for each of the detected lymphocytes based on the identified lymphocyte outlines. The derived shape metric may be associated with location information for each of the detected lymphocytes, and a value of each of the derived shape metrics may be compared to a predetermined threshold value. A predictive cell motility label may be assigned to each of the detected lymphocytes based on the comparison.
Type:
Grant
Filed:
October 9, 2020
Date of Patent:
December 12, 2023
Assignee:
VENTANA MEDICAL SYSTEMS, INC.
Inventors:
Joerg Bredno, Konstanty Korski, Oliver Grimm
Abstract: Correction content is made learnable based on a correction operation performed by a user on an attribute setting screen in setting attribute information, such as a filename, based on a character string obtained by character recognition processing on a scan image.
Abstract: In implementations of systems for generating occurrence contexts for objects in digital content collections, a computing device implements a context system to receive context request data describing an object that is depicted with additional objects in digital images of a digital content collection. The context system generates relationship embeddings for the object and each of the additional objects using a representation learning model trained to predict relationships for objects. A relationship graph is formed for the object that includes a vertex for each relationship between the object and the additional objects indicated by the relationship embeddings. The context system clusters the vertices of the relationship graph into contextual clusters that each represent an occurrence context of the object in the digital images of the digital content collection.
Type:
Grant
Filed:
October 26, 2020
Date of Patent:
December 5, 2023
Assignee:
Adobe Inc.
Inventors:
Manoj Kilaru, Vishwa Vinay, Vidit Jain, Shaurya Goel, Ryan A. Rossi, Pratyush Garg, Nedim Lipka, Harkanwar Singh
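The relationship-graph step in the Adobe abstract above, connect objects whose relationship embeddings indicate a relationship, then cluster the vertices into contextual clusters, can be sketched with cosine similarity and connected components. Both the similarity threshold and the use of connected components as the clustering method are illustrative assumptions; the patent does not specify them:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def contextual_clusters(embeddings, threshold=0.9):
    """Form a relationship graph whose vertices are object embeddings,
    add an edge when similarity clears `threshold`, and return connected
    components as contextual clusters (union-find over vertex pairs)."""
    n = len(embeddings)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(embeddings[i], embeddings[j]) >= threshold:
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return sorted(clusters.values())

# Toy embeddings: objects 0 and 1 are similar, object 2 is distinct.
emb = [np.array([1.0, 0.0]), np.array([0.99, 0.1]), np.array([0.0, 1.0])]
print(contextual_clusters(emb))  # [[0, 1], [2]]
```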
Abstract: A printing apparatus includes a reception unit configured to receive a print job, and an execution unit configured to execute printing of the print job received by the reception unit. The execution unit has a function of canceling the printing of the print job in a case where a setting time or more elapses without a particular cause preventing the printing being removed. The execution unit also has a function of canceling printing of a new print job received by the reception unit in a state where the setting time or more has elapsed with the particular cause still not removed.
Abstract: An example system may include a processor and a non-transitory machine-readable storage medium storing instructions executable by the processor to determine, during an imaging operation by an imaging component of a printing device, whether a print queue, supplied by the imaging component, will be depleted by a printing component of the printing device prior to a completion of the imaging operation; and adjust, based on the determination, a media feed rate of the printing component.
Type:
Grant
Filed:
April 21, 2020
Date of Patent:
November 28, 2023
Assignee:
Hewlett-Packard Development Company, L.P.
Inventors:
Dean J. Richtsmeier, Brian C. Mayer, Kenneth Scott Line
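The depletion check in the HP abstract above compares how fast the imaging component fills the queue with how fast the printing component drains it. A minimal sketch, assuming constant page rates and an illustrative slowdown factor (the abstract only says the feed rate is "adjusted"):

```python
def will_deplete(queue_pages, imaging_rate, printing_rate, remaining_pages):
    """Predict whether the print queue empties before imaging finishes:
    compare the time to drain the queued pages against the time needed
    to image the remaining pages (rates in pages per second)."""
    time_to_finish_imaging = remaining_pages / imaging_rate
    time_to_drain_queue = queue_pages / printing_rate
    return time_to_drain_queue < time_to_finish_imaging

def adjusted_feed_rate(current_rate, depleting, slowdown=0.5):
    """Slow the media feed when depletion is predicted; the factor is a
    hypothetical choice for illustration."""
    return current_rate * slowdown if depleting else current_rate

depleting = will_deplete(queue_pages=3, imaging_rate=1.0,
                         printing_rate=2.0, remaining_pages=10)
print(depleting)                           # True
print(adjusted_feed_rate(2.0, depleting))  # 1.0
```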
Abstract: An image forming apparatus includes a controller configured to obtain PJL data via a data interface, obtain filter data via the data interface, and store the obtained filter data in a non-volatile memory, the filter data associating non-target PJL data with target PJL data, the non-target PJL data being PJL data not intended for causing the image forming apparatus to perform a particular process, the target PJL data being PJL data intended for causing the image forming apparatus to perform the particular process. When the obtained PJL data is the non-target PJL data associated with the target PJL data in the filter data stored in the non-volatile memory, the controller converts the obtained PJL data into the target PJL data associated with the non-target PJL data in the filter data stored in the non-volatile memory.
Abstract: A three-dimensional reconstruction method based on half-peak probability density distribution, including: slicing a three-dimensional point cloud along the Z-axis direction to obtain N spatial layers; extracting the scatter information in the i-th spatial layer and projecting the information onto the Zi plane; constructing a membership function of each grid and scatter in the Zi plane and drawing a three-dimensional probability density plot; making a plane parallel to the XOY plane through the half-peak value wmax/2 of the three-dimensional probability density plot and intersecting the plot to obtain a contour LXY; and superimposing the radioactive source reconstruction contours corresponding to the N spatial layers sequentially to obtain a three-dimensional reconstruction model of a radioactive source.
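The per-layer step of this method, project one layer's scatter onto its Zi plane, estimate a density surface, and cut it at half the peak, can be approximated on a grid. The 2-D histogram stands in for the membership-function density and the boolean mask approximates the wmax/2 contour LXY; both are simplifications of the abstract's construction:

```python
import numpy as np

def half_peak_mask(points_xy, bins=8):
    """Project one spatial layer's scatter onto its Z_i plane as a 2-D
    density histogram, then keep cells at or above half the peak
    density, a grid approximation of the w_max/2 contour L_XY."""
    density, _, _ = np.histogram2d(points_xy[:, 0], points_xy[:, 1],
                                   bins=bins, range=[[0, 1], [0, 1]])
    return density >= density.max() / 2

rng = np.random.default_rng(0)
# Synthetic layer: scatter concentrated near the center of the plane,
# mimicking a compact radioactive source cross-section.
layer = rng.normal(0.5, 0.08, size=(500, 2))
mask = half_peak_mask(layer)
print(mask.shape)  # (8, 8)
print(mask.any())  # True: at least the peak cell survives the cut
```

Stacking such per-layer masks for all N layers would give the superimposed reconstruction the abstract describes.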
Abstract: A system and method for rich content transformation are provided. The system and method allow rich content transformation to be separately processed on a client device and on a cloud-based server. The client device downsizes a rich content and transmits the downsized rich content to the cloud-based server via a network. The cloud-based server calculates function parameters based on the downsized rich content using one or more machine learning models included in the server. The calculated function parameters are transmitted to the client device via the network. The client device then applies these function parameters to the rich content on the client device to obtain the transformed rich content.
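The client/server split in this abstract, downsize on the device, compute parameters on the server, apply them to the full-resolution content back on the device, can be sketched end to end. The stride-based downsizing and the contrast-stretch "model" are hypothetical stand-ins for the device's downsizer and the server's machine learning models:

```python
import numpy as np

def client_downsize(image, factor=4):
    """Client side: shrink the content before upload by striding pixels
    (a crude stand-in for the device's downsizing)."""
    return image[::factor, ::factor]

def server_compute_params(small_image):
    """Server side: derive transformation parameters from the downsized
    content; here a simple gain/offset for contrast stretching."""
    lo, hi = float(small_image.min()), float(small_image.max())
    gain = 1.0 / (hi - lo) if hi > lo else 1.0
    return {"gain": gain, "offset": -lo * gain}

def client_apply(image, params):
    """Client side: apply the returned parameters to the full-resolution
    content, so the heavy model never sees the original."""
    return image * params["gain"] + params["offset"]

full = np.linspace(0.0, 200.0, 64).reshape(8, 8)
params = server_compute_params(client_downsize(full))
out = client_apply(full, params)
print(round(float(out.min()), 3), round(float(out.max()), 3))
```

Only the small image crosses the network; the function parameters returned are tiny compared with the content itself, which is the bandwidth advantage the abstract claims.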
Abstract: An image forming apparatus includes an image forming portion configured to form an image on a sheet; and an operating portion including a display screen which includes at least numeric keys, a start key configured to cause the image forming portion to start image formation, and a stop key configured to stop the image formation by the image forming portion, and including a hardware key provided on a back side of the display screen and configured to start a maintenance mode of the image forming apparatus.
Abstract: An image processing device includes: one or more processors including hardware. The one or more processors are configured to: calculate, on a basis of a reference image acquired by capturing an image of a reference subject which has an optical characteristic that is equivalent to at least a part of a living body, image transformation parameters through congruence transformation that transforms a coordinate, defined in a color space, corresponding to a color of a target region included in the reference image into a coordinate corresponding to an achromatic color in the color space; and perform, in the color space on a basis of the calculated image transformation parameters, the congruence transformation of colors of a color image acquired by capturing an image of the living body, the color image being constituted by at least two monochromatic images corresponding to different illuminations having different center wavelengths.
Abstract: In classifying images by machine learning, provided are an image classification method, device, and program for classifying images whose feature differences are difficult to detect, in particular for classifying interference fringe images of the tear fluid layer by dry eye type. The method includes a step of acquiring a feature value from an interference fringe image of the tear fluid layer for learning, a step of constructing a model for classifying an image from the feature value acquired from the interference fringe image of the tear fluid layer for learning, a step of acquiring the feature value from an interference fringe image of the tear fluid layer for testing, and a step of performing classification processing that classifies the interference fringe image of the tear fluid layer for testing by dry eye type using the model and the acquired feature value.
Abstract: This application provides a method for transmitting face image data and transferring value, apparatuses, an electronic device, and a storage medium, which belongs to the field of network technologies. The method for transmitting face image data includes acquiring a face data stream through the sensor, and transmitting the face data stream to the first processor; performing image screening on a face image in the face data stream by the first processor to obtain at least a target face image, the target face image meeting a target condition; retrieving a target web address from the memory by the first processor; and transmitting the target face image to the target web address by the first processor.
Type:
Grant
Filed:
November 1, 2021
Date of Patent:
October 10, 2023
Assignee:
TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors:
Shaoming Wang, Zhijun Geng, Jun Zhou, Runzeng Guo
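The Tencent abstract above describes a pipeline on the first processor: screen the face data stream for images meeting a target condition, retrieve a target web address, and transmit the selected images. A minimal sketch, where the sharpness score, the threshold, and the URL are hypothetical placeholders (the abstract does not define the target condition):

```python
def screen_faces(frames, quality_fn, threshold):
    """First-processor step: scan the face data stream and keep only
    frames meeting the target condition (here a quality score, a
    hypothetical stand-in for the patent's criteria)."""
    return [f for f in frames if quality_fn(f) >= threshold]

def transmit(face, target_url):
    """Stand-in for sending a screened face image to the web address
    retrieved from memory; real code would use an HTTPS client."""
    return {"url": target_url, "payload": face}

# Toy "data stream": two frames with hypothetical sharpness scores.
stream = [{"id": 1, "sharpness": 0.3}, {"id": 2, "sharpness": 0.9}]
best = screen_faces(stream, lambda f: f["sharpness"], threshold=0.8)
msgs = [transmit(f, "https://example.invalid/face-upload") for f in best]
print([m["payload"]["id"] for m in msgs])  # [2]
```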