IMAGE ARTIFACT REDUCTION USING FILTER DATA BASED ON DEEP IMAGE PRIOR OPERATIONS

A method includes obtaining first image data that is based on waveform return data and is descriptive of an estimated solution to an inverse problem associated with the waveform return data. The method also includes performing a plurality of deep image prior operations, using an image prior based on the first image data, to generate filter data. The method further includes modifying the first image data based on the filter data to generate second image data. The method also includes performing an artifact reduction process based on the second image data to generate third image data.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application claims priority from U.S. Provisional Patent Application No. 63/255,322, entitled “ARTIFACT REDUCTION FOR SOLUTIONS TO INVERSE PROBLEMS”, filed Oct. 13, 2021, and claims priority from U.S. Provisional Patent Application No. 63/263,532, entitled “ARTIFACT REDUCTION FOR SOLUTIONS TO INVERSE PROBLEMS”, filed Nov. 4, 2021, and claims priority from U.S. Provisional Patent Application No. 63/362,787, entitled “DATA SELECTION FOR IMAGE GENERATION”, filed Apr. 11, 2022, and claims priority from U.S. Provisional Patent Application No. 63/362,789, entitled “IMAGE ARTIFACT REDUCTION USING FILTER DATA BASED ON DEEP IMAGE PRIOR OPERATIONS”, filed Apr. 11, 2022, and claims priority from U.S. Provisional Patent Application No. 63/362,792, entitled “RELIABILITY FOR MACHINE-LEARNING BASED IMAGE GENERATION”, filed Apr. 11, 2022, the contents of each of which are incorporated herein by reference in their entirety.

FIELD

The present disclosure is generally related to using filter data based on deep image prior operations to reduce artifacts in images.

BACKGROUND

Conceptually, a “forward problem” attempts to make a prediction based on a model of causal factors associated with a system and initial conditions of the system. An “inverse problem” reverses the forward problem by attempting to model causal factors and initial conditions based on data (e.g., measurements or other observations of the system). Stated another way, an inverse problem starts with the effects (e.g., measurement or other data) and attempts to determine model parameters, whereas the forward problem starts with the causes (e.g., a model of the system) and attempts to determine the effects. Inverse problems are used for many remote sensing applications, such as radar, sonar, medical imaging, computer vision, seismic imaging, etc.

Optimization techniques are commonly used to generate solutions to inverse problems. For example, with particular assumptions about a system that generated a set of return data, a reverse time migration technique can be used to generate image data representing the system. However, images generated using such techniques generally include artifacts. Such artifacts can be reduced by increasing the quantity of data used to generate the solution; however, generating more data is costly and time consuming. Furthermore, the computing resources required to perform optimization increase dramatically as the amount of data increases.

SUMMARY

According to a particular aspect, a method includes obtaining, by one or more processors, first image data that is based on waveform return data and is descriptive of an estimated solution to an inverse problem associated with the waveform return data. The method also includes performing, by the one or more processors, a plurality of deep image prior operations, using a reference image (also referred to herein as an “image prior”) based on the first image data, to generate filter data. The method further includes modifying, by the one or more processors, the first image data based on the filter data to generate second image data. The method also includes performing, by the one or more processors, an artifact reduction process based on the second image data to generate third image data.

According to a particular aspect, a system includes one or more processors configured to obtain first image data that is based on waveform return data and is descriptive of an estimated solution to an inverse problem associated with the waveform return data. The one or more processors are also configured to perform a plurality of deep image prior operations, using an image prior based on the first image data, to generate filter data. The one or more processors are further configured to modify the first image data based on the filter data to generate second image data. The one or more processors are also configured to perform an artifact reduction process based on the second image data to generate third image data. Artifacts are reduced in the third image data relative to the first image data.

According to a particular aspect, a computer-readable storage device stores instructions that, when executed by one or more processors, cause the one or more processors to obtain first image data that is based on waveform return data and is descriptive of an estimated solution to an inverse problem associated with the waveform return data. The instructions also cause the one or more processors to perform a plurality of deep image prior operations, using an image prior based on the first image data, to generate filter data. The instructions further cause the one or more processors to modify the first image data based on the filter data to generate second image data. The instructions also cause the one or more processors to perform an artifact reduction process based on the second image data to generate third image data. Artifacts are reduced in the third image data relative to the first image data.

According to a particular aspect, a method includes obtaining, by one or more processors, first image data that is based on waveform return data and is descriptive of an estimated solution to an inverse problem associated with the waveform return data. The method also includes performing, by the one or more processors, an artifact reduction process based on the first image data to generate second image data. The method further includes performing, by the one or more processors, a plurality of deep image prior operations, using an image prior based on the first image data or the second image data, to generate filter data. The method also includes modifying, by the one or more processors, the second image data based on the filter data to generate third image data.

According to a particular aspect, a system includes one or more processors configured to obtain first image data that is based on waveform return data and is descriptive of an estimated solution to an inverse problem associated with the waveform return data. The one or more processors are also configured to perform an artifact reduction process based on the first image data to generate second image data. The one or more processors are further configured to perform a plurality of deep image prior operations, using an image prior based on the first image data or the second image data, to generate filter data. The one or more processors are also configured to modify the second image data based on the filter data to generate third image data.

According to a particular aspect, a computer-readable storage device stores instructions that, when executed by one or more processors, cause the one or more processors to obtain first image data that is based on waveform return data and is descriptive of an estimated solution to an inverse problem associated with the waveform return data. The instructions also cause the one or more processors to perform an artifact reduction process based on the first image data to generate second image data. The instructions further cause the one or more processors to perform a plurality of deep image prior operations, using an image prior based on the first image data or the second image data, to generate filter data. The instructions also cause the one or more processors to modify the second image data based on the filter data to generate third image data.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A is a diagram illustrating a first example of a system that uses filter data based on deep image prior operations for artifact reduction in accordance with some aspects of the present disclosure.

FIG. 1B is a diagram illustrating a second example of a system that uses filter data based on deep image prior operations for artifact reduction in accordance with some aspects of the present disclosure.

FIG. 2 is a diagram illustrating examples of renderings of various data generated by the system of FIG. 1A or the system of FIG. 1B in comparison to a high-quality image generated using a traditional technique.

FIG. 3 illustrates a flow chart of a first example of a method of using filter data based on deep image prior operations for artifact reduction in accordance with some aspects of the present disclosure.

FIG. 4 illustrates a flow chart of a second example of a method of using filter data based on deep image prior operations for artifact reduction in accordance with some aspects of the present disclosure.

FIG. 5 illustrates an example of a computer system configured to use filter data based on deep image prior operations for artifact reduction in accordance with some aspects of the present disclosure.

FIG. 6 is a flow chart of an example of a method of generating image data that includes reducing artifacts in solution data associated with an inverse problem.

FIG. 7 is a flow chart of an example of a method of training a machine-learning model of a gradient descent artifact reduction system.

FIG. 8 is a diagram illustrating particular aspects of determining parameters of a gradient descent artifact reduction system.

FIG. 9A is a diagram illustrating an image generated via reverse time migration based on a large quantity of waveform return data.

FIG. 9B is a diagram illustrating an image generated via reverse time migration based on a much smaller quantity of waveform return data than used for FIG. 9A.

FIG. 9C is a diagram illustrating an image generated, based on the same quantity of waveform return data as used for FIG. 9B, via reverse time migration and gradient descent artifact reduction, according to particular aspects disclosed herein.

FIG. 10A is a diagram illustrating an image generated via reverse time migration based on a large quantity of waveform return data.

FIG. 10B is a diagram illustrating an image generated via reverse time migration based on a much smaller quantity of waveform return data than used for FIG. 10A.

FIG. 10C is a diagram illustrating an image generated, based on the same quantity of waveform return data as used for FIG. 10B, via reverse time migration and gradient descent artifact reduction, according to particular aspects disclosed herein.

DETAILED DESCRIPTION

The present disclosure describes systems and methods that use machine learning to generate a solution to an inverse problem. In general, operations disclosed herein use machine-learning models, physics-based models, or both, to generate one or more estimated solutions to the inverse problem. A relatively small set of waveform return data is used to generate the estimated solution(s), and as a result, the estimated solution(s) are expected to include artifacts.

The waveform return data includes data based on records (referred to herein as “waveform return records”) for multiple sampling events associated with an observed area. The image data represents a solution to an inverse problem associated with the waveform return data. For example, the waveform return data may be generated by a seismic imaging system that includes one or more sources and one or more receivers. In this example, during a particular sampling event, the source(s) cause one or more waveforms to propagate in the observed area. Subsurface features within the observed area reflect the waveform(s), and the receiver(s) generate measurements (“waveform return measurements”) that indicate, for example, a magnitude of a received waveform return, a timing of receipt of the waveform return, etc. In this example, a waveform return record of the particular sampling event may include the waveform return measurements generated by the receiver(s) for the particular sampling event. Further, in this example, the waveform return data for the particular sampling event may be identical to the waveform return record, or the waveform return data may represent the waveform return record after particular data transformation operations are performed. To illustrate, the waveform return records generally include time-series data, and the waveform return data may include time-series data or may include depth domain data or images based on the time-series data.

Various additional operations are performed to reduce the intensity, distribution, and/or visual impact of the artifacts. In a particular aspect, deep image prior operations are used to generate filter data, which is used to reduce artifacts in the estimated solution(s). As described in more detail below, the deep image prior operations iteratively update a model to try to generate output data that matches a reference image (referred to herein as an “image prior”). In general, earlier iterations of the deep image prior operations tend to generate output data that recreate stronger features (e.g., low frequency and/or high intensity features) of the image prior. Later iterations tend to improve the overall similarity of the output data to the image prior, including recreating weaker features (e.g., higher frequency and/or lower intensity features) of the image prior.
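As an illustrative, non-limiting example, the following toy sketch (in Python, with all names and values hypothetical) mimics one aspect of the behavior described above: when an iterative fit toward an image prior is stopped early, high-intensity features of the prior are reproduced, in absolute terms, far more completely than weaker features. A real deep image prior implementation fits a convolutional network to the image prior; a simple per-pixel gradient-descent fit is used here only to illustrate the early-stopping idea.

```python
# Toy, hypothetical sketch of early-stopped fitting toward an image prior.
# An all-zero output is iteratively stepped toward the prior under a mean
# squared error loss; stopping early leaves strong features mostly
# reproduced and weak features mostly absent.

def dip_fit(image_prior, iterations, lr=0.1):
    """Gradient-descent fit of an all-zero start toward the image prior."""
    out = [0.0] * len(image_prior)
    for _ in range(iterations):
        for i, target in enumerate(image_prior):
            out[i] += lr * (target - out[i])  # step on 0.5 * (out - target)**2
    return out

# A strong artifact (intensity 10.0) next to a weak detail (intensity 1.0).
prior = [10.0, 1.0]
early = dip_fit(prior, iterations=5)
```

After five iterations, the strong feature has been reproduced to roughly 4.1 of its intensity of 10.0, while the weak feature has reached only about 0.41 of 1.0, so the early output is dominated (in absolute terms) by the strong feature, analogous to the artifacts being recreated first.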

When relatively small sets of waveform return data are subjected to reverse time migration (or other processes to transform the waveform return data to a depth domain), the resulting image data tend to have fairly strong migration swing artifacts (referred to herein simply as “artifacts”). When such an image (e.g., an image having relatively strong artifacts) is used as an image prior for deep image prior operations, the artifacts are recreated in the output data during early iterations. Accordingly, if the deep image prior operations are stopped after an appropriate number of iterations, an output image of the deep image prior operations can be used as filter data to facilitate removal of artifacts from another image. In some situations, preprocessing operations can be used to emphasize the artifacts in the image prior so that the artifacts are even more strongly represented in the filter data.

The filter data is used to modify a preliminary solution to the inverse problem to reduce artifacts in the preliminary solution. In some implementations, the preliminary solution includes image data generated using a physics-based technique, such as reverse time migration. In other implementations, machine-learning techniques are used to generate the preliminary solution to the inverse problem. In some such implementations, the machine-learning techniques may be able to generate high quality solutions (e.g., images with relatively few visible artifacts) using less waveform return data than would be used to generate solutions of similar quality using only physics-based techniques. In some implementations, physics-based techniques and machine-learning techniques are used cooperatively to generate the preliminary solution to the inverse problem.
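As an illustrative, non-limiting example (all names and values hypothetical), modifying the preliminary solution based on the filter data may be sketched as a weighted per-pixel subtraction, where the filter image from the early-stopped deep image prior operations approximates the artifact content:

```python
# Hypothetical sketch: subtract weighted filter data (which approximates
# the artifacts) from the preliminary solution to produce image data with
# reduced artifact content.

def apply_filter(preliminary, filter_data, weight=1.0):
    """Per-pixel subtraction of weighted filter data from the preliminary image."""
    return [p - weight * f for p, f in zip(preliminary, filter_data)]

preliminary = [3.0, 8.0, 2.5]   # estimated solution containing an artifact
filter_data = [0.0, 6.0, 0.0]   # filter data capturing the artifact at index 1
second = apply_filter(preliminary, filter_data)
```

The `weight` parameter is illustrative of tuning how aggressively the artifact estimate is removed; actual implementations may use more elaborate combination rules.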

Generating high quality solutions to inverse problems involves resource intensive calculations. For example, when using reverse time migration, waveform return data from hundreds or even thousands of sampling events (e.g., “shots” in the seismic imaging context) may be used in order to generate a detailed representation of an observed system and to reduce artifacts in the solution.

Machine-learning techniques can generate high-quality solutions using less waveform return data and fewer computing resources (e.g., memory and processor time) than would be used to generate solutions of similar quality using reverse time migration alone. However, such machine-learning techniques generally use models that are trained or optimized using solutions (e.g., images) generated using reverse time migration. As a result, the models learn to generate solutions (e.g., images) that include artifacts that are present in the solutions (e.g., images) generated using reverse time migration.

By using filter data generated by the deep image prior operations to remove artifacts from an estimated solution, a higher quality solution (e.g., a more accurate representation of an observed area) can be generated using less waveform return data than would be used to generate a similar quality solution using traditional techniques. As a result, less waveform return data can be collected (which saves time and sensing resources) and fewer computing resources (e.g., memory used to store waveform return data as well as working memory used to generate the solution, and processor cycles) are used.

Particular aspects of the present disclosure are described below with reference to the drawings. In the description, common features are designated by common reference numbers throughout the drawings. As used herein, various terminology is used for the purpose of describing particular implementations only and is not intended to be limiting. For example, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Further, the terms “comprise,” “comprises,” and “comprising” may be used interchangeably with “include,” “includes,” or “including.” Additionally, the term “wherein” may be used interchangeably with “where.” As used herein, “exemplary” may indicate an example, an implementation, and/or an aspect, and should not be construed as limiting or as indicating a preference or a preferred implementation.

As used herein, an ordinal term (e.g., “first,” “second,” “third,” “Nth,” etc.) used to modify an element, such as a structure, a component, an operation, etc., does not by itself indicate any priority or order of the element with respect to another element, but rather merely distinguishes the element from another element having a same name (but for use of the ordinal term). As used herein, the term “set” refers to a grouping of one or more elements, and the term “plurality” refers to multiple elements. Additionally, in some instances, an ordinal term herein may use a letter (e.g., “Nth”) to indicate an arbitrary or open-ended number of distinct elements (e.g., zero or more elements). Different letters (e.g., “N” and “M”) may be used for ordinal terms that describe two or more different elements when no particular relationship among the number of each of the two or more different elements is specified. For example, unless defined otherwise in the text, N may be equal to M, N may be greater than M, or N may be less than M.

In the present disclosure, terms such as “determining,” “calculating,” “estimating,” “shifting,” “adjusting,” etc. may be used to describe how one or more operations are performed. Such terms are not to be construed as limiting and other techniques may be utilized to perform similar operations. Additionally, as referred to herein, “generating,” “calculating,” “estimating,” “using,” “selecting,” “accessing,” and “determining” may be used interchangeably. For example, “generating,” “calculating,” “estimating,” or “determining” a parameter (or a signal) may refer to actively generating, estimating, calculating, or determining the parameter (or the signal) or may refer to using, selecting, or accessing the parameter (or signal) that is already generated, such as by another component or device.

As used herein, “coupled” may include “communicatively coupled,” “electrically coupled,” or “physically coupled,” and may also (or alternatively) include any combinations thereof. Two devices (or components) may be coupled (e.g., communicatively coupled, electrically coupled, or physically coupled) directly or indirectly via one or more other devices, components, wires, buses, networks (e.g., a wired network, a wireless network, or a combination thereof), etc. Two devices (or components) that are electrically coupled may be included in the same device or in different devices and may be connected via electronics, one or more connectors, or inductive coupling, as illustrative, non-limiting examples. In some implementations, two devices (or components) that are communicatively coupled, such as in electrical communication, may send and receive electrical signals (digital signals or analog signals) directly or indirectly, such as via one or more wires, buses, networks, etc. As used herein, “directly coupled” may include two devices that are coupled (e.g., communicatively coupled, electrically coupled, or physically coupled) without intervening components.

As used herein, the term “machine learning” should be understood to have any of its usual and customary meanings within the fields of computer science and data science, such meanings including, for example, processes or techniques by which one or more computers can learn to perform some operation or function without being explicitly programmed to do so. As a typical example, machine learning can be used to enable one or more computers to analyze data to identify patterns in data and generate a result based on the analysis. For certain types of machine learning, the results that are generated include data that indicates an underlying structure or pattern of the data itself. Such techniques, for example, include so-called “clustering” techniques, which identify clusters (e.g., groupings of data elements of the data).

For certain types of machine learning, the results that are generated include a data model (also referred to as a “machine-learning model” or simply a “model”). Typically, a model is generated using a first data set to facilitate analysis of a second data set. For example, a first portion of a large body of data may be used to generate a model that can be used to analyze the remaining portion of the large body of data. As another example, a set of historical data can be used to generate a model that can be used to analyze future data.

Since a model can be used to evaluate a set of data that is distinct from the data used to generate the model, the model can be viewed as a type of software (e.g., instructions, parameters, or both) that is automatically generated by the computer(s) during the machine learning process. As such, the model can be portable (e.g., can be generated at a first computer, and subsequently moved to a second computer for further training, for use, or both). Additionally, a model can be used in combination with one or more other models to perform a desired analysis. To illustrate, first data can be provided as input to a first model to generate first model output data, which can be provided (alone, with the first data, or with other data) as input to a second model to generate second model output data indicating a result of a desired analysis. Depending on the analysis and data involved, different combinations of models may be used to generate such results. In some examples, multiple models may provide model output that is input to a single model. In some examples, a single model provides model output to multiple models as input.
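As an illustrative, non-limiting example of the model chaining described above (with the toy "models" represented as simple hypothetical functions), first model output data is provided, together with the first data, as input to a second model:

```python
# Hypothetical sketch of chaining two models: the second model receives
# both the original data and the first model's output.

def model_a(data):
    """First model: produces first model output data."""
    return [x * 2.0 for x in data]

def model_b(data, a_out):
    """Second model: consumes the original data plus the first model's output."""
    return sum(data) + sum(a_out)

data = [1.0, 2.0, 3.0]
result = model_b(data, model_a(data))
```

Real pipelines substitute trained machine-learning models for these functions, but the data-flow pattern (output of one model feeding another, alone or alongside the original input) is the same.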

Examples of machine-learning models include, without limitation, perceptrons, neural networks, support vector machines, regression models, decision trees, Bayesian models, Boltzmann machines, adaptive neuro-fuzzy inference systems, as well as combinations, ensembles and variants of these and other types of models. Variants of neural networks include, for example and without limitation, prototypical networks, autoencoders, transformers, self-attention networks, convolutional neural networks, deep neural networks, deep belief networks, etc. Variants of decision trees include, for example and without limitation, random forests, boosted decision trees, etc.

Since machine-learning models are generated by computer(s) based on input data, machine-learning models can be discussed in terms of at least two distinct time windows—a creation/training phase and a runtime phase. During the creation/training phase, a model is created, trained, adapted, validated, or otherwise configured by the computer based on the input data (which in the creation/training phase, is generally referred to as “training data”). Note that the trained model corresponds to software that has been generated and/or refined during the creation/training phase to perform particular operations, such as classification, prediction, encoding, or other data analysis or data synthesis operations. During the runtime phase (or “inference” phase), the model is used to analyze input data to generate model output. The content of the model output depends on the type of model. For example, a model can be trained to perform classification tasks or regression tasks, as non-limiting examples. In some implementations, a model may be continuously, periodically, or occasionally updated, in which case training time and runtime may be interleaved or one version of the model can be used for inference while a copy is updated, after which the updated copy may be deployed for inference.

In some implementations, a previously generated model is trained (or re-trained) using a machine-learning technique. In this context, “training” refers to adapting the model or parameters of the model to a particular data set. Unless otherwise clear from the specific context, the term “training” as used herein includes “re-training” or refining a model for a specific data set. For example, training may include so-called “transfer learning.” As described further below, in transfer learning a base model may be trained using a generic or typical data set, and the base model may be subsequently refined (e.g., re-trained or further trained) using a more specific data set.

A data set used during training is referred to as a “training data set” or simply “training data”. The data set may be labeled or unlabeled. “Labeled data” refers to data that has been assigned a categorical label indicating a group or category with which the data is associated, and “unlabeled data” refers to data that is not labeled. Typically, “supervised machine-learning processes” use labeled data to train a machine-learning model, and “unsupervised machine-learning processes” use unlabeled data to train a machine-learning model; however, it should be understood that a label associated with data is itself merely another data element that can be used in any appropriate machine-learning process. To illustrate, many clustering operations can operate using unlabeled data; however, such a clustering operation can use labeled data by ignoring labels assigned to data or by treating the labels the same as other data elements.

Machine-learning models can be initialized from scratch (e.g., by a user, such as a data scientist) or using a guided process (e.g., using a template or previously built model). Initializing the model includes specifying parameters and hyperparameters of the model. “Hyperparameters” are characteristics of a model that are not modified during training, and “parameters” of the model are characteristics of the model that are modified during training. The term “hyperparameters” may also be used to refer to parameters of the training process itself, such as a learning rate of the training process. In some examples, the hyperparameters of the model are specified based on the task the model is being created for, such as the type of data the model is to use, the goal of the model (e.g., classification, regression, anomaly detection), etc. The hyperparameters may also be specified based on other design goals associated with the model, such as a memory footprint limit, where and when the model is to be used, etc.

Model type and model architecture of a model illustrate a distinction between model generation and model training. The model type of a model, the model architecture of the model, or both, can be specified by a user or can be automatically determined by a computing device. However, neither the model type nor the model architecture of a particular model is changed during training of the particular model. Thus, the model type and model architecture are hyperparameters of the model and specifying the model type and model architecture is an aspect of model generation (rather than an aspect of model training). In this context, a “model type” refers to the specific type or sub-type of the machine-learning model. As noted above, examples of machine-learning model types include, without limitation, perceptrons, neural networks, support vector machines, regression models, decision trees, Bayesian models, Boltzmann machines, adaptive neuro-fuzzy inference systems, as well as combinations, ensembles and variants of these and other types of models. In this context, “model architecture” (or simply “architecture”) refers to the number and arrangement of model components, such as nodes or layers, of a model, and which model components provide data to or receive data from other model components. As a non-limiting example, the architecture of a neural network may be specified in terms of nodes and links. To illustrate, a neural network architecture may specify the number of nodes in an input layer of the neural network, the number of hidden layers of the neural network, the number of nodes in each hidden layer, the number of nodes of an output layer, and which nodes are connected to other nodes (e.g., to provide input or receive output). As another non-limiting example, the architecture of a neural network may be specified in terms of layers. To illustrate, the neural network architecture may specify the number and arrangement of specific types of functional layers, such as long short-term memory (LSTM) layers, fully connected (FC) layers, convolution layers, etc. While the architecture of a neural network implicitly or explicitly describes links between nodes or layers, the architecture does not specify link weights. Rather, link weights are parameters of a model (rather than hyperparameters of the model) and are modified during training of the model.
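As an illustrative, non-limiting example (all names and sizes hypothetical), the distinction above can be sketched for a small fully connected network: the layer sizes are hyperparameters fixed before training, while the randomly initialized link weights are the parameters that training modifies:

```python
import random

# Hyperparameters: architecture fixed before (and unchanged during) training.
architecture = {"input": 4, "hidden": [8, 8], "output": 2}

def init_weights(arch, seed=0):
    """Randomly initialize link weights, i.e., the trainable parameters."""
    rng = random.Random(seed)
    sizes = [arch["input"], *arch["hidden"], arch["output"]]
    # One weight matrix per pair of adjacent layers.
    return [[[rng.gauss(0.0, 0.1) for _ in range(n_in)] for _ in range(n_out)]
            for n_in, n_out in zip(sizes, sizes[1:])]

weights = init_weights(architecture)  # parameters: modified during training
```

Changing an entry of `architecture` produces a different model (model generation); updating entries of `weights` is what happens during model training.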

In many implementations, a data scientist selects the model type before training begins. However, in some implementations, a user may specify one or more goals (e.g., classification or regression), and automated tools may select one or more model types that are compatible with the specified goal(s). In such implementations, more than one model type may be selected, and one or more models of each selected model type can be generated and trained. A best performing model (based on specified criteria) can be selected from among the models representing the various model types. Note that in this process, no particular model type is specified in advance by the user, yet the models are trained according to their respective model types. Thus, the model type of any particular model does not change during training.

Similarly, in some implementations, the model architecture is specified in advance (e.g., by a data scientist); whereas in other implementations, a process that both generates and trains a model is used. Generating (or generating and training) the model using one or more machine-learning techniques is referred to herein as “automated model building”. In one example of automated model building, an initial set of candidate models is selected or generated, and then one or more of the candidate models are trained and evaluated. In some implementations, after one or more rounds of changing hyperparameters and/or parameters of the candidate model(s), one or more of the candidate models may be selected for deployment (e.g., for use in a runtime phase).

Certain aspects of an automated model building process may be defined in advance (e.g., based on user settings, default values, or heuristic analysis of a training data set) and other aspects of the automated model building process may be determined using a randomized process. For example, the architectures of one or more models of the initial set of models can be determined randomly within predefined limits. As another example, a termination condition may be specified by the user or based on configuration settings. The termination condition indicates when the automated model building process should stop. To illustrate, a termination condition may indicate a maximum number of iterations of the automated model building process, in which case the automated model building process stops when an iteration counter reaches a specified value. As another illustrative example, a termination condition may indicate that the automated model building process should stop when a reliability metric associated with a particular model satisfies a threshold. As yet another illustrative example, a termination condition may indicate that the automated model building process should stop if a metric that indicates improvement of one or more models over time (e.g., between iterations) satisfies a threshold. In some implementations, multiple termination conditions, such as an iteration count condition, a time limit condition, and a rate of improvement condition can be specified, and the automated model building process can stop when one or more of these conditions is satisfied.
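A multiple-condition termination check of the kind described above can be sketched as follows; the function name, argument names, and default threshold values are illustrative assumptions rather than part of this disclosure.

```python
import time

def should_stop(iteration, start_time, improvement, *,
                max_iterations=100, time_limit_s=3600.0, min_improvement=1e-4):
    """Return True when any of several hypothetical termination
    conditions for an automated model building loop is satisfied."""
    if iteration >= max_iterations:                    # iteration count condition
        return True
    if time.monotonic() - start_time >= time_limit_s:  # time limit condition
        return True
    if improvement < min_improvement:                  # rate-of-improvement condition
        return True
    return False
```

The caller would evaluate `should_stop` once per iteration, passing the improvement of the model's metric since the previous iteration.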

Another example of training a previously generated model is transfer learning. “Transfer learning” refers to initializing a model for a particular data set using a model that was trained using a different data set. For example, a “general purpose” model can be trained to detect anomalies in vibration data associated with a variety of types of rotary equipment, and the general-purpose model can be used as the starting point to train a model for one or more specific types of rotary equipment, such as a first model for generators and a second model for pumps. As another example, a general-purpose natural-language processing model can be trained using a large selection of natural-language text in one or more target languages. In this example, the general-purpose natural-language processing model can be used as a starting point to train one or more models for specific natural-language processing tasks, such as translation between two languages, question answering, or classifying the subject matter of documents. Often, transfer learning can converge to a useful model more quickly than building and training the model from scratch.

Training a model based on a training data set generally involves changing parameters of the model with a goal of causing the output of the model to have particular characteristics based on data input to the model. To distinguish from model generation operations, model training may be referred to herein as optimization or optimization training. In this context, “optimization” refers to improving a metric, and does not mean finding an ideal (e.g., global maximum or global minimum) value of the metric. Examples of optimization trainers include, without limitation, backpropagation trainers, derivative free optimizers (DFOs), and extreme learning machines (ELMs). As one example of training a model, during supervised training of a neural network, an input data sample is associated with a label. When the input data sample is provided to the model, the model generates output data, which is compared to the label associated with the input data sample to generate an error value. Parameters of the model are modified in an attempt to reduce (e.g., optimize) the error value. As another example of training a model, during unsupervised training of an autoencoder, a data sample is provided as input to the autoencoder, and the autoencoder reduces the dimensionality of the data sample (which is a lossy operation) and attempts to reconstruct the data sample as output data. In this example, the output data is compared to the input data sample to generate a reconstruction loss, and parameters of the autoencoder are modified in an attempt to reduce (e.g., optimize) the reconstruction loss.
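The supervised optimization loop described above (generate output from input data, compare the output to labels to form an error value, and adjust parameters to reduce that error value) can be sketched for a toy linear model; the function name and hyperparameters are illustrative assumptions, and plain gradient descent on a mean squared error stands in for the optimization trainer.

```python
import numpy as np

def optimize_linear_model(inputs, labels, steps=500, lr=0.1):
    """Minimal sketch of optimization training: the model output is
    compared to the labels to form error values, and the model's
    parameters are adjusted to reduce the error."""
    rng = np.random.default_rng(0)
    w, b = rng.normal(), rng.normal()      # model parameters
    for _ in range(steps):
        preds = w * inputs + b             # model output for the input data
        error = preds - labels             # per-sample error values
        # gradient of mean squared error with respect to w and b
        w -= lr * 2.0 * np.mean(error * inputs)
        b -= lr * 2.0 * np.mean(error)
    return w, b
```

For example, training on samples drawn from the line y = 2x + 1 recovers parameters close to w = 2 and b = 1.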

As another example, to use supervised training to train a model to perform a classification task, each data element of a training data set may be labeled to indicate a category or categories to which the data element belongs. In this example, during the creation/training phase, data elements are input to the model being trained, and the model generates output indicating categories to which the model assigns the data elements. The category labels associated with the data elements are compared to the categories assigned by the model. The computer modifies the model until the model accurately and reliably (e.g., within some specified criteria) assigns the correct labels to the data elements. In this example, the model can subsequently be used (in a runtime phase) to receive unknown (e.g., unlabeled) data elements, and assign labels to the unknown data elements. In an unsupervised training scenario, the labels may be omitted. During the creation/training phase, model parameters may be tuned by the training algorithm in use such that during the runtime phase, the model is configured to determine which of multiple unlabeled “clusters” an input data sample is most likely to belong to.

As another example, to train a model to perform a regression task, during the creation/training phase, one or more data elements of the training data are input to the model being trained, and the model generates output indicating a predicted value of one or more other data elements of the training data. The predicted values of the training data are compared to corresponding actual values of the training data, and the computer modifies the model until the model accurately and reliably (e.g., within some specified criteria) predicts values of the training data. In this example, the model can subsequently be used (in a runtime phase) to receive data elements and predict values that have not been received. To illustrate, the model can analyze time series data, in which case, the model can predict one or more future values of the time series based on one or more prior values of the time series.
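As a minimal sketch of the regression task applied to time series data, the following fits a one-step predictor of the next value from the prior value; the helper name and the use of a degree-1 least-squares fit are illustrative assumptions.

```python
import numpy as np

def fit_next_value_predictor(series):
    """Least-squares fit of a one-step time series predictor,
    next = a * prev + b, as a minimal regression-task sketch."""
    prev, nxt = np.asarray(series[:-1]), np.asarray(series[1:])
    a, b = np.polyfit(prev, nxt, 1)  # degree-1 least-squares fit
    return lambda x: a * x + b       # predicts a future value from a prior value
```

Given a series that increases by 2 each step, the fitted predictor extrapolates the next value from the most recent one.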

In some aspects, the output of a model can be subjected to further analysis operations to generate a desired result. To illustrate, in response to particular input data, a classification model (e.g., a model trained to perform classification tasks) may generate output including an array of classification scores, such as one score per classification category that the model is trained to assign. Each score is indicative of a likelihood (based on the model's analysis) that the particular input data should be assigned to the respective category. In this illustrative example, the output of the model may be subjected to a softmax operation to convert the output to a probability distribution indicating, for each category label, a probability that the input data should be assigned the corresponding label. In some implementations, the probability distribution may be further processed to generate a one-hot encoded array. In other examples, other operations that retain one or more category labels and a likelihood value associated with each of the one or more category labels can be used.
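The post-processing steps above, a softmax operation converting raw classification scores into a probability distribution and an optional one-hot encoding retaining the most likely label, can be sketched as follows; the function names are illustrative.

```python
import numpy as np

def scores_to_probabilities(scores):
    """Softmax: convert an array of classification scores into a
    probability distribution over category labels."""
    shifted = scores - np.max(scores)  # shift for numerical stability
    exps = np.exp(shifted)
    return exps / np.sum(exps)

def one_hot(probabilities):
    """Retain only the most likely category as a one-hot encoded array."""
    encoded = np.zeros_like(probabilities)
    encoded[np.argmax(probabilities)] = 1.0
    return encoded
```

The probabilities sum to one, and the one-hot array marks only the highest-probability category.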

FIG. 1A is a diagram illustrating a first example of a system 100 that uses filter data 114 based on deep image prior operations for artifact reduction in accordance with some aspects of the present disclosure. The system 100 includes an image processor 104 configured to generate image data, such as third image data 122, based on waveform return data 102.

In particular implementations, the waveform return data 102 are gathered prior to processing by the system 100. For example, the waveform return data 102 may be gathered by a seismic testing system and may represent seismic testing data samples from a particular area observed by the seismic testing system (referred to herein as an “observed area”). In other examples, the waveform return data 102 represent data gathered by other types of sensing systems, such as sonar, lidar, radar, or other active sensing systems (e.g., sensing systems that emit a waveform and sense waveform returns).

The waveform return data 102 may also include other data, such as geometry data indicating positions of one or more source devices and one or more receiver devices when each waveform return was generated. For example, the waveform return data 102 may represent seismic sampling events (also referred to as “shots”), where each of the waveform returns corresponds to a particular arrangement of source and receiver devices relative to the observed area. In this example, the waveform return data 102 represent multiple observational geometries, where each observational geometry corresponds to one or more source locations and one or more receiver locations relative to the observed area. In this example, the image data corresponds to or represents a reflectivity image based on at least a subset of the waveform return data 102.

In the example illustrated in FIG. 1A, the image processor 104 includes a preprocessor 106, a deep image prior (DIP) engine 110, a filter 116, and an artifact reduction engine 120. The preprocessor 106 is configured to generate (or otherwise obtain) first image data 108 that is based on a portion of the waveform return data 102. In some implementations, the preprocessor 106 may also perform one or more additional operations, such as selecting particular portions of the waveform return data 102 for processing, augmenting the waveform return data 102, or performing other data manipulations or transformations (e.g., coordinate transformations) to prepare selected waveform return data for processing by other portions of the image processor 104. In some implementations, the preprocessor 106 performs operations to increase the salience of artifacts in the first image data 108. For example, the preprocessor 106 may perform data transformation to emphasize parabolic shapes in the first image data 108 since migration swing artifacts show up as parabolic shapes in image data. As another example, the preprocessor 106 may perform data transformation to de-emphasize non-parabolic shapes (e.g., straight lines) in the first image data 108.

The first image data 108 is descriptive of an estimated solution to an inverse problem associated with the waveform return data 102. For example, the preprocessor 106 may be configured to use reverse time migration to generate the first image data 108. The first image data 108 includes artifacts due, at least in part, to the quantity of waveform return data used to generate the first image data 108. To illustrate, the first image data 108 may include migration swing artifacts due to the use of a subset of the waveform return data 102 that represents few sampling events (e.g., a small number of shots) to generate the first image data 108. As one example, the first image data 108 may be based on 5 to 10 shots in the context of seismic imaging.

The DIP engine 110 is configured to perform a plurality of DIP operations using the first image data 108 as an image prior to generate the filter data 114. In this context, DIP operations refer to optimizing a model (e.g., DIP model 112) to decrease differences between an output image (e.g., the filter data 114) of the DIP model 112 and an image prior (e.g., the first image data 108 in FIG. 1A). In general, the DIP model 112 is initialized for the DIP operations using arbitrary model parameters (e.g., link weights, convolution kernel parameters, or other parameters), where the arbitrary model parameters are not dependent on the content of the image prior. For example, the model parameters may be initialized using a randomized process. As another example, the model parameters may be preconfigured independently of the image prior. To illustrate, the same set of initial model parameters may be used for many different image priors. In the context of DIP operations, an “image prior” refers to an image that is used as a reference against which model output is compared during optimization of the DIP model 112.

In a particular implementation, the DIP operations are performed iteratively until a termination condition is satisfied. For example, the termination condition may include a count of iterations. As another example, the termination condition may be satisfied when the output of the DIP model 112 satisfies an error metric. In other examples, other termination conditions or combinations of termination conditions may be used. The termination condition is selected such that the DIP operations terminate before the output of the DIP model 112 fully matches the image prior, which is referred to as early stopping. DIP operations tend to cause the DIP model 112 to reproduce particular features of the image prior before other features. In particular, weaker features (e.g., high frequency and/or low intensity features) are generally reproduced later than stronger features (e.g., low frequency and/or high intensity features). Data transform operations may be used to further emphasize this tendency. For example, the preprocessor 106 may subject the waveform return data 102 to a Radon Transform or other image transformation process while generating the first image data 108 in order to emphasize artifacts in the first image data.

During a particular iteration of the DIP operations, the DIP engine 110 provides input data to the DIP model 112. The DIP model 112 may include any generative machine-learning model architecture, such as a deep convolutional neural network, and the input data is an arbitrary input image, such as a Gaussian noise image. The DIP model 112 generates output data based on the input data and the parameters of the DIP model 112. The DIP engine 110 compares the output data to the image prior (e.g., the first image data 108 in FIG. 1A) to generate one or more error metric values (also referred to herein as “error values”). The DIP engine 110 adjusts one or more of the values of the parameters of the DIP model 112 to reduce an error value associated with a subsequent iteration. For example, the DIP engine 110 may use a gradient descent approach to reduce the error value(s) from iteration to iteration.
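A single pass of the DIP iteration just described (a fixed arbitrary input, a forward pass through the model, comparison of the output to the image prior, and a parameter adjustment to reduce the error) can be sketched with a toy model. A real DIP engine would use a deep convolutional generative network; the elementwise linear model, function name, step size, and iteration counts below are illustrative assumptions only.

```python
import numpy as np

def dip_filter_data(image_prior, iterations=200, lr=0.1, seed=0):
    """Toy sketch of the DIP loop: a model (here a single elementwise
    linear layer standing in for a deep convolutional network) maps a
    fixed Gaussian-noise input toward the image prior; stopping after a
    limited number of iterations keeps a partial reconstruction."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=image_prior.shape)                 # arbitrary input image
    weights = rng.normal(scale=0.01, size=image_prior.shape)   # model parameters,
                                                               # independent of the prior
    for _ in range(iterations):            # early-stopped optimization loop
        output = weights * noise           # model forward pass
        error = output - image_prior       # compare output to the image prior
        weights -= lr * error * noise      # gradient step on squared error
    return weights * noise                 # partial reconstruction (filter data)
```

Running more iterations drives the output closer to the image prior, which is why the early stopping point controls how much of the prior the filter data reproduces.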

When the termination condition is satisfied, the DIP engine 110 generates output representing the filter data 114. In a particular implementation, the filter data 114 includes or corresponds to the output data generated by the DIP model 112 during the final iteration of the DIP operations. In other implementations, the termination condition may be set so that the final iteration of the DIP operations occurs later than a desired early stopping point. For example, the termination condition may be set based on a specific error value that is likely to be reached after the DIP model 112 has been optimized to reproduce some aspects of the image prior that are not associated with artifacts (e.g., high frequency and/or low intensity features, such as image features representing subsurface layer boundaries). In such implementations, the filter data 114 may include or correspond to output of the DIP model 112 from a prior iteration. For example, the model output from a fiftieth iteration may be used based on the termination condition being satisfied by the one hundredth iteration. In this example, using the model output from fifty iterations prior to satisfaction of the termination condition is merely one possibility. In other examples, a different number of iterations may be used to select the model output used as the filter data 114 after the termination condition is satisfied. In still other examples, criteria other than a specified count of iterations may be used to select the model output used as the filter data 114 after the termination condition is satisfied. To illustrate, the model output from several iterations may be compared to one another to select model output that is most suitable for use as the filter data 114.

Due to early stopping of the DIP operations, the filter data 114 is expected to represent artifacts of the image prior (e.g., the first image data 108) and to omit, or represent with less intensity, more detailed features of the image prior. For example, in the context of seismic imaging, the filter data 114 may include data representing migration swing artifacts and omit data representing subsurface layers and other geological features. In some implementations, the filter data 114 may include data representing subsurface layers and other geological features; however, the data representing the migration swing artifacts may be more strongly represented than the data representing subsurface layers and other geological features.

In the example illustrated in FIG. 1A, the filter 116 uses one or more data manipulation operations to modify the first image data 108 based on the filter data 114 to generate filtered image data (e.g., second image data 118A in FIG. 1A). For example, in a particular implementation, the filter 116 subtracts the filter data 114 from the first image data 108. In this example, the subtraction may be performed pixel-by-pixel or region-by-region, where a region represents two or more adjacent pixels.

In the example illustrated in FIG. 1A, the filtered image data (e.g., the second image data 118A) is provided as input to an artifact reduction engine 120. The artifact reduction engine 120 is configured to perform an artifact reduction process based on the second image data 118A to generate the third image data 122. In a particular implementation, the artifact reduction engine 120 includes one or more machine-learning-based generative models that are configured to generate image data (e.g., the third image data 122) using the second image data 118A as an initial estimate. As one non-limiting example, the artifact reduction engine 120 includes a score matching network that is trained to perform a gradient descent artifact reduction process, as described further with reference to FIGS. 5-10C. In other examples, the artifact reduction engine 120 includes one or more other generative machine-learning based models, such as a generative adversarial network (GAN) or a deep belief network.

Since the artifacts present in the first image data 108 have been removed or attenuated to form the second image data 118A, using the second image data 118A to initialize the artifact reduction engine 120 enables the artifact reduction engine 120 to converge more quickly to the third image data 122, improves the quality of the third image data 122, enables the artifact reduction engine 120 to converge to the third image data 122 using less waveform return data, or all three.

FIG. 1B is a diagram illustrating a second example of the system 100. The system 100 of FIG. 1B, like the system 100 of FIG. 1A, uses filter data 114 based on deep image prior operations for artifact reduction. The system 100 of FIG. 1B includes the preprocessor 106, the artifact reduction engine 120, the DIP engine 110, and the filter 116, each of which operates as described above with reference to FIG. 1A; however, the order of operations is different in FIG. 1B than in FIG. 1A.

In particular, in FIG. 1B, the preprocessor 106 generates the first image data 108 based on a selected subset of the waveform return data 102 and provides the first image data 108 to the artifact reduction engine 120. The artifact reduction engine 120 uses the first image data 108 as an initial estimate of a solution to an inverse problem and generates second image data 118B as output.

In the example illustrated in FIG. 1B, the second image data 118B is provided to the DIP engine 110, and the DIP engine 110 uses the second image data 118B as an image prior. In some implementations, a second preprocessor (not shown) modifies the second image data 118B output by the artifact reduction engine 120, and the modified version of the second image data 118B is provided to the DIP engine 110 for use as the image prior. For example, the second image data 118B may be modified to accentuate any artifacts that remain in the second image data 118B despite the artifact reduction operations performed by the artifact reduction engine 120.

The DIP engine 110 performs DIP operations (as described above), using the second image data 118B as an image prior, to generate the filter data 114. The filter 116 modifies the second image data 118B based on the filter data 114 to remove or reduce artifacts in the second image data 118B. The filter 116 generates the third image data 122, which the image processor 104 outputs for subsequent use (e.g., for display to a user).

The image processors 104 of FIG. 1A and FIG. 1B are able to generate images of similar quality to those generated by traditional techniques (e.g., reverse-time migration) in a manner that is more efficient in terms of the computer resources (e.g., processor time, power, and memory) required and in terms of the amount of waveform return data 102 needed.

FIG. 2 is a diagram illustrating examples of renderings of various image data generated by the system 100 of FIG. 1A or the system 100 of FIG. 1B in comparison to a high-quality image 208 generated using a traditional technique. In the particular example illustrated in FIG. 2, the images are seismic images based on waveform return data from a plurality of sampling events (referred to as “shots” in this context). The high-quality image 208 in the example of FIG. 2 was generated using reverse time migration based on 243 shots. An image 202 of FIG. 2 shows an example of filter data (e.g., the filter data 114) generated using DIP operations and an image prior based on reverse time migration of 8 shots.

An image 204 of FIG. 2 shows an example of output of an artifact reduction process without filtering to remove the filter data. In the example illustrated in FIG. 2, the artifact reduction process described with reference to FIGS. 5-9 was used to generate the image 204 and the artifact reduction process was initialized based on reverse time migration of 8 shots.

An image 206 of FIG. 2 shows an example of output of an artifact reduction process that was initialized using filtered image data. For example, the image 206 corresponds to an example of the third image data 122 of FIG. 1A, which is generated by the artifact reduction engine 120 based on the second image data 118A. Other than being initialized using filtered image data, the artifact reduction process used to generate the image 206 was the same as the artifact reduction process used to generate the image 204.

Comparison of the image 206 and the image 208 shows that the two images have similar levels of residual artifacts; however, many fewer shots were used to generate the image 206. Reverse time migration is a computationally demanding process, and its computational demand increases as the number of shots used increases. Thus, significant computing resources (e.g., processor time, memory, etc.) are saved by generating the image 206 rather than the image 208.

FIG. 3 illustrates a flow chart of a first example of a method 300 of using filter data based on deep image prior operations for artifact reduction in accordance with some aspects of the present disclosure. One or more operations described with reference to FIG. 3 may be performed by the system 100 of FIG. 1A, such as by the image processor 104 executing instructions corresponding to the preprocessor 106, the DIP engine 110, the filter 116, the artifact reduction engine 120, or a combination thereof.

The method 300 includes, at 302, obtaining (e.g., by one or more processors) first image data that is based on waveform return data and is descriptive of an estimated solution to an inverse problem associated with the waveform return data. For example, the preprocessor 106 of FIG. 1A generates the first image data 108 using reverse time migration or a generative machine learning process. In general, the first image data 108 is an approximation and as such includes artifacts. For example, the artifacts may be due, at least in part, to use of an insufficient data set to generate the first image data 108. In the same or different examples, the artifacts could be reduced by using a larger set of waveform return data 102, by using a different grid size, by using a more complex or more accurate model, or a combination thereof.

As one specific example, the first image data may be determined using a physics-based model. To illustrate, the first image data may be obtained by performing reverse time migration based on at least a subset of the waveform return data 102. In such examples, the waveform return data may include or correspond to waveform returns indicative of reflections, from a visually occluded structure, of one or more incident waves, and the image data represents an image of the structure.

The method 300 includes, at 304, performing (e.g., by the one or more processors) a plurality of deep image prior operations, using an image prior based on the first image data, to generate filter data. For example, the DIP engine 110 of FIG. 1A performs DIP operations using an image prior that is based on the first image data 108. The first image data 108 may be used directly as the image prior, or the first image data 108 may be modified prior to use as the image prior. As one example, the first image data 108 may be modified to accentuate artifacts in the first image data 108, and the modified first image data may be used as the image prior. The DIP operations include iteratively modifying parameters of the DIP model 112 based on an error value, where the error value is determined based on a comparison of output of the DIP model 112 and the image prior (e.g., the first image data 108). The DIP operations are terminated (e.g., using early stopping) such that the output of the DIP model 112 (e.g., the filter data 114) includes at least some of the artifacts of the first image data 108 but omits high frequency details of the first image data 108. For example, the first image data may represent an image and the filter data may represent a partial reconstruction of the image generated by early stopping the deep image prior operations such that the plurality of deep image prior operations includes fewer deep image prior operations than would be used to fully reconstruct the image.

Performing the deep image prior operations includes performing a plurality of iterations of machine-learning optimization. A particular iteration of the machine-learning optimization includes, for example, providing input data to a model to determine first output data based on values of parameters of the model, determining an error value based on a comparison of the first output data and the image prior, and adjusting one or more of the values of the parameters of the model to reduce an error value associated with a subsequent iteration of the machine-learning optimization. In some implementations, the values of the parameters of the model are set independently of the first image data for an initial iteration of the machine-learning optimization. For example, the initial values of the parameters of the model may be set using a randomized process or using predetermined values. In the same or different implementations, the input data for the initial iteration of the machine-learning optimization is independent of the first image data. For example, the input data may include or correspond to randomized image data, such as a Gaussian noise image.

The method 300 includes, at 306, modifying (e.g., by the one or more processors) the first image data based on the filter data to generate second image data. For example, the filter 116 of FIG. 1A modifies the first image data 108 based on the filter data 114 to generate the second image data 118A. To illustrate, the filter 116 may subtract the filter data 114 from the first image data 108 to generate the second image data 118A.

The method 300 includes, at 308, performing (e.g., by the one or more processors) an artifact reduction process based on the second image data to generate third image data. For example, the artifact reduction engine 120 may perform the artifact reduction process, such as the artifact reduction processes described with reference to FIGS. 5-10, to generate the third image data 122. To illustrate, when the waveform return data represent reflections, from a visually occluded structure, of one or more incident waves, the first, second, and third image data 108, 118A, and 122 may represent reflectivity images based on at least a subset of the waveform return data. In this illustrative example, the first image data 108 includes migration swing artifacts (and possibly other artifacts), and the migration swing artifacts are reduced in the third image data 122 relative to the first image data 108.

In some implementations, the artifact reduction process includes a plurality of iterations. In some such implementations, a particular iteration of the artifact reduction process includes, for example, using a machine-learning model (e.g., a score matching network) to determine a gradient associated with particular solution data and adjusting the particular solution data based on the gradient to generate updated solution data. In some implementations, the artifact reduction process may perform one or more further operations after the third image data is generated. For example, after the third image data is generated, the artifact reduction process may include providing the third image data as input to a physics-based model to generate fourth image data. To illustrate, the third image data may be used as a starting point for one or more iterations of reverse time migration. In some implementations, the method 300 may further include using the fourth image data to initiate a second plurality of iterations of the artifact reduction process to generate fifth image data. In such implementations, the artifacts are reduced in the fifth image data relative to the fourth image data.
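The iterative loop just described, in which a machine-learning model supplies a gradient for the current solution data and the solution is adjusted along that gradient, can be sketched as follows; `score_fn` stands in for a trained score matching network, and the step size and iteration count are illustrative assumptions.

```python
import numpy as np

def reduce_artifacts(initial_estimate, score_fn, iterations=200, step=0.1):
    """Sketch of the iterative artifact reduction loop: a learned model
    (score_fn) returns a gradient for the current solution data, and the
    solution data is adjusted based on that gradient each iteration."""
    solution = initial_estimate.copy()
    for _ in range(iterations):
        gradient = score_fn(solution)  # model-estimated gradient
        solution += step * gradient    # adjust the particular solution data
    return solution                    # updated solution data
```

With a toy score function that points toward a clean target image, the loop converges to that target, illustrating why a better initial estimate (e.g., filtered image data) lets the process converge in fewer iterations.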

The method 300 enables generation of similar quality images to those generated by traditional techniques (e.g., reverse-time migration) in a manner that is more efficient in terms of the computer resources (e.g., processor time, power, and memory) required and in terms of the quantity of waveform return data needed.

FIG. 4 illustrates a flow chart of a second example of a method 400 of using filter data based on deep image prior operations for artifact reduction in accordance with some aspects of the present disclosure. One or more operations described with reference to FIG. 4 may be performed by the system 100 of FIG. 1B, such as by the image processor 104 executing instructions corresponding to the preprocessor 106, the DIP engine 110, the filter 116, the artifact reduction engine 120, or a combination thereof.

The method 400 includes, at 402, obtaining (e.g., by one or more processors) first image data that is based on waveform return data and is descriptive of an estimated solution to an inverse problem associated with the waveform return data. For example, the preprocessor 106 of FIG. 1B generates the first image data 108 using reverse time migration or a generative machine learning process. In general, the first image data 108 is an approximation and as such includes artifacts. For example, the artifacts may be due, at least in part, to use of an insufficient data set to generate the first image data 108. In the same or different examples, the artifacts could be reduced by using a larger set of waveform return data 102, by using a different grid size, by using a more complex or more accurate model, or a combination thereof.

As one specific example, the first image data may be determined using a physics-based model. To illustrate, the first image data may be obtained by performing reverse time migration based on at least a subset of the waveform return data. In this example, the waveform return data may represent reflections, from a visually occluded structure, of one or more incident waves, and the first image data represents an image of the structure.

The method 400 includes, at 404, performing (e.g., by the one or more processors) an artifact reduction process based on the first image data to generate second image data. For example, the artifact reduction engine 120 may perform the artifact reduction process, such as the artifact reduction processes described with reference to FIGS. 5-10, to generate the second image data 118B. To illustrate, when the waveform return data represent reflections, from a visually occluded structure, of one or more incident waves, the first, second, and third image data 108, 118B, and 122 may represent reflectivity images based on at least a subset of the waveform return data. In this illustrative example, the first image data 108 includes migration swing artifacts (and possibly other artifacts), and the migration swing artifacts are reduced in the second image data 118B.

In some implementations, the artifact reduction process includes a plurality of iterations. In some such implementations, a particular iteration of the artifact reduction process includes, for example, using a machine-learning model (e.g., a score matching network) to determine a gradient associated with particular solution data and adjusting the particular solution data based on the gradient to generate updated solution data.

The method 400 includes, at 406, performing (e.g., by the one or more processors) a plurality of deep image prior operations, using an image prior that is based on the second image data, to generate filter data. For example, the DIP engine 110 of FIG. 1B may perform DIP operations using the second image data 118B as an image prior. In another example, the second image data 118B may be modified to accentuate artifacts in the second image data 118B, and the DIP engine 110 of FIG. 1B may use the modified second image data 118B as an image prior. In either of the preceding examples, the DIP operations include iteratively modifying parameters of the DIP model 112 based on an error value, where the error value is determined based on a comparison of output of the DIP model 112 and the image prior. The DIP operations are terminated (e.g., via early stopping) such that the output of the DIP model 112 (e.g., the filter data 114) includes at least some of the artifacts of the second image data 118B but omits certain other details of the second image data 118B. For example, the second image data may represent an image and the filter data may represent a partial reconstruction of the image generated by early stopping the deep image prior operations such that the plurality of deep image prior operations includes fewer deep image prior operations than would be used to fully reconstruct the image.

Performing the deep image prior operations includes performing a plurality of iterations of machine-learning optimization. A particular iteration of the machine-learning optimization includes, for example, providing input data to a model to determine first output data based on values of parameters of the model, determining an error value based on a comparison of the first output data and the image prior, and adjusting one or more of the values of the parameters of the model to reduce an error value associated with a subsequent iteration of the machine-learning optimization. In some implementations, the values of the parameters of the model are set independently of the second image data for an initial iteration of the machine-learning optimization. For example, the initial values of the parameters of the model may be set using a randomization process or using predetermined values. In the same or different implementations, the input data for the initial iteration of the machine-learning optimization is independent of the second image data. For example, the input data may include or correspond to randomized image data, such as a Gaussian noise image.
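As a rough illustration of the optimization loop described above, the following sketch substitutes a single weight matrix for the DIP model 112 and a fixed Gaussian-noise vector for the randomized input image. The sizes, learning rate, and iteration count are arbitrary assumptions, and the early-stopped output stands in for the filter data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Image prior (e.g., second image data), flattened to a vector.
prior = rng.standard_normal(64)

# Input data for the initial iteration is independent of the prior:
# a fixed Gaussian-noise input z.
z = rng.standard_normal(64)

# Toy stand-in for the DIP model: a single weight matrix whose initial
# values are set using a randomization process.
W = 0.01 * rng.standard_normal((64, 64))

init_error = 0.5 * np.mean((W @ z - prior) ** 2)  # error before optimization

learning_rate = 0.01
max_iters = 500  # early stopping: fewer iterations than full reconstruction
for _ in range(max_iters):
    output = W @ z                            # first output data this iteration
    residual = output - prior                 # comparison of output and prior
    grad_W = np.outer(residual, z) / z.size   # gradient of the mean squared error
    W -= learning_rate * grad_W               # adjust parameters to reduce error

filter_data = W @ z                           # partial reconstruction (filter data)
final_error = 0.5 * np.mean((filter_data - prior) ** 2)
```

Because the loop is stopped early, the output only partially matches the prior; in the actual system, the retained components would correspond to the artifacts the DIP model reconstructs first.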

The method 400 includes, at 408, modifying (e.g., by the one or more processors) the second image data based on the filter data to generate third image data. For example, the filter 116 of FIG. 1B modifies the second image data 118B based on the filter data 114 to generate the third image data 122. To illustrate, the filter 116 may subtract the filter data 114 from the second image data 118B to generate the third image data 122.
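The subtraction performed by the filter can be illustrated with a minimal sketch; the image values below are hypothetical.

```python
import numpy as np

# Hypothetical small images: the second image data and the DIP filter data.
second_image = np.array([[0.9, 0.2],
                         [0.4, 0.7]])
filter_data = np.array([[0.1, 0.2],
                        [0.0, 0.1]])

# The filter subtracts the estimated artifacts (filter data) from the
# second image data to produce the third image data.
third_image = second_image - filter_data
```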

The method 400 enables generation of images of similar quality to those generated by traditional techniques (e.g., reverse time migration) in a manner that is more efficient in terms of the computer resources (e.g., processor time, power, and memory) required and in terms of the amount of waveform return data needed.

FIG. 5 illustrates an example of a computer system 500 configured to use machine-learning to generate image data and to determine reliability of portions of the image data in accordance with some aspects of the present disclosure. For example, the computer system 500 is configured to initiate, perform, or control one or more of the operations described with reference to FIGS. 1A-4. The computer system 500 can be implemented as or incorporated into one or more of various other devices, such as a personal computer (PC), a tablet PC, a server computer, a personal digital assistant (PDA), a laptop computer, a desktop computer, a communications device, a wireless telephone, or any other machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single computer system 500 is illustrated, the term “system” includes any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions. While FIG. 5 illustrates one example of the computer system 500, other computer systems or computing architectures and configurations may be used for carrying out the operations disclosed herein.

The computer system 500 includes one or more processors 502. Each processor of the one or more processors 502 can include a single processing core or multiple processing cores that operate sequentially, in parallel, or sequentially at times and in parallel at other times. Each processor of the one or more processors 502 includes circuitry defining a plurality of logic circuits 504, working memory 506 (e.g., registers and cache memory), communication circuits, etc., which together enable the processor(s) 502 to control the operations performed by the computer system 500 and enable the processor(s) 502 to generate a useful result based on analysis of particular data and execution of specific instructions.

The processor(s) 502 are configured to interact with other components or subsystems of the computer system 500 via a bus 570. The bus 570 is illustrative of any interconnection scheme serving to link the subsystems of the computer system 500, external subsystems or devices, or any combination thereof. The bus 570 includes a plurality of conductors to facilitate communication of electrical and/or electromagnetic signals between the components or subsystems of the computer system 500. Additionally, the bus 570 includes one or more bus controllers or other circuits (e.g., transmitters and receivers) that manage signaling via the plurality of conductors and that cause signals sent via the plurality of conductors to conform to particular communication protocols.

In FIG. 5, the computer system 500 includes one or more output devices 530, one or more input devices 510, and one or more interface devices 520. Each of the output device(s) 530, the input device(s) 510, and the interface device(s) 520 can be coupled to the bus 570 via a port or connector, such as a Universal Serial Bus port, a digital visual interface (DVI) port, a serial ATA (SATA) port, a small computer system interface (SCSI) port, a high-definition media interface (HDMI) port, or another serial or parallel port. In some implementations, one or more of the output device(s) 530, the input device(s) 510, and/or the interface device(s) 520 is coupled to or integrated within a housing with the processor(s) 502 and the memory device(s) 540, in which case the connections to the bus 570 can be internal, such as via an expansion slot or other card-to-card connector. In other implementations, the processor(s) 502 and the memory device(s) 540 are integrated within a housing that includes one or more external ports, and one or more of the output device(s) 530, the input device(s) 510, and/or the interface device(s) 520 is coupled to the bus 570 via the external port(s).

Examples of the output device(s) 530 include display devices, speakers, printers, televisions, projectors, or other devices to provide output of data (e.g., solution data representing a solution to an inverse problem) in a manner that is perceptible by a user. Examples of the input device(s) 510 include buttons, switches, knobs, a keyboard 512, a pointing device 514, a biometric device, a microphone, a motion sensor, or another device to detect user input actions. The pointing device 514 includes, for example, one or more of a mouse, a stylus, a track ball, a pen, a touch pad, a touch screen, a tablet, another device that is useful for interacting with a graphical user interface, or any combination thereof. A particular device may be an input device 510 and an output device 530. For example, the particular device may be a touch screen.

The interface device(s) 520 are configured to enable the computer system 500 to communicate with one or more other devices 524 directly or via one or more networks 522. For example, the interface device(s) 520 may encode data in electrical and/or electromagnetic signals that are transmitted to the other device(s) 524 as control signals or packet-based communication using pre-defined communication protocols. As another example, the interface device(s) 520 may receive and decode electrical and/or electromagnetic signals that are transmitted by the other device(s) 524. To illustrate, the other device(s) 524 may include sensor(s) that generate the waveform return data 102. The electrical and/or electromagnetic signals can be transmitted wirelessly (e.g., via propagation through free space), via one or more wires, cables, optical fibers, or via a combination of wired and wireless transmission. The waveform return data 102 can include or correspond to data descriptive of seismic returns, electromagnetic returns, or other waveform returns.

The computer system 500 also includes the one or more memory devices 540. The memory device(s) 540 include any suitable computer-readable storage device depending on, for example, whether data access needs to be bi-directional or unidirectional, speed of data access required, memory capacity required, other factors related to data access, or any combination thereof. Generally, the memory device(s) 540 include some combination of volatile memory devices and non-volatile memory devices, though in some implementations, only one or the other may be present. Examples of volatile memory devices and circuits include registers, caches, latches, many types of random-access memory (RAM), such as dynamic random-access memory (DRAM), etc. Examples of non-volatile memory devices and circuits include hard disks, optical disks, flash memory, and certain types of RAM, such as resistive random-access memory (ReRAM). Other examples of both volatile and non-volatile memory devices can be used as well, or in the alternative, so long as such memory devices store information in a physical, tangible medium. Thus, the memory device(s) 540 include circuits and structures and are not merely signals or other transitory phenomena (i.e., are non-transitory media).

In the example illustrated in FIG. 5, the memory device(s) 540 store instructions 544 that are executable by the processor(s) 502 to perform various operations and functions. The instructions 544 include instructions to enable the various components and subsystems of the computer system 500 to operate, interact with one another, and interact with a user, such as a basic input/output system (BIOS) 546 and an operating system (OS) 548. Additionally, the instructions 544 include one or more applications 550, scripts, or other program code to enable the processor(s) 502 to perform the operations described herein. For example, in FIG. 5, the instructions 544 include instructions corresponding to the preprocessor 106, the DIP engine 110, the filter 116, the artifact reduction engine 120, and a graphical user interface (GUI) engine 552, which are executable by the processor(s) 502 to initiate, control, or perform one or more of the operations described with reference to FIGS. 1A-4. In the example illustrated in FIG. 5, the image processor 104 includes the instructions corresponding to the preprocessor 106, the DIP engine 110, the DIP model 112, the filter 116, the artifact reduction engine 120, and the GUI engine 552. In other examples, the GUI engine 552 is omitted or is distinct from the image processor 104.

The GUI engine 552 is configured to render image data generated by the image processor 104 (e.g., the third image data 122 of FIG. 1A or 1B) as an image 534 (e.g., a reflectivity image) of a graphical user interface (GUI) 532 for display via one of the output device(s) 530.

In some implementations, the artifact reduction engine 120, when executed by the processor(s) 502, causes the processor(s) 502 to initiate, perform, or control an iterative gradient descent artifact reduction process. The iterative gradient descent artifact reduction process uses the second image data 118A of FIG. 1A or the first image data 108 of FIG. 1B as an initial estimate (e.g., x0 in the pseudocode below).

In a particular implementation, an iteration of the gradient descent artifact reduction process includes determining, using a machine-learning model, a gradient associated with particular solution data (e.g., the initial estimate or solution data generated by a prior iteration of the gradient descent artifact reduction process). The iteration also includes adjusting the particular solution data based on the gradient to generate updated solution data.

As one example, the artifact reduction engine 120 may perform operations as described by the following pseudocode. In the following pseudocode, a “shot” refers to a single sampling event, measurements from which may be used to generate a corresponding set of waveform return data. In other contexts (e.g., other than seismic imaging) shots may be replaced with other sampling events (e.g., pings in the context of sonar).

x0 ← RTM(k)
for n iterations:
    for m artifact levels:
        αm = ε·λm/λmin
        for t = 1, ..., T Langevin steps:
            xt ← xt−1 + αm·S(xt−1, λm) + noise
    Optional or selective, for p shots:
        Perform physics-based modeling to generate revised xt

In the pseudocode above, RTM(k) represents an aggregation of k reverse time migration results (e.g., a combined image based on k images, each based on waveform return data for a respective shot), where k is an integer greater than one. For example, RTM(8) refers to solution data based on reverse time migration of 8 waveform return measurements of the waveform return data 102. The solution data generated by RTM(k) is used as an initial estimate (x0) of a solution to the inverse problem. Additionally, in the pseudocode above, n is a configurable counter having a value greater than or equal to one. Further, in the pseudocode above, m is a counter indicating a number of artifact levels over which operations are performed, where m is an integer greater than or equal to one and less than or equal to a count of the total number of shots represented in the waveform return data 102 or that are otherwise available. In this context, an “artifact level” refers to any metric that characterizes and distinguishes the number, distribution, and/or intensity of migration swing artifacts in a set of images. Generally, m is set within a range from a smallest number of shots (referred to as kmin) that can be used to generate acceptable solution data to a largest number of shots (referred to as kmax) that the image generator 118 is allowed to use. In a particular implementation, kmin is associated with a largest artifact level, λmax (e.g., strongest artifacts in the solution data) used to train or optimize the artifact reduction engine 120, and kmax is associated with a smallest artifact level, λmin (e.g., weakest artifacts in the solution data) used to train or optimize the artifact reduction engine 120.

Additionally, in the pseudocode above, αm is a step size parameter used by the Langevin operations in the inner loop and is annealed (e.g., iteratively decreased) based on a ratio of the current artifact level, λm, to the smallest artifact level, λmin, used to train or optimize the artifact reduction engine 120, adjusted by a configurable parameter ε.

Further, in the pseudocode above, an inner loop performs T iterations to modify the solution data x to decrease artifacts present in the solution data, where T is an integer greater than or equal to two. Generally, good results have been achieved with values of T on the order of 100 to 200. In each inner loop iteration, solution data xt is determined by adjusting prior solution data xt−1 based on a gradient αm·S(xt−1, λm), where S(xt−1, λm) is an output of the artifact reduction engine 120 based on the prior solution data xt−1 and the current artifact level, λm.
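The annealed loop structure of the pseudocode can be sketched as follows. A placeholder `score` function stands in for the trained score-matching network S, and the values chosen for ε, T, and the artifact levels are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def score(x, lam):
    # Placeholder for the score-matching network S(x, λm); here it simply
    # pulls x toward zero, scaled by the artifact level.
    return -x / lam**2

x = rng.standard_normal(32)      # x0, e.g., a flattened RTM(k) image
lambdas = [2.0, 1.0, 0.5]        # artifact levels, strongest to weakest
lam_min = lambdas[-1]
eps = 1e-2                       # configurable step size parameter ε
T = 100                          # Langevin steps per artifact level

for lam in lambdas:                              # "for m artifact levels"
    alpha = eps * lam / lam_min                  # annealed step size αm
    for _ in range(T):                           # "for t = 1, ..., T"
        noise = np.sqrt(2 * alpha) * rng.standard_normal(x.shape)
        x = x + alpha * score(x, lam) + noise    # Langevin update
```

With a trained network in place of the placeholder, each pass over a smaller artifact level refines the image with smaller, less noisy steps.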

In some implementations, after T iterations of the inner loop to generate solution data xt, the solution data may be adapted to satisfy particular constraints. In some implementations, the solution data is adapted to conform the solution data to specified expectations, such as physical constraints of the observed system. Such adaptation of the solution data is optional. To illustrate, such adaptation can be omitted entirely in some implementations. In some implementations, adaptation of the solution data is selectively performed. To illustrate, particular solution data may be adapted to conform to specified constraints based on characteristics of the solution data or based on output of one or more of the operations performed by the pseudocode.

In some implementations, the pseudocode above also includes one or more iterations of a physics-based model. For example, within the n iterations loop and after the T Langevin steps loop, the pseudocode may include performing one or more least-squares reverse time migration (LSRTM) iterations of p shots, where p is an integer greater than one. In some implementations, p is set equal to k. In some implementations, physics-based modelling after the T Langevin steps loop is optional.

To illustrate, physics-based modelling can be omitted from the gradient descent artifact reduction process in some implementations. In some implementations, physics-based modelling after the T Langevin steps loop is selectively performed. To illustrate, physics-based modelling may be performed based on characteristics of the solution data generated by the T Langevin steps loop or based on output of another operation performed by the pseudocode.

One benefit of using a gradient descent artifact reduction process based on the pseudocode above is that high-quality solutions (e.g., solutions with weaker or fewer artifacts) can be generated using fewer computing resources than would be used to generate similar high-quality solutions using reverse time migration alone. When filtered image data (e.g., the second image data 118A of FIG. 1A) is used as the initial estimate, x0, the gradient descent artifact reduction process is able to generate such high-quality images using fewer iterations or is able to generate even higher-quality images. Alternatively, fewer iterations of the gradient descent artifact reduction process can be used to generate a lower quality image (e.g., the second image data 118B), which can subsequently be modified using the filter data 114 to further reduce artifacts in the output.

FIGS. 9A-9C and 10A-10C show examples of results of the gradient descent artifact reduction process performed by the artifact reduction engine 120 based on simulated seismic sensing. FIGS. 9A and 10A show images that were each generated using only reverse time migration based on 243 shots. FIGS. 9B and 10B show images that were each generated using only reverse time migration based on 8 shots. Note that in FIG. 9B, significant visual artifacts are present in regions 902. Significant visual artifacts are also present in FIG. 10B in regions 1002. FIGS. 9C and 10C show images that were each generated using reverse time migration based on 8 shots and a gradient descent artifact reduction process based on the pseudocode above. To generate FIGS. 9C and 10C, n was set to 1, m was set to 1, T was set to 200, the noise term was zeroed out, and no LSRTM iterations were performed after the T Langevin steps loop.

Comparison of FIG. 9B with FIG. 9C shows that the gradient descent artifact reduction process significantly reduced the number and/or visual strength of the artifacts present in FIG. 9C as compared to the artifacts present in FIG. 9B. Likewise, comparison of FIG. 10B with FIG. 10C shows that the gradient descent artifact reduction process significantly reduced the number and/or visual strength of the artifacts present in FIG. 10C as compared to the artifacts present in FIG. 10B. For many purposes, FIGS. 9C and 10C may be useful substitutes for FIGS. 9A and 10A; however, generation of FIGS. 9C and 10C used significantly fewer computing resources (e.g., power, processor cycles, memory) than generation of FIGS. 9A and 10A. Further, significant time and expense can be saved by generating only the 8 shots used for FIGS. 9C and 10C rather than the 243 shots as used for FIGS. 9A and 10A.

Returning to FIG. 5, in a particular implementation, the artifact reduction engine 120 includes or corresponds to a score-matching network. The score-matching network may be trained, for example, by obtaining multiple sets of solution data based on waveform return measurements (e.g., the waveform return data 102). In this example, each set of solution data corresponds to a physics-based solution to the inverse problem, and each set of solution data is associated with a respective artifact level. In general, the artifact level of a set of solution data can be reduced by using waveform return data from a larger number of sampling events (e.g., more shots). Multiple sets of training data are generated based on the multiple sets of solution data. For example, each set of training data is based on one or more sets of solution data associated with a respective artifact level. To illustrate, a set of training data associated with a particular artifact level is determined based on differences between a set of solution data associated with a lowest artifact level and a set of solution data with the particular artifact level.
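The difference-based construction of training data described above might be sketched as follows; the arrays and values are hypothetical stand-ins for RTM solution data.

```python
import numpy as np

def training_set_for_level(x_lowest, solutions_at_level):
    # Pairs each solution at this artifact level with its difference from
    # the lowest-artifact-level solution, which serves as the training target.
    return [(x_k, x_lowest - x_k) for x_k in solutions_at_level]

x_lowest = np.array([1.0, 2.0, 3.0])      # e.g., RTM based on the most shots
solutions = [np.array([0.8, 2.1, 2.5]),   # e.g., RTM(k) for two shot subsets
             np.array([1.2, 1.9, 3.4])]
pairs = training_set_for_level(x_lowest, solutions)
```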

The score matching network is trained using the multiple sets of training data. For example, the score matching network may be trained by adjusting parameters of the score matching network to decrease a value of an objective function. In this example, the value of the objective function represents a weighted sum of values of objective functions for multiple different artifact levels. To illustrate, the objective function for a particular artifact level λk may be represented by:

$$\ell(\theta; \lambda_k) = \frac{1}{2}\,\mathbb{E}_{p_{\mathrm{data}}(x)}\,\mathbb{E}_{\mathrm{RTM}(k)}\!\left[\left\| s_\theta(\tilde{x}, \lambda_k) + \frac{x_{\mathrm{RTM}(K)} - x_{\mathrm{RTM}(k)}}{\lambda_k^2} \right\|_2^2\right]$$

In this example, the objective function used to train or optimize the score matching network may be represented by:

$$\mathcal{L}\!\left(\theta; \{\lambda_{k_i}\}_{i=1}^{L}\right) = \frac{1}{L}\sum_{i=1}^{L} \gamma(\lambda_{k_i})\,\ell(\theta; \lambda_{k_i})$$

In the objective function for a particular artifact level λk, pdata(x) refers to a probability density function of a data set x, 𝔼pdata(x) represents an expectation over pdata(x), and 𝔼RTM(k) represents an expectation over the set of images that are produced (e.g., an expectation over each allowed k value and over each set of specific images, from the set of available images, used to generate the k shots). sθ(x̃, λk) represents output generated by the artifact reduction engine 120 for a particular artifact level λk. Further, xRTM(K) represents solution data generated by a physics-based model (e.g., RTM) based on K shots, where K is a count of shots of the largest set of shots used for any solution in the training data, and xRTM(k) represents solution data generated by the physics-based model (e.g., RTM) based on k shots, where k is an integer greater than one and less than K. The term

$$\frac{x_{\mathrm{RTM}(K)} - x_{\mathrm{RTM}(k)}}{\lambda_k^2}$$

represents a value of an error metric that is based on a difference between solution data associated with the particular artifact level (corresponding to using k shots) and solution data associated with a lowest artifact level (corresponding to using K shots) of the multiple sets of solution data used to generate the training data.

In the objective function used to train or optimize the score matching network, L is the total count of artifact levels used, and γ is a function of a fitting parameter that is based on an error metric associated with the particular artifact level. In some implementations, γ(λ) is equal to λ².
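The per-level objective and its weighted combination can be sketched as follows, assuming γ(λ) = λ² and a generic callable `s_theta` in place of the score matching network; the data shapes are arbitrary.

```python
import numpy as np

def level_loss(s_theta, x_K, x_k, lam):
    # ℓ(θ; λk): half the mean squared norm of the score output plus the
    # scaled residual (xRTM(K) − xRTM(k)) / λk².
    target = (x_K - x_k) / lam**2
    return 0.5 * np.mean(np.sum((s_theta(x_k, lam) + target) ** 2, axis=-1))

def total_loss(s_theta, x_K, x_k_by_level, lambdas):
    # L(θ): average of γ(λ)-weighted per-level losses, with γ(λ) = λ².
    return sum(lam**2 * level_loss(s_theta, x_K, x_k, lam)
               for x_k, lam in zip(x_k_by_level, lambdas)) / len(lambdas)
```

A score function that exactly cancels the scaled residual drives this loss to zero, which is the sense in which training pulls the network's output toward the negative of the artifact-level-normalized difference.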

In a particular implementation, values of λ for a particular count of shots can be determined by determining multiple RTM(k) solutions for the same value of k. For example, from among a large set of waveform return data 102, multiple subsets of k shots can be selected. To illustrate, in the example of seismic sampling, different source and/or receiver positions can be selected for different subsets of k shots. The RTM(k) values for a particular value of k can be compared to RTM(K) to determine a mean square error for the artifact level associated with k. The value of λ for the particular count of shots k can be determined by fitting a parameterized function to the mean square error values with respect to a normalized count of shots (e.g., kmin/k).
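The curve-fitting step might be sketched as follows, assuming a power-law parameterization λ² = a·(kmin/k)^b fitted in log-log space; the MSE values and kmin are hypothetical.

```python
import numpy as np

# Hypothetical averaged MSE(RTM(K) − RTM(k)) values versus the normalized
# shot count kmin/k (kmin = 8 assumed here).
k_norm = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])          # kmin / k
mse = np.array([0.010, 0.041, 0.155, 0.352, 0.631, 0.985])  # mean square errors

# Fit the parameterized function λ²(kmin/k) = a · (kmin/k)^b with a
# least-squares line fit in log-log space (slope b, intercept log a).
b, log_a = np.polyfit(np.log(k_norm), np.log(mse), 1)
a = np.exp(log_a)

def lam_for_shots(k, k_min=8):
    # λ for a particular count of shots k, from the fitted curve.
    return np.sqrt(a * (k_min / k) ** b)
```

For this synthetic data the fitted exponent comes out near 2, so λ falls roughly linearly as the normalized shot count kmin/k shrinks.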

FIG. 8 is a diagram illustrating particular aspects of determining parameters of a gradient descent artifact reduction system. In particular, FIG. 8 illustrates a plot of data points with kmin/k on the x-axis and mean square error (MSE) of RTM(K)−RTM(k) on the y-axis. In FIG. 8, lines 802, 804, 808, and 810 represent data for a particular slice (e.g., a two-dimensional visualization) of an observed system. For example, a line 802 connects maximum MSE(RTM(K)−RTM(k)) values for the slice, a line 810 connects minimum MSE(RTM(K)−RTM(k)) values for the slice, a line 804 connects values of a mean of a distribution of values of the MSE(RTM(K)−RTM(k)) for the slice, and a line 808 represents a curve fit to values connected by the line 804. A line 806 represents a curve fit based on averaging across multiple slices. The line 806 represents values of λk².

In an alternative embodiment, dedicated hardware implementations, such as application specific integrated circuits, programmable logic arrays and other hardware devices, can be constructed to implement one or more of the operations described herein. Accordingly, the present disclosure encompasses software, firmware, and hardware implementations.

FIG. 6 is a flow chart of an example of a method of reducing artifacts in solution data (e.g., image data) associated with an inverse problem. One or more operations described with reference to FIG. 6 may be performed by the computer system 500 of FIG. 5, such as by the processor(s) 502 executing the instructions 544 (e.g., instructions corresponding to the artifact reduction engine 120).

The method 600 includes, at 602, determining, based on waveform return data, a first estimated solution to an inverse problem associated with the waveform return data. The first estimated solution includes artifacts due, at least in part, to the quantity of waveform return data used to determine the first estimated solution.

As a particular example, the waveform return data may include waveform return measurements associated with one or more seismic imaging shots. In this particular example, the first estimated solution may be determined by performing one or more iterations of reverse time migration based on the waveform return measurements associated with the one or more seismic imaging shots, and the artifacts correspond to migration swing artifacts. To illustrate, in this example, the first estimated solution may correspond to the first image data 108 of FIG. 1B.

In some implementations, the first solution data is determined by generating filter data using DIP operations and subtracting the filter data from prior estimated solution data. To illustrate, in such implementations, the first estimated solution corresponds to the second image data 118A of FIG. 1A.

The method 600 also includes, at 604, performing a plurality of iterations of a gradient descent artifact reduction process to generate second solution data. The artifacts are reduced in the second solution data relative to the first solution data. A particular iteration of the gradient descent artifact reduction process includes, at 606, determining, using a machine-learning model (e.g., a score-matching network of the artifact reduction engine 120), a gradient associated with particular solution data, and at 608, adjusting the particular solution data based on the gradient to generate updated solution data.

In some implementations, the method 600 also includes, after determining the second solution data, providing the second solution data as input to the physics-based model to generate third solution data. For example, a result generated by the Langevin steps of the pseudocode above may be subjected to one or more LSRTM iterations to further refine the solution. In some such implementations, the method 600 may further include, performing a second plurality of iterations of the gradient descent artifact reduction process to generate fourth solution data, where the artifacts are reduced in the fourth solution data relative to the third solution data.

In some implementations, during the particular iteration of the gradient descent artifact reduction process, the particular solution data is adjusted further based on a step size parameter (e.g., αm). Further, the step size parameter may be adjusted after one or more iterations. For example, the gradient descent artifact reduction process may include, after performing the plurality of iterations, adjusting the step size parameter and performing a second plurality of iterations using the adjusted step size parameter. In a particular aspect, the step size parameter is based on a ratio of an error metric associated with a count of waveform measurement events (e.g., shots) and an error metric associated with a specified minimum count of waveform measurement events.

In some implementations, the particular solution data generated via one or more iterations is adjusted further based on one or more specified constraints. For example, particular solution data may be modified to enforce expected features of the solution data (e.g., an image) based on prior knowledge or assumptions about the observed system. As another example, particular solution data may be modified to enforce physics-based or experience-based expectations, such as an arrangement of features in the solution data.

One benefit of the method 600 is that it facilitates generation of high-quality solutions (e.g., solutions with weaker or fewer artifacts) using fewer computing resources than would be used to generate similar high-quality solutions using RTM alone. Further, the method 600 uses less waveform return data (e.g., fewer seismic imaging shots) than would be used to generate similar high-quality solutions using RTM alone. As a result, time and resources expended to gather the waveform return data can be reduced.

FIG. 7 is a flow chart of an example of a method of training a machine-learning model of a gradient descent artifact reduction system. One or more operations described with reference to FIG. 7 may be performed by the computer system 500 of FIG. 5, such as by the processor(s) 502 executing the instructions 544.

The method 700 includes, at 702, obtaining a first batch of solution data. Each set of solution data of the first batch corresponds to a physics-based solution to an inverse problem and is associated with a respective artifact level. For example, the sets of solution data may be determined using reverse time migration (RTM). As one specific example, RTM is performed for each set of waveform return data (e.g., each shot in a seismic imaging context) of a plurality of sets of waveform return data that are available to be processed to generate RTM data.

In this example, a first set of solution data may include RTM data based on k1 sets of waveform return data, where k1 is an integer that is greater than one and indicates a quantity of waveform return data included in the first set. For example, the k1 sets of waveform return data may include waveform return data associated with k1 sampling events (e.g., k1 shots in a seismic imaging context), and the k1 sets of waveform return data do not include all of the waveform return data that are available for processing. Further, in this example, a second set of solution data may include RTM data based on k2 sets of waveform return data, where k2 is an integer that is greater than k1 and indicates a quantity of waveform return data included in the second set. The k2 sets of waveform return data also do not include all of the waveform return data that are available for processing. Similarly, other sets of solution data may include RTM data based on other subsets of the waveform return data. In some implementations, a randomized process is used to select the specific sets of waveform return data (from all of the waveform return data that are available for processing) used to determine a particular set of solution data. In the same or different implementations, a randomized process is used to select a k value that determines the quantity of waveform return data used for each set of solution data.
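The randomized selection described above may be sketched as follows; the function name and parameters are illustrative, and the RTM call that would consume the selected shots is omitted.

```python
import numpy as np

def sample_solution_data_inputs(num_available_shots, k_choices, rng=None):
    """Randomly pick a shot count k and a k-sized subset of the available
    waveform return data for one set of solution data (sketch)."""
    rng = np.random.default_rng(rng)
    # Randomized selection of the k value for this set of solution data.
    k = int(rng.choice(k_choices))
    # Randomized selection of k shots from all available shots, without
    # replacement, so that no set uses all of the available data.
    shot_ids = rng.choice(num_available_shots, size=k, replace=False)
    return k, sorted(shot_ids.tolist())
```

Each call yields the inputs for one set of solution data; repeating the call with different draws produces sets associated with different artifact levels, since fewer shots generally yield stronger artifacts.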

The method 700 includes, at 704, generating training data based on the first batch. The training data associated with a particular artifact level is determined based on differences between a set of solution data associated with a lowest artifact level and a set of solution data associated with the particular artifact level.
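The difference-based construction of training data can be sketched as below. Representing each batch as a mapping from artifact level to solution data is an assumption made for illustration.

```python
import numpy as np

def make_training_targets(solutions_by_level):
    """Build a training target for each artifact level (sketch).

    `solutions_by_level` maps artifact level -> solution data, where the
    lowest level is associated with the weakest artifacts. The target for
    a particular level is the difference between the lowest-artifact
    solution data and that level's solution data.
    """
    lowest = min(solutions_by_level)
    reference = solutions_by_level[lowest]
    return {
        level: reference - data          # difference defines the target
        for level, data in solutions_by_level.items()
        if level != lowest
    }
```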

The method 700 includes, at 706, training a score matching network using the training data. For example, training the score matching network may include adjusting parameters of the score matching network to decrease a value of an objective function, where the value of the objective function represents a weighted sum of values of objective functions for multiple different artifact levels. In this example, a value of an objective function for a particular artifact level of the multiple different artifact levels is weighted based on a fitting parameter, and the fitting parameter is based on an error metric associated with the particular artifact level. To illustrate, a value of the error metric may be determined based on a difference between solution data associated with the particular artifact level and solution data associated with a lowest artifact level of the multiple sets of solution data.
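The weighted objective described above may be sketched as follows. The quadratic fitting parameter `lam = error_metrics[m] ** 2` is an assumption borrowed from common denoising-score-matching practice; the disclosure states only that the fitting parameter is based on an error metric for the level.

```python
import numpy as np

def weighted_score_matching_loss(predictions, targets, error_metrics):
    """Objective for training the score matching network (sketch).

    `predictions[m]` is the network output for artifact level m,
    `targets[m]` is the corresponding training target, and
    `error_metrics[m]` is the error metric for that level.
    """
    total = 0.0
    for m, pred in predictions.items():
        lam = error_metrics[m] ** 2                       # fitting parameter
        per_level = np.mean((pred - targets[m]) ** 2)     # per-level objective
        total += lam * per_level                          # weighted sum
    return total
```

Training then adjusts the network parameters (e.g., by stochastic gradient descent) to decrease this value.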

In some implementations, the score matching network may be further trained based on one or more additional batches of training data. For example, the method 700 may include obtaining one or more second batches of solution data corresponding to physics-based solutions to the inverse problem, generating additional training data based on the one or more second batches, and training the score matching network using the additional training data.

Particular aspects of the disclosure are highlighted in the following Examples:

Example 1 includes a method including: obtaining, by one or more processors, first image data that is based on waveform return data and is descriptive of an estimated solution to an inverse problem associated with the waveform return data; performing, by the one or more processors, a plurality of deep image prior operations, using an image prior based on the first image data, to generate filter data; modifying, by the one or more processors, the first image data based on the filter data to generate second image data; and performing, by the one or more processors, an artifact reduction process based on the second image data to generate third image data.

Example 2 includes the method of Example 1, wherein the waveform return data represent reflections, from a visually occluded structure, of one or more incident waves, and wherein the third image data represents an enhanced image of the structure as compared to the first image data.

Example 3 includes the method of Example 1 or the method of Example 2, wherein obtaining the first image data includes determining the first image data using a physics-based model.

Example 4 includes the method of any of Examples 1 to 3, wherein obtaining the first image data includes performing reverse time migration based on at least a subset of the waveform return data.

Example 5 includes the method of any of Examples 1 to 4, wherein the estimated solution to the inverse problem includes a reflectivity image.

Example 6 includes the method of any of Examples 1 to 5, wherein the first image data represents an image and the filter data represents a partial reconstruction of the image generated by early stopping the deep image prior operations such that the plurality of deep image prior operations includes fewer deep image prior operations than would be used to fully reconstruct the image.

Example 7 includes the method of any of Examples 1 to 6, wherein determining the filter data based on the first image data includes performing a plurality of iterations of machine-learning optimization, wherein a particular iteration of the machine-learning optimization includes: providing input data to a model to determine first output data based on values of parameters of the model; determining an error value based on a comparison of the first output data and the image prior; and adjusting one or more of the values of the parameters of the model to reduce an error value associated with a subsequent iteration of the machine-learning optimization.
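The early-stopped optimization of Examples 6 and 7 can be sketched as below. A tiny linear "network" stands in for the convolutional network a practical deep image prior would use; the fixed random input, network shape, learning rate, and iteration count are all illustrative assumptions.

```python
import numpy as np

def deep_image_prior_filter(image_prior, num_iterations=10, lr=0.1, seed=0):
    """Early-stopped deep image prior fit producing filter data (sketch).

    Stopping well before convergence yields a partial reconstruction of
    `image_prior`, which serves as the filter data.
    """
    rng = np.random.default_rng(seed)
    n = image_prior.size
    z = rng.standard_normal(n)          # fixed input, independent of the image
    W = np.zeros((n, n))                # parameters set independently of the image
    target = image_prior.ravel()
    for _ in range(num_iterations):     # early stopping: far fewer iterations
        out = W @ z                     # first output data from current parameters
        err = out - target              # comparison with the image prior
        W -= lr * np.outer(err, z) / n  # adjust parameters to reduce the error
    return (W @ z).reshape(image_prior.shape)
```

Because the fit is stopped early, the returned filter data captures the smooth, easily learned structure of the image prior while omitting fine detail and artifacts, which is what makes it useful for the subsequent modification step.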

Example 8 includes the method of Example 7, wherein an initial iteration of the machine-learning optimization includes setting the values of the parameters of the model independently of the first image data.

Example 9 includes the method of Example 7 or the method of Example 8, wherein the input data for an initial iteration of the machine-learning optimization is independent of the first image data.

Example 10 includes the method of any of Examples 1 to 9, further including, before modifying the first image data based on the filter data, performing one or more data transform operations to emphasize artifacts in the filter data.

Example 11 includes the method of any of Examples 1 to 10, wherein modifying the first image data based on the filter data includes subtracting the filter data from the first image data.

Example 12 includes the method of any of Examples 1 to 11, further including, after determining the third image data, providing the third image data as input to a physics-based model to generate fourth image data.

Example 13 includes the method of Example 12, further including performing a second plurality of iterations of the artifact reduction process to generate fifth image data, wherein artifacts are reduced in the fifth image data relative to the fourth image data.

Example 14 includes the method of any of Examples 1 to 13, wherein the artifact reduction process includes a plurality of iterations and a particular iteration of the artifact reduction process includes: determining, using a machine-learning model, a gradient associated with particular solution data; and adjusting the particular solution data based on the gradient to generate updated solution data.

Example 15 includes the method of Example 14, wherein, during the particular iteration of the artifact reduction process, the particular solution data is adjusted further based on a step size parameter.

Example 16 includes the method of Example 15, further including, after performing the plurality of iterations of the artifact reduction process: adjusting the step size parameter; and performing a second plurality of iterations of the artifact reduction process.

Example 17 includes the method of any of Examples 14 to 16, wherein, during the particular iteration of the artifact reduction process, the particular solution data is adjusted to satisfy a specified constraint.

Example 18 includes the method of any of Examples 14 to 17, wherein the machine-learning model corresponds to a score-matching network.

Example 19 includes a system including: one or more processors configured to: obtain first image data that is based on waveform return data and is descriptive of an estimated solution to an inverse problem associated with the waveform return data; perform a plurality of deep image prior operations, using an image prior based on the first image data, to generate filter data; modify the first image data based on the filter data to generate second image data; and perform an artifact reduction process based on the second image data to generate third image data, wherein artifacts are reduced in the third image data relative to the first image data.

Example 20 includes the system of Example 19, wherein the waveform return data represent reflections, from a visually occluded structure, of one or more incident waves, and wherein the third image data represents an enhanced image of the structure as compared to the first image data.

Example 21 includes the system of Example 19 or the system of Example 20, wherein obtaining the first image data includes determining the first image data using a physics-based model.

Example 22 includes the system of any of Examples 19 to 21, wherein obtaining the first image data includes performing reverse time migration based on at least a subset of the waveform return data.

Example 23 includes the system of any of Examples 19 to 22, wherein the estimated solution to the inverse problem includes a reflectivity image.

Example 24 includes the system of any of Examples 19 to 23, wherein the first image data represents an image and the filter data represents a partial reconstruction of the image.

Example 25 includes the system of any of Examples 19 to 24, wherein determining the filter data based on the first image data includes performing a plurality of iterations of machine-learning optimization, wherein a particular iteration of the machine-learning optimization includes: providing input data to a model to determine first output data based on values of parameters of the model; determining an error value based on a comparison of the first output data and the image prior; and adjusting one or more of the values of the parameters of the model to reduce an error value associated with a subsequent iteration of the machine-learning optimization.

Example 26 includes the system of Example 25, wherein an initial iteration of the machine-learning optimization includes setting the values of the parameters of the model independently of the first image data.

Example 27 includes the system of Example 25 or the system of Example 26, wherein the input data for an initial iteration of the machine-learning optimization is independent of the first image data.

Example 28 includes the system of any of Examples 25 to 27, wherein the one or more processors are further configured to, before modifying the first image data based on the filter data, perform one or more data transform operations to emphasize artifacts in the filter data.

Example 29 includes the system of any of Examples 19 to 28, wherein modifying the first image data based on the filter data includes subtracting the filter data from the first image data.

Example 30 includes the system of any of Examples 19 to 29, wherein the one or more processors are further configured to, after determining the third image data, provide the third image data as input to a physics-based model to generate fourth image data.

Example 31 includes the system of Example 30, wherein the one or more processors are further configured to perform a second plurality of iterations of the artifact reduction process to generate fifth image data, wherein artifacts are reduced in the fifth image data relative to the fourth image data.

Example 32 includes the system of any of Examples 19 to 31, wherein the artifact reduction process includes a plurality of iterations and a particular iteration of the artifact reduction process includes: determining, using a machine-learning model, a gradient associated with particular solution data; and adjusting the particular solution data based on the gradient to generate updated solution data.

Example 33 includes the system of Example 32, wherein, during the particular iteration of the artifact reduction process, the particular solution data is adjusted further based on a step size parameter.

Example 34 includes the system of Example 33, wherein the one or more processors are further configured to, after performing the plurality of iterations of the artifact reduction process: adjust the step size parameter; and perform a second plurality of iterations of the artifact reduction process.

Example 35 includes the system of any of Examples 32 to 34, wherein, during the particular iteration of the artifact reduction process, the particular solution data is adjusted to satisfy a specified constraint.

Example 36 includes the system of any of Examples 32 to 35, wherein the machine-learning model corresponds to a score-matching network.

Example 37 includes a computer-readable storage device storing instructions that, when executed by one or more processors, cause the one or more processors to: obtain first image data that is based on waveform return data and is descriptive of an estimated solution to an inverse problem associated with the waveform return data; perform a plurality of deep image prior operations, using an image prior based on the first image data, to generate filter data; modify the first image data based on the filter data to generate second image data; and perform an artifact reduction process based on the second image data to generate third image data, wherein artifacts are reduced in the third image data relative to the first image data.

Example 38 includes the computer-readable storage device of Example 37, wherein the waveform return data represent reflections, from a visually occluded structure, of one or more incident waves, and wherein the third image data represents an enhanced image of the structure as compared to the first image data.

Example 39 includes the computer-readable storage device of Example 37 or the computer-readable storage device of Example 38, wherein obtaining the first image data includes determining the first image data using a physics-based model.

Example 40 includes the computer-readable storage device of any of Examples 37 to 39, wherein obtaining the first image data includes performing reverse time migration based on at least a subset of the waveform return data.

Example 41 includes the computer-readable storage device of any of Examples 37 to 40, wherein the estimated solution to the inverse problem includes a reflectivity image.

Example 42 includes the computer-readable storage device of any of Examples 37 to 41, wherein the first image data represents an image and the filter data represents a partial reconstruction of the image.

Example 43 includes the computer-readable storage device of any of Examples 37 to 42, wherein determining the filter data based on the first image data includes performing a plurality of iterations of machine-learning optimization, wherein a particular iteration of the machine-learning optimization includes: providing input data to a model to determine first output data based on values of parameters of the model; determining an error value based on a comparison of the first output data and the image prior; and adjusting one or more of the values of the parameters of the model to reduce an error value associated with a subsequent iteration of the machine-learning optimization.

Example 44 includes the computer-readable storage device of Example 43, wherein an initial iteration of the machine-learning optimization includes setting the values of the parameters of the model independently of the first image data.

Example 45 includes the computer-readable storage device of Example 43 or the computer-readable storage device of Example 44, wherein the input data for an initial iteration of the machine-learning optimization is independent of the first image data.

Example 46 includes the computer-readable storage device of any of Examples 43 to 45, wherein the instructions further cause the one or more processors to, before modifying the first image data based on the filter data, perform one or more data transform operations to emphasize artifacts in the filter data.

Example 47 includes the computer-readable storage device of any of Examples 37 to 46, wherein modifying the first image data based on the filter data includes subtracting the filter data from the first image data.

Example 48 includes the computer-readable storage device of any of Examples 37 to 47, wherein the instructions further cause the one or more processors to, after determining the third image data, provide the third image data as input to a physics-based model to generate fourth image data.

Example 49 includes the computer-readable storage device of Example 48, wherein the instructions further cause the one or more processors to perform a second plurality of iterations of the artifact reduction process to generate fifth image data, wherein artifacts are reduced in the fifth image data relative to the fourth image data.

Example 50 includes the computer-readable storage device of any of Examples 37 to 49, wherein the artifact reduction process includes a plurality of iterations and a particular iteration of the artifact reduction process includes: determining, using a machine-learning model, a gradient associated with particular solution data; and adjusting the particular solution data based on the gradient to generate updated solution data.

Example 51 includes the computer-readable storage device of Example 50, wherein, during the particular iteration of the artifact reduction process, the particular solution data is adjusted further based on a step size parameter.

Example 52 includes the computer-readable storage device of Example 51, wherein the instructions further cause the one or more processors to, after performing the plurality of iterations of the artifact reduction process: adjust the step size parameter; and perform a second plurality of iterations of the artifact reduction process.

Example 53 includes the computer-readable storage device of any of Examples 50 to 52, wherein, during the particular iteration of the artifact reduction process, the particular solution data is adjusted to satisfy a specified constraint.

Example 54 includes the computer-readable storage device of any of Examples 50 to 53, wherein the machine-learning model corresponds to a score-matching network.

Example 55 includes a method including: obtaining, by one or more processors, first image data that is based on waveform return data and is descriptive of an estimated solution to an inverse problem associated with the waveform return data; performing, by the one or more processors, an artifact reduction process based on the first image data to generate second image data; performing, by the one or more processors, a plurality of deep image prior operations, using an image prior based on the first image data or the second image data, to generate filter data; and modifying, by the one or more processors, the second image data based on the filter data to generate third image data.

Example 56 includes the method of Example 55, wherein the waveform return data represent reflections, from a visually occluded structure, of one or more incident waves, and wherein the third image data represents an enhanced image of the structure as compared to the first image data.

Example 57 includes the method of Example 55 or the method of Example 56, wherein obtaining the first image data includes determining the first image data using a physics-based model.

Example 58 includes the method of any of Examples 55 to 57, wherein obtaining the first image data includes performing reverse time migration based on at least a subset of the waveform return data.

Example 59 includes the method of any of Examples 55 to 58, wherein the estimated solution to the inverse problem includes a reflectivity image.

Example 60 includes the method of any of Examples 55 to 59, wherein the first image data represents an image and the filter data represents a partial reconstruction of the image generated by early stopping the deep image prior operations such that the plurality of deep image prior operations includes fewer deep image prior operations than would be used to fully reconstruct the image.

Example 61 includes the method of any of Examples 55 to 60, wherein determining the filter data based on the first image data includes performing a plurality of iterations of machine-learning optimization, wherein a particular iteration of the machine-learning optimization includes: providing input data to a model to determine first output data based on values of parameters of the model; determining an error value based on a comparison of the first output data and the image prior; and adjusting one or more of the values of the parameters of the model to reduce an error value associated with a subsequent iteration of the machine-learning optimization.

Example 62 includes the method of Example 61, wherein an initial iteration of the machine-learning optimization includes setting the values of the parameters of the model independently of the first image data.

Example 63 includes the method of Example 61 or the method of Example 62, wherein the input data for an initial iteration of the machine-learning optimization is independent of the first image data.

Example 64 includes the method of any of Examples 55 to 63, further including, before modifying the second image data based on the filter data, performing one or more data transform operations to emphasize artifacts in the filter data.

Example 65 includes the method of any of Examples 55 to 64, wherein modifying the second image data based on the filter data includes subtracting the filter data from the second image data.

Example 66 includes the method of any of Examples 55 to 65, further including, after determining the third image data, providing the third image data as input to a physics-based model to generate fourth image data.

Example 67 includes the method of Example 66, further including performing a second plurality of iterations of the artifact reduction process to generate fifth image data, wherein artifacts are reduced in the fifth image data relative to the fourth image data.

Example 68 includes the method of any of Examples 55 to 67, wherein the artifact reduction process includes a plurality of iterations and a particular iteration of the artifact reduction process includes: determining, using a machine-learning model, a gradient associated with particular solution data; and adjusting the particular solution data based on the gradient to generate updated solution data.

Example 69 includes the method of Example 68, wherein, during the particular iteration of the artifact reduction process, the particular solution data is adjusted further based on a step size parameter.

Example 70 includes the method of Example 69, further including, after performing the plurality of iterations of the artifact reduction process: adjusting the step size parameter; and performing a second plurality of iterations of the artifact reduction process.

Example 71 includes the method of any of Examples 68 to 70, wherein, during the particular iteration of the artifact reduction process, the particular solution data is adjusted to satisfy a specified constraint.

Example 72 includes the method of any of Examples 68 to 71, wherein the machine-learning model corresponds to a score-matching network.

Example 73 includes a system including: one or more processors configured to: obtain first image data that is based on waveform return data and is descriptive of an estimated solution to an inverse problem associated with the waveform return data; perform an artifact reduction process based on the first image data to generate second image data; perform a plurality of deep image prior operations, using an image prior based on the first image data or the second image data, to generate filter data; and modify the second image data based on the filter data to generate third image data.

Example 74 includes the system of Example 73, wherein the waveform return data represent reflections, from a visually occluded structure, of one or more incident waves, and wherein the third image data represents an enhanced image of the structure as compared to the first image data.

Example 75 includes the system of Example 73 or the system of Example 74, wherein obtaining the first image data includes determining the first image data using a physics-based model.

Example 76 includes the system of any of Examples 73 to 75, wherein obtaining the first image data includes performing reverse time migration based on at least a subset of the waveform return data.

Example 77 includes the system of any of Examples 73 to 76, wherein the estimated solution to the inverse problem includes a reflectivity image.

Example 78 includes the system of any of Examples 73 to 77, wherein the first image data represents an image and the filter data represents a partial reconstruction of the image.

Example 79 includes the system of any of Examples 73 to 78, wherein determining the filter data based on the first image data includes performing a plurality of iterations of machine-learning optimization, wherein a particular iteration of the machine-learning optimization includes: providing input data to a model to determine first output data based on values of parameters of the model; determining an error value based on a comparison of the first output data and the image prior; and adjusting one or more of the values of the parameters of the model to reduce an error value associated with a subsequent iteration of the machine-learning optimization.

Example 80 includes the system of Example 79, wherein an initial iteration of the machine-learning optimization includes setting the values of the parameters of the model independently of the first image data.

Example 81 includes the system of Example 79 or the system of Example 80, wherein the input data for an initial iteration of the machine-learning optimization is independent of the first image data.

Example 82 includes the system of any of Examples 79 to 81, wherein the one or more processors are further configured to, before modifying the second image data based on the filter data, perform one or more data transform operations to emphasize artifacts in the filter data.

Example 83 includes the system of any of Examples 73 to 82, wherein modifying the second image data based on the filter data includes subtracting the filter data from the second image data.

Example 84 includes the system of any of Examples 73 to 83, wherein the one or more processors are further configured to, after determining the third image data, provide the third image data as input to a physics-based model to generate fourth image data.

Example 85 includes the system of Example 84, wherein the one or more processors are further configured to perform a second plurality of iterations of the artifact reduction process to generate fifth image data, wherein artifacts are reduced in the fifth image data relative to the fourth image data.

Example 86 includes the system of any of Examples 73 to 85, wherein the artifact reduction process includes a plurality of iterations and a particular iteration of the artifact reduction process includes: determining, using a machine-learning model, a gradient associated with particular solution data; and adjusting the particular solution data based on the gradient to generate updated solution data.

Example 87 includes the system of Example 86, wherein, during the particular iteration of the artifact reduction process, the particular solution data is adjusted further based on a step size parameter.

Example 88 includes the system of Example 87, wherein the one or more processors are further configured to, after performing the plurality of iterations of the artifact reduction process: adjust the step size parameter; and perform a second plurality of iterations of the artifact reduction process.

Example 89 includes the system of any of Examples 86 to 88, wherein, during the particular iteration of the artifact reduction process, the particular solution data is adjusted to satisfy a specified constraint.

Example 90 includes the system of any of Examples 86 to 89, wherein the machine-learning model corresponds to a score-matching network.

Example 91 includes a computer-readable storage device storing instructions that, when executed by one or more processors, cause the one or more processors to: obtain first image data that is based on waveform return data and is descriptive of an estimated solution to an inverse problem associated with the waveform return data; perform an artifact reduction process based on the first image data to generate second image data; perform a plurality of deep image prior operations, using an image prior based on the first image data or the second image data, to generate filter data; and modify the second image data based on the filter data to generate third image data.

Example 92 includes the computer-readable storage device of Example 91, wherein the waveform return data represent reflections, from a visually occluded structure, of one or more incident waves, and wherein the third image data represents an enhanced image of the structure as compared to the first image data.

Example 93 includes the computer-readable storage device of Example 91 or the computer-readable storage device of Example 92, wherein obtaining the first image data includes determining the first image data using a physics-based model.

Example 94 includes the computer-readable storage device of any of Examples 91 to 93, wherein obtaining the first image data includes performing reverse time migration based on at least a subset of the waveform return data.

Example 95 includes the computer-readable storage device of any of Examples 91 to 94, wherein the estimated solution to the inverse problem includes a reflectivity image.

Example 96 includes the computer-readable storage device of any of Examples 91 to 95, wherein the first image data represents an image and the filter data represents a partial reconstruction of the image.

Example 97 includes the computer-readable storage device of any of Examples 91 to 96, wherein determining the filter data based on the first image data includes performing a plurality of iterations of machine-learning optimization, wherein a particular iteration of the machine-learning optimization includes: providing input data to a model to determine first output data based on values of parameters of the model; determining an error value based on a comparison of the first output data and the image prior; and adjusting one or more of the values of the parameters of the model to reduce an error value associated with a subsequent iteration of the machine-learning optimization.

Example 98 includes the computer-readable storage device of Example 97, wherein an initial iteration of the machine-learning optimization includes setting the values of the parameters of the model independently of the first image data.

Example 99 includes the computer-readable storage device of Example 97 or the computer-readable storage device of Example 98, wherein the input data for an initial iteration of the machine-learning optimization is independent of the first image data.

Example 100 includes the computer-readable storage device of any of Examples 97 to 99, wherein the instructions further cause the one or more processors to, before modifying the second image data based on the filter data, perform one or more data transform operations to emphasize artifacts in the filter data.

Example 101 includes the computer-readable storage device of any of Examples 91 to 100, wherein modifying the second image data based on the filter data includes subtracting the filter data from the second image data.
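The subtraction of Example 101, optionally preceded by a transform that emphasizes artifacts in the filter data per Example 100, can be illustrated as follows. The particular transform shown (amplifying low-magnitude components of the filter data) is a hypothetical example; the gain and threshold values are illustrative assumptions, not claimed parameters.

```python
import numpy as np

def emphasize_artifacts(filter_data, gain=2.0, threshold=0.1):
    # Hypothetical transform (Example 100): amplify the low-magnitude
    # components of the filter data, treated here as artifact energy.
    emphasized = filter_data.copy()
    mask = np.abs(emphasized) < threshold
    emphasized[mask] *= gain
    return emphasized

def apply_filter(image_data, filter_data, transform=None):
    # Optionally transform the filter data before use, then subtract it
    # from the image data (Example 101).
    if transform is not None:
        filter_data = transform(filter_data)
    return image_data - filter_data
```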

Example 102 includes the computer-readable storage device of any of Examples 91 to 101, wherein the instructions further cause the one or more processors to, after determining the third image data, provide the third image data as input to a physics-based model to generate fourth image data.

Example 103 includes the computer-readable storage device of Example 102, wherein the instructions further cause the one or more processors to perform a second plurality of iterations of the artifact reduction process to generate fifth image data, wherein artifacts are reduced in the fifth image data relative to the fourth image data.

Example 104 includes the computer-readable storage device of any of Examples 91 to 103, wherein the artifact reduction process includes a plurality of iterations and a particular iteration of the artifact reduction process includes: determining, using a machine-learning model, a gradient associated with particular solution data; and adjusting the particular solution data based on the gradient to generate updated solution data.

Example 105 includes the computer-readable storage device of Example 104, wherein, during the particular iteration of the artifact reduction process, the particular solution data is adjusted further based on a step size parameter.

Example 106 includes the computer-readable storage device of Example 105, wherein the instructions further cause the one or more processors to, after performing the plurality of iterations of the artifact reduction process: adjust the step size parameter; and perform a second plurality of iterations of the artifact reduction process.

Example 107 includes the computer-readable storage device of any of Examples 104 to 106, wherein, during the particular iteration of the artifact reduction process, the particular solution data is adjusted to satisfy a specified constraint.

Example 108 includes the computer-readable storage device of any of Examples 104 to 107, wherein the machine-learning model corresponds to a score-matching network.
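The iterative artifact reduction of Examples 104 to 108 (a model-derived gradient, an update scaled by a step size, a constraint on the solution, and step-size adjustment between pluralities of iterations) can be sketched as follows. As a non-limiting illustration, the analytic score of a unit Gaussian stands in for a trained score-matching network; the step sizes, decay factor, and clipping bounds are illustrative assumptions.

```python
import numpy as np

def score_model(x):
    # Stand-in for a trained score-matching network (Example 108):
    # the analytic score of a unit Gaussian, grad log p(x) = -x.
    return -x

def artifact_reduction(x, n_iters=50, step=0.1, lower=-1.0, upper=1.0):
    for _ in range(n_iters):
        grad = score_model(x)           # gradient from the ML model (Example 104)
        x = x + step * grad             # update scaled by a step size (Example 105)
        x = np.clip(x, lower, upper)    # adjust to satisfy a constraint (Example 107)
    return x

def annealed_artifact_reduction(x, rounds=3, step=0.2, decay=0.5):
    # Example 106: after a plurality of iterations, adjust the step
    # size and perform a second plurality of iterations.
    for _ in range(rounds):
        x = artifact_reduction(x, step=step)
        step *= decay
    return x
```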

The systems and methods illustrated herein may be described in terms of functional block components, screen shots, optional selections, and various processing steps. It should be appreciated that such functional blocks may be realized by any number of hardware and/or software components configured to perform the specified functions. For example, the system may employ various integrated circuit components, e.g., memory elements, processing elements, logic elements, look-up tables, and the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. Similarly, the software elements of the system may be implemented with any programming or scripting language such as C, C++, C#, Java, JavaScript, VBScript, Macromedia Cold Fusion, COBOL, Microsoft Active Server Pages, assembly, PERL, PHP, AWK, Python, Visual Basic, SQL Stored Procedures, PL/SQL, any UNIX shell script, and extensible markup language (XML) with the various algorithms being implemented with any combination of data structures, objects, processes, routines or other programming elements. Further, it should be noted that the system may employ any number of techniques for data transmission, signaling, data processing, network control, and the like.

The systems and methods of the present disclosure may be embodied as a customization of an existing system, an add-on product, a processing apparatus executing upgraded software, a standalone system, a distributed system, a method, a data processing system, a device for data processing, and/or a computer program product. Accordingly, any portion of the system or a module or a decision model may take the form of a processing apparatus executing code, an internet based (e.g., cloud computing) embodiment, an entirely hardware embodiment, or an embodiment combining aspects of the internet, software, and hardware. Furthermore, the system may take the form of a computer program product on a computer-readable storage medium or device having computer-readable program code (e.g., instructions) embodied or stored in the storage medium or device. Any suitable computer-readable storage medium or device may be utilized, including hard disks, CD-ROM, optical storage devices, magnetic storage devices, and/or other storage media. As used herein, a “computer-readable storage medium” or “computer-readable storage device” is not a signal.

Systems and methods may be described herein with reference to screen shots, block diagrams and flowchart illustrations of methods, apparatuses (e.g., systems), and computer media according to various aspects. It will be understood that each functional block of a block diagram and flowchart illustration, and combinations of functional blocks in block diagrams and flowchart illustrations, respectively, can be implemented by computer program instructions.

Computer program instructions may be loaded onto a computer or other programmable data processing apparatus to produce a machine, such that the instructions that execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a computer-readable memory or device that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.

Accordingly, functional blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions, combinations of steps for performing the specified functions, and program instruction means for performing the specified functions. It will also be understood that each functional block of the block diagrams and flowchart illustrations, and combinations of functional blocks in the block diagrams and flowchart illustrations, can be implemented by either special purpose hardware-based computer systems which perform the specified functions or steps, or suitable combinations of special purpose hardware and computer instructions.

Although the disclosure may include one or more methods, it is contemplated that it may be embodied as computer program instructions on a tangible computer-readable medium, such as a magnetic or optical memory or a magnetic or optical disk/disc. All structural, chemical, and functional equivalents to the elements of the above-described exemplary embodiments that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the present claims. Moreover, it is not necessary for a device or method to address each and every problem sought to be solved by the present disclosure, for it to be encompassed by the present claims. Furthermore, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the claims. As used herein, the terms “comprises,” “comprising,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.

Changes and modifications may be made to the disclosed embodiments without departing from the scope of the present disclosure. These and other changes or modifications are intended to be included within the scope of the present disclosure, as expressed in the following claims.

Claims

1. A method comprising:

obtaining, by one or more processors, first image data that is based on waveform return data and is descriptive of an estimated solution to an inverse problem associated with the waveform return data;
performing, by the one or more processors, a plurality of deep image prior operations, using an image prior based on the first image data, to generate filter data;
modifying, by the one or more processors, the first image data based on the filter data to generate second image data; and
performing, by the one or more processors, an artifact reduction process based on the second image data to generate third image data.

2. The method of claim 1, wherein the waveform return data represents reflections, from a visually occluded structure, of one or more incident waves, and wherein the third image data represents an enhanced image of the structure as compared to the first image data.

3. The method of claim 1, wherein obtaining the first image data includes determining the first image data using a physics-based model.

4. The method of claim 1, wherein obtaining the first image data includes performing reverse time migration based on at least a subset of the waveform return data.

5. The method of claim 1, wherein the estimated solution to the inverse problem comprises a reflectivity image.

6. The method of claim 1, wherein the first image data represents an image and the filter data represents a partial reconstruction of the image generated by early stopping the deep image prior operations such that the plurality of deep image prior operations includes fewer deep image prior operations than would be used to fully reconstruct the image.

7. The method of claim 1, wherein determining the filter data based on the first image data includes performing a plurality of iterations of machine-learning optimization, wherein a particular iteration of the machine-learning optimization includes:

providing input data to a model to determine first output data based on values of parameters of the model;
determining an error value based on a comparison of the first output data and the image prior; and
adjusting one or more of the values of the parameters of the model to reduce an error value associated with a subsequent iteration of the machine-learning optimization.

8. The method of claim 7, wherein an initial iteration of the machine-learning optimization includes setting the values of the parameters of the model independently of the first image data.

9. The method of claim 7, wherein the input data for an initial iteration of the machine-learning optimization is independent of the first image data.

10. The method of claim 1, further comprising, before modifying the first image data based on the filter data, performing one or more data transform operations to emphasize artifacts in the filter data.

11. The method of claim 1, wherein modifying the first image data based on the filter data includes subtracting the filter data from the first image data.

12. The method of claim 1, further comprising, after determining the third image data:

providing the third image data as input to a physics-based model to generate fourth image data; and
performing a second plurality of iterations of the artifact reduction process to generate fifth image data, wherein artifacts are reduced in the fifth image data relative to the fourth image data.

13. The method of claim 1, wherein the artifact reduction process includes a plurality of iterations and a particular iteration of the artifact reduction process includes:

determining, using a machine-learning model, a gradient associated with particular solution data; and
adjusting the particular solution data based on the gradient to generate updated solution data.

14. The method of claim 13, wherein, during the particular iteration of the artifact reduction process, the particular solution data is adjusted further based on a step size parameter.

15. The method of claim 14, further comprising, after performing the plurality of iterations of the artifact reduction process:

adjusting the step size parameter; and
performing a second plurality of iterations of the artifact reduction process.

16. The method of claim 13, wherein, during the particular iteration of the artifact reduction process, the particular solution data is adjusted to satisfy a specified constraint.

17. The method of claim 13, wherein the machine-learning model corresponds to a score-matching network.

18. A system comprising:

one or more processors configured to: obtain first image data that is based on waveform return data and is descriptive of an estimated solution to an inverse problem associated with the waveform return data; perform a plurality of deep image prior operations, using an image prior based on the first image data, to generate filter data; modify the first image data based on the filter data to generate second image data; and perform an artifact reduction process based on the second image data to generate third image data, wherein artifacts are reduced in the third image data relative to the first image data.

19. The system of claim 18, wherein obtaining the first image data includes performing reverse time migration based on at least a subset of the waveform return data.

20. The system of claim 18, wherein determining the filter data based on the first image data includes performing a plurality of iterations of machine-learning optimization, wherein a particular iteration of the machine-learning optimization includes:

providing input data to a model to determine first output data based on values of parameters of the model;
determining an error value based on a comparison of the first output data and the image prior; and
adjusting one or more of the values of the parameters of the model to reduce an error value associated with a subsequent iteration of the machine-learning optimization.

21. The system of claim 20, wherein an initial iteration of the machine-learning optimization includes setting the values of the parameters of the model independently of the first image data.

22. The system of claim 21, wherein the input data for an initial iteration of the machine-learning optimization is independent of the first image data.

23. The system of claim 18, wherein modifying the first image data based on the filter data includes subtracting the filter data from the first image data.

24. A computer-readable storage device storing instructions that, when executed by one or more processors, cause the one or more processors to:

obtain first image data that is based on waveform return data and is descriptive of an estimated solution to an inverse problem associated with the waveform return data;
perform a plurality of deep image prior operations, using an image prior based on the first image data, to generate filter data;
modify the first image data based on the filter data to generate second image data; and
perform an artifact reduction process based on the second image data to generate third image data, wherein artifacts are reduced in the third image data relative to the first image data.
Patent History
Publication number: 20230111937
Type: Application
Filed: Oct 12, 2022
Publication Date: Apr 13, 2023
Inventors: Alexandru Ardel (Austin, TX), Elad Liebman (Austin, TX), Mrinal Sen (Austin, TX), Georgios Alexandros Dimakis (Austin, TX), Yash Gandhi (Austin, TX), Sriram Ravula (Plano, TX), Dimitri Voytan (Austin, TX)
Application Number: 18/046,043
Classifications
International Classification: G06T 5/00 (20060101); G06T 5/50 (20060101);