METHODS AND SYSTEM FOR SHADING A TWO-DIMENSIONAL ULTRASOUND IMAGE
Various methods and systems are provided for shading a 2D ultrasound image, generated from ultrasound data, using a gradient determined from scalar values of the ultrasound image data. As one example, a method includes correlating image values of a dataset acquired with an ultrasound imaging system to height values; determining a gradient of the height values; applying shading to a 2D image generated from the dataset using the determined gradient; and displaying the shaded 2D image.
The present application is a continuation of U.S. Non-Provisional patent application Ser. No. 15/587,733, entitled “METHODS AND SYSTEM FOR SHADING A TWO-DIMENSIONAL ULTRASOUND IMAGE”, filed on May 5, 2017. The entire contents of the above-listed application are incorporated herein by reference for all purposes.
FIELD
Embodiments of the subject matter disclosed herein relate to applying shading to a two-dimensional ultrasound image.
BACKGROUND
An ultrasound imaging system may be used to acquire images of a patient's anatomy. Ultrasound imaging systems may acquire a dataset which is then used to generate a 2D image that a medical professional may view and use to diagnose a patient. However, the dataset may include 2D scalar data (e.g., intensity values, power component values, or the like), which results in a flat 2D image that may be more difficult to interpret, thereby increasing the difficulty of diagnosing a patient using the flat 2D image. For example, more complex body structures may be difficult to recognize via a 2D image. As one example, 2D color Doppler images of different body structures may be especially difficult to use for diagnosis.
BRIEF DESCRIPTION
In one embodiment, a method comprises correlating image values of a dataset acquired with an ultrasound imaging system to height values; determining a gradient of the height values; applying shading to a 2D image generated from the dataset using the determined gradient; and displaying the shaded 2D image.
It should be understood that the brief description above is provided to introduce in simplified form a selection of concepts that are further described in the detailed description. It is not meant to identify key or essential features of the claimed subject matter, the scope of which is defined uniquely by the claims that follow the detailed description. Furthermore, the claimed subject matter is not limited to implementations that solve any disadvantages noted above or in any part of this disclosure.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
The present invention will be better understood from reading the following description of non-limiting embodiments, with reference to the attached drawings, wherein below:
The following description relates to various embodiments of shading a 2D ultrasound image using a gradient determined from height values correlated to image values of an ultrasound imaging dataset (which may be 1D, 2D, or 3D) used to generate the 2D ultrasound image. An ultrasound system, such as the system shown in
Turning now to
The ultrasound imaging system 100 also includes a processor 116 to control the transmit beamformer 101, the transmitter 102, the receiver 108, and the receive beamformer 110. The processor 116 is in electronic communication (e.g., communicatively connected) with the probe 106. For purposes of this disclosure, the term “electronic communication” may be defined to include both wired and wireless communications. The processor 116 may control the probe 106 to acquire data. The processor 116 controls which of the elements 104 are active and the shape of a beam emitted from the probe 106. The processor 116 is also in electronic communication with the display device 118, and the processor 116 may process the data (e.g., ultrasound data) into images for display on the display device 118. The processor 116 may include a central processor (CPU), according to an embodiment. According to other embodiments, the processor 116 may include other electronic components capable of carrying out processing functions, such as a digital signal processor, a field-programmable gate array (FPGA), or a graphic board. According to other embodiments, the processor 116 may include multiple electronic components capable of carrying out processing functions. For example, the processor 116 may include two or more electronic components selected from a list of electronic components including: a central processor, a digital signal processor, a field-programmable gate array, and a graphic board. According to another embodiment, the processor 116 may also include a complex demodulator (not shown) that demodulates the RF data and generates raw data. In another embodiment, the demodulation can be carried out earlier in the processing chain.

The processor 116 is adapted to perform one or more processing operations according to a plurality of selectable ultrasound modalities on the data. In one example, the data may be processed in real-time during a scanning session as the echo signals are received by receiver 108 and transmitted to processor 116. For the purposes of this disclosure, the term “real-time” is defined to include a procedure that is performed without any intentional delay. For example, an embodiment may acquire images at a real-time rate of 7-20 frames/sec. The ultrasound imaging system 100 may acquire 2D data of one or more planes at a significantly faster rate. However, it should be understood that the real-time frame-rate may be dependent on the length of time that it takes to acquire each frame of data for display. Accordingly, when acquiring a relatively large amount of data, the real-time frame-rate may be slower. Thus, some embodiments may have real-time frame-rates that are considerably faster than 20 frames/sec while other embodiments may have real-time frame-rates slower than 7 frames/sec.

The data may be stored temporarily in a buffer (not shown) during a scanning session and processed in less than real-time in a live or off-line operation. Some embodiments of the invention may include multiple processors (not shown) to handle the processing tasks that are handled by processor 116 according to the exemplary embodiment described hereinabove. For example, a first processor may be utilized to demodulate and decimate the RF signal while a second processor may be used to further process the data prior to displaying an image. It should be appreciated that other embodiments may use a different arrangement of processors.
The ultrasound imaging system 100 may continuously acquire data at a frame-rate of, for example, 10 Hz to 30 Hz (e.g., 10 to 30 frames per second). Images generated from the data may be refreshed at a similar frame-rate on display device 118. Other embodiments may acquire and display data at different rates. For example, some embodiments may acquire data at a frame-rate of less than 10 Hz or greater than 30 Hz depending on the size of the frame and the intended application. A memory 120 is included for storing processed frames of acquired data. In an exemplary embodiment, the memory 120 is of sufficient capacity to store at least several seconds' worth of frames of ultrasound data. The frames of data are stored in a manner to facilitate retrieval thereof according to their order or time of acquisition. The memory 120 may comprise any known data storage medium.
In various embodiments of the present invention, data may be processed in different mode-related modules by the processor 116 (e.g., B-mode, Color Doppler, M-mode, Color M-mode, spectral Doppler, Elastography, TVI, strain, strain rate, and the like) to form 2D or 3D data. For example, one or more modules may generate B-mode, color Doppler, M-mode, color M-mode, spectral Doppler, Elastography, TVI, strain, strain rate, and combinations thereof, and the like. As one example, the one or more modules may process color Doppler data, which may include traditional color flow Doppler, power Doppler, HD flow, and the like. The image lines and/or frames are stored in memory and may include timing information indicating a time at which the image lines and/or frames were stored in memory. The modules may include, for example, a scan conversion module to perform scan conversion operations to convert the acquired images from beam space coordinates to display space coordinates. A video processor module may be provided that reads the acquired images from a memory and displays an image in real time while a procedure (e.g., ultrasound imaging) is being performed on a patient. The video processor module may include a separate image memory, and the ultrasound images may be written to the image memory in order to be read and displayed by display device 118.
In various embodiments of the present invention, one or more components of ultrasound imaging system 100 may be included in a portable, handheld ultrasound imaging device. For example, display device 118 and user interface 115 may be integrated into an exterior surface of the handheld ultrasound imaging device, which may further contain processor 116 and memory 120. Probe 106 may comprise a handheld probe in electronic communication with the handheld ultrasound imaging device to collect raw ultrasound data. Transmit beamformer 101, transmitter 102, receiver 108, and receive beamformer 110 may be included in the same or different portions of the ultrasound imaging system 100. For example, transmit beamformer 101, transmitter 102, receiver 108, and receive beamformer 110 may be included in the handheld ultrasound imaging device, the probe, and combinations thereof.
As explained above, in one example, the ultrasound imaging system 100 may be used to acquire an ultrasound imaging dataset (e.g., data). The imaging dataset may include image data, such as intensity data (for B-mode), color or power data (for color Doppler), or the like. For example, an imaging dataset acquired via a color Doppler ultrasound mode (such as color Doppler, power Doppler, Turbulence, Velocity-Power, and Velocity-Turbulence modes) may contain color Doppler data that has a power component. In one example, the imaging dataset may be a 2D dataset (e.g., B-mode data). In another example, the imaging dataset may be a 1D dataset (e.g., motion mode). In yet another example, the imaging dataset may be a 3D/4D dataset (e.g., 3D dataset with an image plane created by slicing through the volume). The processor 116 may generate a 2D ultrasound image from the acquired imaging dataset. For example, the processor 116 may convert the image data of the imaging dataset to brightness (e.g., intensity) values and display an ultrasound image of the acquired dataset as a 2D B-mode image. In another example, the processor 116 may convert the image data of the imaging dataset to color values having a power component and display an ultrasound image of the acquired dataset as a 2D color Doppler image. As explained further below with reference to
As explained further below, each image value, f(x,y), may be correlated to a height value, h(x,y). Specifically, each image value at each pixel 204 may be converted to a corresponding height value using a relationship (e.g., model). In one example, the relationship for converting each image value to a height value may be a linear relationship. In another example, the relationship or model for converting each image value to a height value may be a monotonic function, such as a logarithmic or sigmoid function, that takes the image value as input and outputs a height value. In some embodiments, the height values (h(x,y), h(x+1, y), etc.) may be used to create a height map or relief image that represents a height field of the 2D image. In this way, the image values may be represented as height values. A gradient may then be calculated for each height value, h(x,y), at each pixel 204.
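For purposes of illustration only, a minimal sketch of such an image-value-to-height mapping is given below, assuming the image values are held as a 2D NumPy array normalized to [0, 1]; the function name, mapping modes, and constants are hypothetical and are not part of the disclosed method.

```python
import numpy as np

def image_values_to_heights(image_values, mode="linear", scale=1.0):
    """Map scalar image values (e.g., B-mode intensity or Doppler power)
    to height values, forming a height map (relief image).

    image_values: 2D float array normalized to [0, 1].
    mode: the assumed relationship; may be linear or any monotonic
    function, such as a logarithm or sigmoid.
    """
    v = np.clip(image_values.astype(np.float64), 0.0, 1.0)
    if mode == "linear":
        heights = scale * v
    elif mode == "log":
        # Logarithmic mapping compresses large values; log1p keeps it defined at 0.
        heights = scale * np.log1p(v) / np.log(2.0)
    elif mode == "sigmoid":
        # Sigmoid mapping emphasizes mid-range image values.
        heights = scale / (1.0 + np.exp(-10.0 * (v - 0.5)))
    else:
        raise ValueError(f"unknown mapping mode: {mode}")
    return heights
```

Any of these mappings is monotonic, so larger image values always correspond to larger height values in the resulting relief image.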
An example equation for computing the gradient, including a surface normal, is shown by Equation 1:

n(x,y) = ∇h(x,y) / |∇h(x,y)|, with ∇h(x,y) = ((h(x+1,y) − h(x−1,y))/2, (h(x,y+1) − h(x,y−1))/2, r)   (Equation 1)
In Equation 1, n(x,y) is the surface normal vector at position (x,y) with unit length, ∇ is the gradient, h(x,y) is the scalar height value function (which represents a scalar valued function, such as B-mode intensity, Doppler power, or the like) at position (x,y), and r is a constant defining the roughness of the resulting gradient field. The normal of the height value h(x,y) at the position (x,y) is computed by normalizing the gradient at this position. The gradient is defined by the partial derivatives in the direction of x and y. The x component is computed with central differences in the x direction. The y component is computed with central differences in the y direction. The z component is the constant r. In this way, determining a gradient for each height value h(x,y) is based on a difference in height values of adjacent pixels 204 in the 2D image 200 (e.g., h(x−1,y), h(x+1,y), h(x,y−1), and h(x,y+1)).
In normalizing the gradient (to unit length), the influence of the roughness constant varies for different gradient lengths. For example, if the position (x,y) is in a homogeneous region (e.g., a region of the image with little or no variation between height values of adjacent pixels), the x and y components of the gradient are small and r is the dominating factor, thereby resulting in a normal vector pointing approximately in the z direction, n=(0,0,1). In another example, if the position (x,y) is in a greatly varying area (e.g., a region of the image with larger variation between height values of adjacent pixels), the x and/or the y component of the gradient is larger and r has less influence, thereby resulting in the normal pointing slightly upwards in the direction of the change. As one example, by representing the image values as height values, computing the gradient of the 2D image includes computing the surface normal of the height field consisting of the height values.
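A minimal sketch of the gradient and surface normal computation described above is given below, again assuming NumPy arrays; the use of np.gradient (central differences in the interior, one-sided differences at the image borders) and the default roughness value are illustrative assumptions rather than a definitive implementation of Equation 1.

```python
import numpy as np

def height_field_normals(h, r=1.0):
    """Compute a unit surface normal per pixel from a height map h.

    The x and y components of the gradient are central differences of the
    height values; the z component is the roughness constant r. Normalizing
    the resulting vector yields the surface normal n(x, y) of Equation 1.
    """
    h = h.astype(np.float64)
    # np.gradient returns derivatives along axis 0 (rows = y) and axis 1 (columns = x).
    dh_dy, dh_dx = np.gradient(h)
    nz = np.full_like(h, r)                      # constant z component (roughness)
    n = np.stack((dh_dx, dh_dy, nz), axis=-1)    # shape (H, W, 3)
    # Normalize each per-pixel vector to unit length.
    length = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(length, 1e-12)
```

Consistent with the behavior described above, in a homogeneous region the x and y components vanish and the returned normal approaches (0, 0, 1), while in a strongly varying region the normal tilts in the direction of the change.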
As shown at 206 in
At 302, the method includes accessing ultrasound data, either previously or currently acquired via an ultrasound probe (such as probe 106 shown in
At 304, the method includes selecting a portion of the 2D image desired for shading (e.g., relief shading). As one example, for a Doppler 2D image, selecting the portion of the 2D image desired for shading may include selecting the color portion (e.g., representing blood flow) of the 2D image. As another example, selecting the portion of the 2D image desired for shading may include selecting an entirety of the 2D image (e.g., such as selecting an entire area of a 2D B-mode image). After selecting the desired portion of the 2D image for shading, the method may optionally continue to 306 to filter the image values of the 2D image. For example, the method at 306 may include having the processor apply a filter to the image values for each pixel in order to smooth the appearance of the 2D image. In some embodiments, filtering of the image values may not be performed and method 300 may instead proceed directly from 304 to 308.
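The disclosure does not mandate any particular filter at 306; purely as a non-limiting illustration, the sketch below assumes a Gaussian low-pass filter from SciPy with a hypothetical sigma value, applied to the image values of the selected portion before the height conversion.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_image_values(image_values, sigma=1.5):
    """Optionally smooth the scalar image values prior to height conversion.

    A Gaussian low-pass filter is assumed here only for illustration; a
    larger sigma gives a smoother relief and, in turn, a softer shading result.
    """
    return gaussian_filter(image_values.astype(np.float64), sigma=sigma)
```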
At 308, the method includes interpreting the 2D image data as a relief image (e.g., height map) by correlating the image values of the 2D image to height values. For example, the processor of the ultrasound imaging system may convert the image values of the imaging dataset to height values. As explained above with reference to
At 310, the method includes determining a gradient of the determined height values. As explained above with reference to
After determining the gradient of the height values for each pixel, the method continues to 312 to apply shading to the 2D image using the surface normal of the determined gradient. The processor may calculate shading values for the pixels of the 2D image based on the gradient surface normals. For example, the processor may make a logical determination of the shading value for each pixel based on a shading model stored in memory that is a function of the gradient surface normal. The processor may then apply the determined shading value to the pixel to form a shaded 2D image. As explained above with reference to
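As a non-limiting illustration of the shading at 312, the sketch below applies a simple ambient-plus-diffuse-plus-specular (Blinn-Phong style) model as a function of the per-pixel surface normals; the light direction, coefficients, and the assumption that the base image is an (H, W, 3) RGB array in [0, 1] are hypothetical choices, and any of the shading models mentioned herein could be substituted.

```python
import numpy as np

def shade_image(base_rgb, normals, light_dir=(0.3, 0.3, 1.0),
                ambient=0.3, diffuse=0.7, specular=0.2, shininess=16.0):
    """Shade a 2D image using per-pixel unit surface normals.

    base_rgb: (H, W, 3) array in [0, 1] (e.g., color Doppler pixel colors).
    normals:  (H, W, 3) unit-normal field from the height-map gradient.
    """
    l = np.asarray(light_dir, dtype=np.float64)
    l = l / np.linalg.norm(l)
    view = np.array([0.0, 0.0, 1.0])                  # viewer looks along +z
    half = (l + view) / np.linalg.norm(l + view)      # Blinn-Phong half vector

    n_dot_l = np.clip(normals @ l, 0.0, None)         # (H, W) diffuse factor
    n_dot_h = np.clip(normals @ half, 0.0, None)      # (H, W) specular factor

    intensity = ambient + diffuse * n_dot_l
    highlight = specular * n_dot_h ** shininess

    shaded = base_rgb * intensity[..., None] + highlight[..., None]
    return np.clip(shaded, 0.0, 1.0)
```

Because the normals tilt only where the height values (and hence the underlying image values) change, the shading darkens and brightens the image along those changes, giving the flat 2D image its 3D appearance.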
In some embodiments, the gradient described above may also be used for edge enhancement and segmentation techniques that may then be applied to the 2D ultrasound image.
At 314, the method includes displaying the shaded 2D image. As one example, displaying the shaded 2D image may include displaying the 2D image via a display device (such as display device 118 shown in
As seen in
Similar to as described above with reference to
In this way, 2D ultrasound images may be shaded using a gradient determined from height values that are correlated to image data (e.g., scalar image values) of an imaging dataset acquired with a medical imaging system. Shading 2D images in this way may produce 2D images that have a 3D appearance and make complex structures more recognizable to a viewer/medical professional. As a result, these shaded 2D images may be easier to use for diagnosis. Additionally, in the case of Doppler ultrasound images, the image data correlated to the height values for determining the gradient may include a power component of color Doppler image data. By utilizing the power component (as compared to using a different image value, such as intensity for B-mode images) for this gradient shading technique, the resulting shaded images may be crisper and clearer. For example, while the gradient shading technique described herein may be applied to other imaging modes, such as B-mode images, the increased number of structures in B-mode images may make the resulting images less clear as compared to Doppler images. Thus, the gradient shading method described herein may be advantageous for Doppler images. The technical effect of correlating image values of a dataset acquired with an ultrasound imaging system to height values, determining a gradient of the height values, applying shading to a 2D image generated from the dataset using the determined gradient, and displaying the shaded 2D image is the production of a shaded 2D image that has a 3D appearance and is therefore easier to use for diagnosis.
As one embodiment, a method comprises: correlating image values of a dataset acquired with an ultrasound imaging system to height values; determining a gradient of the height values; applying shading to a 2D image generated from the dataset using the determined gradient; and displaying the shaded 2D image. In a first example of the method, the dataset includes color data. In a second example of the method, the dataset is a color Doppler dataset and the image values include a power component from the color Doppler dataset. In a third example of the method, the dataset is a B-mode dataset and the image values include intensity values from the B-mode dataset. In one example, correlating image values to the height values includes, for each pixel of the 2D image generated from the dataset, converting an image value to a corresponding height value using a relationship between the image value and the height value. Additionally, determining the gradient of the height values may include determining a separate gradient for each height value of the height values, where each height value is associated with a corresponding pixel, and where the separate gradient for each height value is based on a difference between height values of pixels adjacent to the corresponding pixel within the 2D image. Further, each separate gradient for each height value may include a surface normal, and applying shading to the 2D image may include applying shading to the 2D image using the surface normal of each corresponding pixel of the 2D image.
In another example of the method, displaying the shaded 2D image includes displaying the shaded 2D image via a display screen of the ultrasound imaging system. The method may further comprise accessing the dataset from a memory of the ultrasound imaging system. Additionally, the method may include filtering the image values prior to correlating the image values to height values and determining the gradient. In yet another example, applying shading to the 2D image using the determined gradient includes applying shading to the 2D image based on a shading model which is a function of surface normals of the determined gradient, the shading model including one or more of a diffuse specular shading model, a Phong reflection model, a Blinn-Phong shading model, and a specular highlight shading model.
As another embodiment, a method comprises: accessing a dataset acquired with an ultrasound imaging system, the dataset including a power component for each pixel of a 2D image generated from the dataset; interpreting the 2D image as a relief image by converting the power component for each pixel to a height value; determining a gradient of each height value; shading the 2D image using a surface normal of each determined gradient; and displaying the shaded 2D image. In one example of the method, shading the 2D image includes applying a shading model to the 2D image using the surface normal of each determined gradient, where the shading model includes one or more of a diffuse specular shading model, a Phong reflection model, a Blinn-Phong shading model, and a specular highlight shading model. In another example of the method, the dataset includes color data and the ultrasound imaging system is a color Doppler ultrasound imaging system. Additionally, converting the power component for each pixel to a height value may include selecting pixels of the 2D image that include color data and converting the power component for each selected pixel to the height value. The method may further include not shading pixels of the 2D image that do not contain color data. The method may further include filtering the power component of the selected pixels and converting the filtered power component of each selected pixel to the height value.
As yet another embodiment, an ultrasound imaging system comprises: an ultrasound probe; a display device; and a processor communicatively connected to the ultrasound probe and display device and including instructions stored in memory for: accessing a dataset acquired with the ultrasound probe from the memory; generating a 2D image from the dataset, where each pixel of the 2D image includes a power component; converting the power component of each pixel to a height value; determining a gradient surface normal for each height value; shading the 2D image using the gradient surface normal; and displaying the shaded 2D image via the display device. In one example, the ultrasound imaging system is a color Doppler ultrasound imaging system. In another example, determining the gradient surface normal for each height value includes, for a selected pixel, determining the gradient surface normal for a height value of the selected pixel using central differences and height values of pixels adjacent to the selected pixel.
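Tying the steps of these embodiments together, the following non-limiting sketch reuses the hypothetical helper functions from the earlier sketches to shade a synthetic color Doppler frame; the synthetic data, the mask handling for pixels without color data, and the function names are illustrative assumptions only.

```python
import numpy as np

def shade_color_doppler_frame(power, color_rgb, roughness=1.0):
    """power: (H, W) Doppler power in [0, 1]; color_rgb: (H, W, 3) velocity colors."""
    heights = image_values_to_heights(power, mode="linear")   # power -> relief image
    normals = height_field_normals(heights, r=roughness)      # relief -> gradient normals
    shaded = shade_image(color_rgb, normals)                  # normals -> shaded 2D image
    # Pixels without color data (zero power) may be left unshaded, as described above.
    mask = (power > 0)[..., None]
    return np.where(mask, shaded, color_rgb)

# Example usage with synthetic data:
if __name__ == "__main__":
    h, w = 128, 128
    power = np.random.rand(h, w)
    color = np.random.rand(h, w, 3)
    result = shade_color_doppler_frame(power, color)
    print(result.shape)  # (128, 128, 3)
```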
As used herein, an element or step recited in the singular and preceded by the word “a” or “an” should be understood as not excluding plural of said elements or steps, unless such exclusion is explicitly stated. Furthermore, references to “one embodiment” of the present invention are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Moreover, unless explicitly stated to the contrary, embodiments “comprising,” “including,” or “having” an element or a plurality of elements having a particular property may include additional such elements not having that property. The terms “including” and “in which” are used as the plain-language equivalents of the respective terms “comprising” and “wherein.” Moreover, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements or a particular positional order on their objects.
This written description uses examples to disclose the invention, including the best mode, and also to enable a person of ordinary skill in the relevant art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those of ordinary skill in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.
Claims
1. A method, comprising:
- generating a flat, 2D image from a 2D dataset including scalar image values without height values, the 2D dataset acquired with an ultrasound imaging system;
- determining a gradient of height values from the scalar image values;
- applying shading to the flat, 2D image using the determined gradient; and
- displaying the shaded flat, 2D image, having a 3D appearance, via a display screen of the ultrasound imaging system.
2. The method of claim 1, wherein determining the gradient includes, for each pixel of the flat, 2D image generated from the 2D dataset including the scalar image values, converting a first scalar image value of the scalar image values included in the 2D dataset to a corresponding height value using a relationship between the first scalar image value and the height value.
3. The method of claim 2, wherein the relationship is one of a linear relationship or a monotonic function.
4. The method of claim 2, where the height value represents a height value of the first scalar image value but does not include height data.
5. The method of claim 1, wherein determining the gradient of the height values includes determining a separate gradient for each height value of the height values, wherein each height value is associated with a corresponding pixel, and wherein the separate gradient for each height value is based on a difference between height values of pixels adjacent to the corresponding pixel within the flat, 2D image.
6. The method of claim 5, wherein each separate gradient for each height value is used to compute a surface normal, and wherein applying shading to the flat, 2D image includes applying shading to the flat, 2D image using the surface normal of each corresponding pixel of the flat, 2D image.
7. The method of claim 1, further comprising accessing the 2D dataset from a memory of the ultrasound imaging system.
8. The method of claim 1, further comprising filtering the scalar image values prior to determining the gradient.
9. The method of claim 1, wherein applying shading to the flat, 2D image using the determined gradient includes applying shading to the flat, 2D image based on a shading model which is a function of surface normals of the determined gradient, the shading model including one or more of a diffuse specular shading model, a Phong reflection model, a Blinn-Phong shading model, and a specular highlight shading model.
10. The method of claim 1, wherein the 2D dataset is a B-mode dataset and the scalar image values include intensity values from the B-mode dataset.
11. The method of claim 1, wherein the 2D dataset is a color Doppler dataset and the scalar image values are only a power component from the color Doppler dataset.
12. The method of claim 11, wherein the color Doppler dataset further includes a velocity value for each pixel of the flat, 2D image, each velocity value converted to a color value, and wherein shading the flat, 2D image includes shading the color value of each pixel of the flat, 2D image using the determined gradient.
13. An ultrasound imaging system, comprising:
- an ultrasound probe;
- a display device; and
- a processor communicatively connected to the ultrasound probe and the display device and including instructions stored in memory that, when executed during operation of the ultrasound imaging system, cause the processor to: access a 2D dataset, including only 2D scalar data and no 3D data, acquired with the ultrasound probe from the memory; generate a flat, 2D image from the 2D dataset, where each pixel of the flat, 2D image includes a scalar, image value; convert the scalar, image value of each pixel to a scalar, height value via a scalar valued function; determine a gradient surface normal for each scalar, height value; shade one or more pixels of the flat, 2D image using the gradient surface normal; and display the shaded flat, 2D image via the display device.
14. The ultrasound imaging system of claim 13, wherein determining the gradient surface normal for each scalar, height value includes, for a selected pixel, determining the gradient surface normal for the scalar, height value of the selected pixel using central differences and scalar, height values of pixels adjacent to the selected pixel.
15. The ultrasound imaging system of claim 13, wherein the 2D dataset is a B-mode dataset and the scalar, image value of each pixel is an intensity value from the B-mode dataset.
16. A method, comprising:
- generating a flat, 2D image from a 2D, B-mode dataset including scalar intensity values without height values, the 2D, B-mode dataset acquired with an ultrasound imaging system;
- determining a gradient of height values from the scalar intensity values by, for each pixel of the flat, 2D image, converting a first scalar intensity value of the scalar intensity values included in the 2D, B-mode dataset to a corresponding height value using a relationship between the first scalar intensity value and the height value;
- applying shading to the flat, 2D image using the determined gradient; and
- displaying the shaded flat, 2D image, having a 3D appearance, via a display screen of the ultrasound imaging system.
17. The method of claim 16, wherein the relationship is one of a linear relationship or a monotonic function.
18. The method of claim 16, where the height value represents a height value of the first scalar intensity value but does not include height data.
19. The method of claim 16, wherein determining the gradient of the height values includes determining a separate gradient for each height value of the height values, wherein each height value is associated with a corresponding pixel, and wherein the separate gradient for each height value is based on a difference between height values of pixels adjacent to the corresponding pixel within the flat, 2D image.
20. The method of claim 19, wherein each separate gradient for each height value is used to compute a surface normal, and wherein applying shading to the flat, 2D image includes applying shading to the flat, 2D image using the surface normal of each corresponding pixel of the flat, 2D image.
Type: Application
Filed: Aug 27, 2019
Publication Date: Dec 19, 2019
Inventors: Gerald Schroecker (Salzburg), Daniel John Buckton (Salzburg)
Application Number: 16/552,823