Literature Express: Applications of Generative Adversarial Networks in Medical Imaging – CG-3DSRGAN: A Classification-Guided 3D Generative Adversarial Network for Image Quality Recovery from Low-Dose PET Images
This week's featured literature covers applications of generative adversarial networks (GANs) in medical imaging. The studies span same-modality image synthesis, cross-modality image synthesis, and GAN applications in classification and segmentation. Compared with other methods, GANs have demonstrated superior data-generation capability, which has made them widely popular in medical image applications. These properties have attracted strong interest from researchers in the medical imaging community, leading to the rapid adoption of these techniques in a variety of conventional and novel applications such as image reconstruction, segmentation, detection, classification, and cross-modality synthesis.
01
Introduction
Positron emission tomography (PET) is an ultra-sensitive, non-invasive molecular imaging technique that has become a primary tool in oncology and neurology. Compared with other imaging modalities such as magnetic resonance (MR) and computed tomography (CT), PET can image the functional properties of living tissue by injecting a radioactive tracer into the body and detecting disease-related functional activity within organs. Unfortunately, the ionizing radiation dose that the injected radiotracer delivers to patients limits its application. Depending on the tracer dose level, reconstructed PET images can be divided into standard-dose PET (sPET) and low-dose PET (lPET) images. Compared with lPET, sPET images have a higher signal-to-noise ratio (SNR), more structural detail, and better overall image quality. However, sPET inevitably leads to high cumulative radiation exposure. Motivated by these challenges, progress has been made on image analysis methods that aim to recover sPET images from lPET images while keeping the injected dose low.
Deep learning methods based on convolutional neural networks (CNNs) have achieved great success in medical image analysis tasks such as automatic tumor segmentation and classification. Researchers have also used deep learning to recover sPET images from lPET images. Xiang et al. combined multiple CNN modules with an auto-context strategy to estimate sPET over several iterations, adopting a U-Net architecture with dilated kernels to enlarge the receptive field. Unfortunately, the pooling layers in CNNs, which reduce the spatial resolution of feature maps, can cause loss of information and fine detail such as edges and textures. To address these challenges, researchers have turned to generative adversarial networks (GANs), extending the network with a discriminator that distinguishes real from synthetic images in order to preserve structural detail. Bi et al. developed a multi-channel GAN to synthesize high-quality PET from the corresponding CT scans and tumor labels. Using a conditional GAN model and an adversarial training strategy, Wang et al. were able to recover full-dose PET images from low-dose PET. However, GAN-based methods struggle to recover high-dimensional details, such as contextual information and clinically meaningful texture features (e.g., image intensity values), mainly because they do not exploit the spatial correlation between sPET and lPET images. To overcome GAN-induced artifacts, a second-stage refinement is an ideal way to correct the mapping errors introduced by the previous stage. Motivated by this unmet need, we carefully designed a super-resolution network – Contextual-Net – with the goal of reconstructing more contextual detail so that the refined synthetic PET is spatially aligned with the real sPET.
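To make the adversarial setup concrete, here is a minimal sketch, assuming paired lPET/sPET volumes and illustrative layer sizes, of the general scheme described above: a 3D generator maps lPET to synthetic sPET while a discriminator learns to tell real from synthetic volumes. None of the module names or hyper-parameters come from the papers cited above.

```python
# Minimal sketch of adversarial lPET-to-sPET synthesis (illustrative only).
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    """Toy 3D encoder-decoder standing in for a U-Net-style generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            # dilated convolution enlarges the receptive field, as in Xiang et al.
            nn.Conv3d(16, 16, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv3d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator3D(nn.Module):
    """Patch-style 3D discriminator: real sPET vs. synthetic sPET."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator3D(), Discriminator3D()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

lpet = torch.randn(1, 1, 32, 32, 32)   # low-dose input volume (dummy data)
spet = torch.randn(1, 1, 32, 32, 32)   # paired standard-dose target (dummy data)

# Discriminator step: distinguish real sPET from synthetic sPET.
fake = G(lpet).detach()
d_real, d_fake = D(spet), D(fake)
loss_d = adv_loss(d_real, torch.ones_like(d_real)) + \
         adv_loss(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool D while staying close to the target (L1 term).
fake = G(lpet)
d_fake = D(fake)
loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + 100.0 * l1_loss(fake, spet)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The L1 term keeps the synthesis anchored to the paired target while the adversarial term pushes the output toward the sPET distribution; this is the standard division of labor in conditional image-synthesis GANs.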
Task-driven strategies have been actively studied in the field of medical image analysis. Essentially, task-driven methods introduce …
Title
CG-3DSRGAN: A classification guided 3D generative adversarial network for image quality recovery from low-dose PET images
Abstract
Positron emission tomography (PET) is the most sensitive molecular imaging modality routinely applied in our modern healthcare. High radioactivity caused by the injected tracer dose is a major concern in PET imaging and limits its clinical applications. However, reducing the dose leads to inadequate image quality for diagnostic practice. Motivated by the need to produce high quality images with minimum ‘low dose’, convolutional neural networks (CNNs) based methods have been developed for high quality PET synthesis from its low dose counterparts. Previous CNNs-based studies usually directly map low-dose PET into features space without consideration of different dose reduction level. In this study, a novel approach named CG-3DSRGAN (Classification-Guided Generative Adversarial Network with Super Resolution Refinement) is presented. Specifically, a multi-tasking coarse generator, guided by a classification head, allows for a more comprehensive understanding of the noise-level features present in the low-dose data, resulting in improved image synthesis. Moreover, to recover spatial details of standard PET, an auxiliary super resolution network – Contextual-Net – is proposed as a second stage training to narrow the gap between coarse prediction and standard PET. We compared our method to the state-of-the-art methods on whole-body PET with different dose reduction factors (DRF). Experiments demonstrate our method can outperform others on all DRF.
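The abstract's classification-guided, multi-task coarse generator can be pictured as a shared encoder feeding both an image-synthesis decoder and a small classification head that predicts the dose-reduction level. The sketch below is a hedged illustration of that idea, not the authors' architecture; the layer sizes, the choice of three DRF classes, and the 0.1 loss weight are all assumptions.

```python
# Illustrative multi-task generator: image synthesis + DRF classification.
import torch
import torch.nn as nn

class ClassificationGuidedGenerator(nn.Module):
    def __init__(self, num_drf_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 1, 3, padding=1),
        )
        # Classification head: predicts the dose-reduction level of the input,
        # so the shared encoder must learn noise-level-aware features.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, num_drf_classes),
        )
    def forward(self, lpet):
        feat = self.encoder(lpet)
        return self.decoder(feat), self.cls_head(feat)

model = ClassificationGuidedGenerator()
lpet = torch.randn(2, 1, 32, 32, 32)       # dummy low-dose volumes
spet = torch.randn(2, 1, 32, 32, 32)       # dummy standard-dose targets
drf_label = torch.tensor([0, 2])           # e.g. low vs. high dose reduction

coarse, logits = model(lpet)
# Joint objective: synthesis loss plus classification guidance.
loss = nn.L1Loss()(coarse, spet) + 0.1 * nn.CrossEntropyLoss()(logits, drf_label)
loss.backward()
```

The point of the auxiliary head is that gradients from the classification loss shape the shared encoder, so inputs with different dose-reduction factors are mapped to distinguishable feature representations before synthesis.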
Methods
In total, 90 patients from three centers (30 CT-MR prostate pairs/center) underwent treatment using volumetric modulated arc therapy for prostate cancer (PCa) (60 Gy in 20 fractions). T2 MRI images were acquired in addition to computed tomography (CT) images for treatment planning. The DL model was a 2D supervised conditional generative adversarial network (Pix2Pix). Patient images underwent preprocessing steps, including nonrigid registration. Seven different supervised models were trained, incorporating patients from one, two, or three centers. Each model was trained on 24 CT-MR prostate pairs. A generic model was trained using patients from all three centers. To compare sCT and CT, the mean absolute error in Hounsfield units was calculated for the entire pelvis, prostate, bladder, rectum, and bones. For dose analysis, mean dose differences of D99% for CTV, V95% for PTV, Dmax for rectum and bladder, and 3D gamma analysis (local, 1%/1mm) were calculated from CT and sCT. Furthermore, Wilcoxon tests were performed to compare the image and dose results obtained with the generic model to those with the other trained models.
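Two of the evaluation steps named above, organ-wise mean absolute error in Hounsfield units and the Wilcoxon test comparing models, are mechanical enough to sketch. The snippet below uses synthetic arrays; the volume shapes, organ masks, and per-patient score lists are placeholders, not data from the study.

```python
# Sketch of MAE-in-HU per organ mask and a paired Wilcoxon comparison.
import numpy as np
from scipy.stats import wilcoxon

ct  = np.random.uniform(-1000, 1500, size=(64, 128, 128))   # reference CT (HU)
sct = ct + np.random.normal(0, 30, size=ct.shape)           # synthetic CT (HU)

masks = {
    "pelvis":   np.ones_like(ct, dtype=bool),               # whole-pelvis mask
    "prostate": np.zeros_like(ct, dtype=bool),
}
masks["prostate"][20:30, 50:70, 50:70] = True               # dummy organ region

for organ, mask in masks.items():
    mae_hu = np.mean(np.abs(sct[mask] - ct[mask]))          # MAE in HU per organ
    print(f"{organ}: MAE = {mae_hu:.1f} HU")

# Paired, non-parametric comparison of two models over the same patients,
# e.g. generic (three-center) model vs. a single-center model.
mae_generic = [34.2, 36.1, 33.8, 35.0, 37.4]                # per-patient MAE, model A
mae_single  = [36.5, 38.0, 34.1, 36.9, 39.2]                # per-patient MAE, model B
stat, p = wilcoxon(mae_generic, mae_single)
print(f"Wilcoxon: statistic={stat:.1f}, p={p:.3f}")
```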
Conclusions
In this work, we proposed a novel high quality PET synthesis model – Classification-guided 3D super resolution GAN. Our results demonstrated that the proposed method achieves the best performance on standard PET synthesis from lPET, quantitatively and qualitatively, compared to other existing state-of-the-art methods. As future work, we will investigate the use of residual estimation, which can further improve synthesized sPET quality by minimizing the distribution gap between prediction and ground truth. A self-supervised pre-training strategy may also further improve the model's representation.
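The residual-estimation direction mentioned above can be illustrated with a refinement network that predicts only the difference between the coarse prediction and the ground truth and adds it back. This is a generic sketch of residual learning under assumed shapes, not the authors' planned design.

```python
# Generic residual refinement: learn the correction, not the full image.
import torch
import torch.nn as nn

class ResidualRefiner(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 1, 3, padding=1),
        )
    def forward(self, coarse):
        return coarse + self.net(coarse)   # predict the residual, add it back

refiner = ResidualRefiner()
coarse_pred = torch.randn(1, 1, 32, 32, 32)   # output of a first-stage generator
spet_target = torch.randn(1, 1, 32, 32, 32)   # dummy standard-dose target
loss = nn.L1Loss()(refiner(coarse_pred), spet_target)
loss.backward()
```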
Figure
Figure 1. The workflow of the proposed CG-3DSRGAN.
Figure 2. Visualization results of synthetic sPET from four methods on the DRF 100 dataset.
Table
TABLE I. Quantitative comparison results (mean with standard deviation) of the images synthesized by different methods for different DRFs.
TABLE II. Quantitative comparison results of different variants of CG-3DSRGAN.