We usually divide the components of a model into 6 types (a config sketch tying them together follows this list):
- encoder: voxel-based modules applied before the backbone, including the voxel encoder and the middle encoder, e.g. HardVFE and PointPillarsScatter.
- backbone: usually an FCN-style network used to extract feature maps, e.g. ResNet and SECOND.
- neck: the component between backbones and heads, e.g. FPN and SECONDFPN.
- head: the component for specific tasks, e.g. bbox prediction and mask prediction.
- RoI extractor: the component that extracts RoI features from feature maps, e.g. H3DRoIHead and PartAggregationROIHead.
- loss: the component in heads used to calculate losses, e.g. FocalLoss, L1Loss, and GHMLoss.
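To see where each of these component types appears, below is a minimal, illustrative config sketch of a voxel-based detector in the style of MMDetection3D configs. The type names are real registered modules, but the arguments are omitted or simplified here, so treat this as a sketch rather than a runnable config.

# Illustrative sketch: where each component type shows up in a model config.
# Arguments are omitted; see the provided configs for complete examples.
model = dict(
    type='VoxelNet',                                  # detector that wires the components together
    voxel_encoder=dict(type='HardVFE'),               # encoder: runs before the backbone
    middle_encoder=dict(type='PointPillarsScatter'),  # encoder: scatters voxel features to a BEV map
    backbone=dict(type='SECOND'),                     # backbone: extracts feature maps
    neck=dict(type='SECONDFPN'),                      # neck: connects backbone and heads
    bbox_head=dict(                                   # head: task-specific predictions
        type='Anchor3DHead',
        loss_cls=dict(type='mmdet.FocalLoss'),        # loss: classification loss used inside the head
        loss_bbox=dict(type='mmdet.SmoothL1Loss')))   # loss: box regression loss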
I. Customizing Models
Adding a new encoder
Here we take HardVFE as an example to show how to develop a new component.
1. Define a new voxel encoder (e.g. HardVFE, the voxel feature encoder used in HV-SECOND)
Create a new file mmdet3d/models/voxel_encoders/voxel_encoder.py.
import torch.nn as nn

from mmdet3d.registry import MODELS


@MODELS.register_module()
class HardVFE(nn.Module):

    def __init__(self, arg1, arg2):
        pass

    def forward(self, x):  # should return a tuple
        pass
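For intuition about what a voxel encoder computes, the toy module below (a hypothetical SimpleMeanVFE, not part of MMDetection3D) simply averages the valid points inside each voxel. The forward signature follows the common (features, num_points, coors) convention of voxel encoders, but the whole class is only a sketch for illustration.

import torch.nn as nn

from mmdet3d.registry import MODELS


@MODELS.register_module()
class SimpleMeanVFE(nn.Module):
    """Toy voxel encoder: mean of the valid points in each voxel (illustrative only)."""

    def __init__(self, num_features=4):
        super().__init__()
        self.num_features = num_features

    def forward(self, features, num_points, coors):
        # features: (num_voxels, max_points, num_features), zero-padded point features
        # num_points: (num_voxels,), number of valid points in each voxel
        # coors: (num_voxels, 4), voxel coordinates (unused in this toy example)
        summed = features[:, :, :self.num_features].sum(dim=1)
        mean = summed / num_points.type_as(features).clamp(min=1).view(-1, 1)
        return mean.contiguous()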
2. Import the module
You can add the following line to mmdet3d/models/voxel_encoders/__init__.py:
from .voxel_encoder import HardVFE
Alternatively, add the following to the config file to avoid modifying the source code:
custom_imports = dict(
    imports=['mmdet3d.models.voxel_encoders.voxel_encoder'],
    allow_failed_imports=False)
3. Use the voxel encoder in the config file
model = dict(
    ...
    voxel_encoder=dict(
        type='HardVFE',
        arg1=xxx,
        arg2=yyy),
    ...
)
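As a quick sanity check (our own suggestion, not part of the original tutorial), you can look the new class up in the registry after the import has run; MODELS.get returns the registered class, or None if registration did not happen:

from mmdet3d.registry import MODELS

print(MODELS.get('HardVFE'))  # should print the HardVFE class, not None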
Adding a new backbone
Here we take SECOND (Sparsely Embedded Convolutional Detection) as an example to show how to develop a new component.
1. Define a new backbone (e.g. SECOND)
Create a new file mmdet3d/models/backbones/second.py.
from mmengine.model import BaseModule

from mmdet3d.registry import MODELS


@MODELS.register_module()
class SECOND(BaseModule):

    def __init__(self, arg1, arg2):
        pass

    def forward(self, x):  # should return a tuple
        pass
2. Import the module
You can add the following line to mmdet3d/models/backbones/__init__.py:
from .second import SECOND
Alternatively, add the following to the config file to avoid modifying the source code:
custom_imports = dict(
    imports=['mmdet3d.models.backbones.second'],
    allow_failed_imports=False)
3. Use the backbone in the config file
model = dict(
    ...
    backbone=dict(
        type='SECOND',
        arg1=xxx,
        arg2=yyy),
    ...
)
Adding a new neck
1. Define a new neck (e.g. SECONDFPN)
Create a new file mmdet3d/models/necks/second_fpn.py.
from mmengine.model import BaseModule

from mmdet3d.registry import MODELS


@MODELS.register_module()
class SECONDFPN(BaseModule):

    def __init__(self,
                 in_channels=[128, 128, 256],
                 out_channels=[256, 256, 256],
                 upsample_strides=[1, 2, 4],
                 norm_cfg=dict(type='BN', eps=1e-3, momentum=0.01),
                 upsample_cfg=dict(type='deconv', bias=False),
                 conv_cfg=dict(type='Conv2d', bias=False),
                 use_conv_for_no_stride=False,
                 init_cfg=None):
        pass

    def forward(self, x):
        # implementation omitted
        pass
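The actual forward is omitted above. As a rough sketch of what such an FPN-style neck usually does (upsample each input scale with its own transposed convolution, then concatenate along the channel dimension), the simplified standalone module below mirrors the main constructor arguments, but it is not the real SECONDFPN implementation:

import torch
import torch.nn as nn


class ToySECONDFPN(nn.Module):
    """Simplified sketch: one transposed conv per input scale, then concat."""

    def __init__(self,
                 in_channels=[128, 128, 256],
                 out_channels=[256, 256, 256],
                 upsample_strides=[1, 2, 4]):
        super().__init__()
        self.deblocks = nn.ModuleList()
        for c_in, c_out, stride in zip(in_channels, out_channels,
                                       upsample_strides):
            self.deblocks.append(
                nn.Sequential(
                    nn.ConvTranspose2d(
                        c_in, c_out, kernel_size=stride, stride=stride,
                        bias=False),
                    nn.BatchNorm2d(c_out, eps=1e-3, momentum=0.01),
                    nn.ReLU(inplace=True)))

    def forward(self, x):
        # x: list of feature maps whose spatial sizes differ by the upsample strides
        ups = [deblock(feat) for deblock, feat in zip(self.deblocks, x)]
        return [torch.cat(ups, dim=1)]  # a single fused BEV feature map


# Example: three scales at 1x, 1/2x and 1/4x resolution are fused into one map.
feats = [torch.rand(1, 128, 64, 64),
         torch.rand(1, 128, 32, 32),
         torch.rand(1, 256, 16, 16)]
out = ToySECONDFPN()(feats)  # out[0].shape == (1, 768, 64, 64)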
2. Import the module
You can add the following line to mmdet3d/models/necks/__init__.py:
from .second_fpn import SECONDFPN
Alternatively, add the following to the config file to avoid modifying the source code:
custom_imports = dict(
    imports=['mmdet3d.models.necks.second_fpn'],
    allow_failed_imports=False)
3. Use the neck in the config file
model = dict(
    ...
    neck=dict(
        type='SECONDFPN',
        in_channels=[64, 128, 256],
        upsample_strides=[1, 2, 4],
        out_channels=[128, 128, 128]),
    ...
)
Adding a new head
Here we take PartA2 Head as an example to show how to develop a new head.
Note: the PartA2 RoI Head shown here is used in the second stage of a detector. For single-stage heads, please refer to the examples in mmdet3d/models/dense_heads/; thanks to their simplicity and efficiency, they are more commonly used for 3D detection in autonomous driving scenarios.
First, add the new bbox head in mmdet3d/models/roi_heads/bbox_heads/parta2_bbox_head.py. PartA2 RoI Head implements a new bbox head for object detection. To implement a bbox head, we usually need to implement the following two functions in the new module; sometimes other related functions, such as loss and get_targets, are also needed.
from mmengine.model import BaseModule

from mmdet3d.registry import MODELS


@MODELS.register_module()
class PartA2BboxHead(BaseModule):
    """PartA2 RoI head."""

    def __init__(self,
                 num_classes,
                 seg_in_channels,
                 part_in_channels,
                 seg_conv_channels=None,
                 part_conv_channels=None,
                 merge_conv_channels=None,
                 down_conv_channels=None,
                 shared_fc_channels=None,
                 cls_channels=None,
                 reg_channels=None,
                 dropout_ratio=0.1,
                 roi_feat_size=14,
                 with_corner_loss=True,
                 bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'),
                 conv_cfg=dict(type='Conv1d'),
                 norm_cfg=dict(type='BN1d', eps=1e-3, momentum=0.01),
                 loss_bbox=dict(
                     type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0),
                 loss_cls=dict(
                     type='CrossEntropyLoss',
                     use_sigmoid=True,
                     reduction='none',
                     loss_weight=1.0),
                 init_cfg=None):
        super(PartA2BboxHead, self).__init__(init_cfg=init_cfg)

    def forward(self, seg_feats, part_feats):
        pass
Second, implement a new RoI Head if necessary. Our new PartAggregationROIHead inherits from Base3DRoIHead, and we can see that Base3DRoIHead already implements the following functions.
from mmdet.models.roi_heads import BaseRoIHead

from mmdet3d.registry import MODELS, TASK_UTILS


class Base3DRoIHead(BaseRoIHead):
    """Base class for 3d RoIHeads."""

    def __init__(self,
                 bbox_head=None,
                 bbox_roi_extractor=None,
                 mask_head=None,
                 mask_roi_extractor=None,
                 train_cfg=None,
                 test_cfg=None,
                 init_cfg=None):
        super(Base3DRoIHead, self).__init__(
            bbox_head=bbox_head,
            bbox_roi_extractor=bbox_roi_extractor,
            mask_head=mask_head,
            mask_roi_extractor=mask_roi_extractor,
            train_cfg=train_cfg,
            test_cfg=test_cfg,
            init_cfg=init_cfg)

    def init_bbox_head(self, bbox_roi_extractor: dict,
                       bbox_head: dict) -> None:
        """Initialize box head and box roi extractor.

        Args:
            bbox_roi_extractor (dict or ConfigDict): Config of box
                roi extractor.
            bbox_head (dict or ConfigDict): Config of box in box head.
        """
        self.bbox_roi_extractor = MODELS.build(bbox_roi_extractor)
        self.bbox_head = MODELS.build(bbox_head)

    def init_assigner_sampler(self):
        """Initialize assigner and sampler."""
        self.bbox_assigner = None
        self.bbox_sampler = None
        if self.train_cfg:
            if isinstance(self.train_cfg.assigner, dict):
                self.bbox_assigner = TASK_UTILS.build(self.train_cfg.assigner)
            elif isinstance(self.train_cfg.assigner, list):
                self.bbox_assigner = [
                    TASK_UTILS.build(res) for res in self.train_cfg.assigner
                ]
            self.bbox_sampler = TASK_UTILS.build(self.train_cfg.sampler)

    def init_mask_head(self):
        """Initialize mask head, skip since ``PartAggregationROIHead`` does
        not have one."""
        pass
Next, what mainly needs modification is the bbox_forward logic; the other logic is inherited from Base3DRoIHead. We implement the new RoI Head in mmdet3d/models/roi_heads/part_aggregation_roi_head.py as follows:
from typing import Dict, List, Tuple

from mmdet.models.task_modules import AssignResult, SamplingResult
from mmengine import ConfigDict
from torch import Tensor
from torch.nn import functional as F

from mmdet3d.registry import MODELS
from mmdet3d.structures import bbox3d2roi
from mmdet3d.utils import InstanceList
from ...structures.det3d_data_sample import SampleList
from .base_3droi_head import Base3DRoIHead


@MODELS.register_module()
class PartAggregationROIHead(Base3DRoIHead):
    """Part aggregation roi head for PartA2.

    Args:
        semantic_head (ConfigDict): Config of semantic head.
        num_classes (int): The number of classes.
        seg_roi_extractor (ConfigDict): Config of seg_roi_extractor.
        bbox_roi_extractor (ConfigDict): Config of part_roi_extractor.
        bbox_head (ConfigDict): Config of bbox_head.
        train_cfg (ConfigDict): Training config.
        test_cfg (ConfigDict): Testing config.
    """

    def __init__(self,
                 semantic_head: dict,
                 num_classes: int = 3,
                 seg_roi_extractor: dict = None,
                 bbox_head: dict = None,
                 bbox_roi_extractor: dict = None,
                 train_cfg: dict = None,
                 test_cfg: dict = None,
                 init_cfg: dict = None) -> None:
        super(PartAggregationROIHead, self).__init__(
            bbox_head=bbox_head,
            bbox_roi_extractor=bbox_roi_extractor,
            train_cfg=train_cfg,
            test_cfg=test_cfg,
            init_cfg=init_cfg)
        self.num_classes = num_classes
        assert semantic_head is not None
        self.init_seg_head(seg_roi_extractor, semantic_head)

    def init_seg_head(self, seg_roi_extractor: dict,
                      semantic_head: dict) -> None:
        """Initialize semantic head and seg roi extractor.

        Args:
            seg_roi_extractor (dict): Config of seg
                roi extractor.
            semantic_head (dict): Config of semantic head.
        """
        self.semantic_head = MODELS.build(semantic_head)
        self.seg_roi_extractor = MODELS.build(seg_roi_extractor)

    @property
    def with_semantic(self):
        """bool: whether the head has semantic branch"""
        return hasattr(self,
                       'semantic_head') and self.semantic_head is not None

    def predict(self,
                feats_dict: Dict,
                rpn_results_list: InstanceList,
                batch_data_samples: SampleList,
                rescale: bool = False,
                **kwargs) -> InstanceList:
        """Perform forward propagation of the roi head and predict detection
        results on the features of the upstream network.

        Args:
            feats_dict (dict): Contains features from the first stage.
            rpn_results_list (List[:obj:`InstanceData`]): Detection results
                of rpn head.
            batch_data_samples (List[:obj:`Det3DDataSample`]): The Data
                samples. It usually includes information such as
                `gt_instance_3d`, `gt_panoptic_seg_3d` and `gt_sem_seg_3d`.
            rescale (bool): If True, return boxes in original image space.
                Defaults to False.

        Returns:
            list[:obj:`InstanceData`]: Detection results of each sample
            after the post process.
            Each item usually contains following keys.

            - scores_3d (Tensor): Classification scores, has a shape
              (num_instances, )
            - labels_3d (Tensor): Labels of bboxes, has a shape
              (num_instances, ).
            - bboxes_3d (BaseInstance3DBoxes): Prediction of bboxes,
              contains a tensor with shape (num_instances, C), where
              C >= 7.
        """
        assert self.with_bbox, 'Bbox head must be implemented in PartA2.'
        assert self.with_semantic, 'Semantic head must be implemented' \
                                   ' in PartA2.'

        batch_input_metas = [
            data_samples.metainfo for data_samples in batch_data_samples
        ]
        voxels_dict = feats_dict.pop('voxels_dict')
        # TODO: Split predict semantic and bbox
        results_list = self.predict_bbox(feats_dict, voxels_dict,
                                         batch_input_metas, rpn_results_list,
                                         self.test_cfg)
        return results_list

    def predict_bbox(self, feats_dict: Dict, voxel_dict: Dict,
                     batch_input_metas: List[dict],
                     rpn_results_list: InstanceList,
                     test_cfg: ConfigDict) -> InstanceList:
        """Perform forward propagation of the bbox head and predict detection
        results on the features of the upstream network.

        Args:
            feats_dict (dict): Contains features from the first stage.
            voxel_dict (dict): Contains information of voxels.
            batch_input_metas (list[dict], Optional): Batch image meta info.
                Defaults to None.
            rpn_results_list (List[:obj:`InstanceData`]): Detection results
                of rpn head.
            test_cfg (Config): Test config.

        Returns:
            list[:obj:`InstanceData`]: Detection results of each sample
            after the post process.
            Each item usually contains following keys.

            - scores_3d (Tensor): Classification scores, has a shape
              (num_instances, )
            - labels_3d (Tensor): Labels of bboxes, has a shape
              (num_instances, ).
            - bboxes_3d (BaseInstance3DBoxes): Prediction of bboxes,
              contains a tensor with shape (num_instances, C), where
              C >= 7.
        """
        ...

    def loss(self, feats_dict: Dict, rpn_results_list: InstanceList,
             batch_data_samples: SampleList, **kwargs) -> dict:
        """Perform forward propagation and loss calculation of the detection
        roi on the features of the upstream network.

        Args:
            feats_dict (dict): Contains features from the first stage.
            rpn_results_list (List[:obj:`InstanceData`]): Detection results
                of rpn head.
            batch_data_samples (List[:obj:`Det3DDataSample`]): The Data
                samples. It usually includes information such as
                `gt_instance_3d`, `gt_panoptic_seg_3d` and `gt_sem_seg_3d`.

        Returns:
            dict[str, Tensor]: A dictionary of loss components
        """
        assert len(rpn_results_list) == len(batch_data_samples)
        losses = dict()
        batch_gt_instances_3d = []
        batch_gt_instances_ignore = []
        voxels_dict = feats_dict.pop('voxels_dict')
        for data_sample in batch_data_samples:
            batch_gt_instances_3d.append(data_sample.gt_instances_3d)
            if 'ignored_instances' in data_sample:
                batch_gt_instances_ignore.append(data_sample.ignored_instances)
            else:
                batch_gt_instances_ignore.append(None)
        if self.with_semantic:
            semantic_results = self._semantic_forward_train(
                feats_dict, voxels_dict, batch_gt_instances_3d)
            losses.update(semantic_results.pop('loss_semantic'))

        sample_results = self._assign_and_sample(rpn_results_list,
                                                 batch_gt_instances_3d)
        if self.with_bbox:
            feats_dict.update(semantic_results)
            bbox_results = self._bbox_forward_train(feats_dict, voxels_dict,
                                                    sample_results)
            losses.update(bbox_results['loss_bbox'])

        return losses
We omit further details of the related functions here; please refer to the code for more.
Finally, users need to add the modules in mmdet3d/models/roi_heads/bbox_heads/__init__.py and mmdet3d/models/roi_heads/__init__.py so that the corresponding registries can find and load them.
Alternatively, users can add the following to the config file to achieve the same goal:
custom_imports = dict(
    imports=[
        'mmdet3d.models.roi_heads.part_aggregation_roi_head',
        'mmdet3d.models.roi_heads.bbox_heads.parta2_bbox_head'
    ],
    allow_failed_imports=False)
The config of PartAggregationROIHead is as follows:
model = dict(
    ...
    roi_head=dict(
        type='PartAggregationROIHead',
        num_classes=3,
        semantic_head=dict(
            type='PointwiseSemanticHead',
            in_channels=16,
            extra_width=0.2,
            seg_score_thr=0.3,
            num_classes=3,
            loss_seg=dict(
                type='mmdet.FocalLoss',
                use_sigmoid=True,
                reduction='sum',
                gamma=2.0,
                alpha=0.25,
                loss_weight=1.0),
            loss_part=dict(
                type='mmdet.CrossEntropyLoss',
                use_sigmoid=True,
                loss_weight=1.0)),
        seg_roi_extractor=dict(
            type='Single3DRoIAwareExtractor',
            roi_layer=dict(
                type='RoIAwarePool3d',
                out_size=14,
                max_pts_per_voxel=128,
                mode='max')),
        bbox_roi_extractor=dict(
            type='Single3DRoIAwareExtractor',
            roi_layer=dict(
                type='RoIAwarePool3d',
                out_size=14,
                max_pts_per_voxel=128,
                mode='avg')),
        bbox_head=dict(
            type='PartA2BboxHead',
            num_classes=3,
            seg_in_channels=16,
            part_in_channels=4,
            seg_conv_channels=[64, 64],
            part_conv_channels=[64, 64],
            merge_conv_channels=[128, 128],
            down_conv_channels=[128, 256],
            bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'),
            shared_fc_channels=[256, 512, 512, 512],
            cls_channels=[256, 256],
            reg_channels=[256, 256],
            dropout_ratio=0.1,
            roi_feat_size=14,
            with_corner_loss=True,
            loss_bbox=dict(
                type='mmdet.SmoothL1Loss',
                beta=1.0 / 9.0,
                reduction='sum',
                loss_weight=1.0),
            loss_cls=dict(
                type='mmdet.CrossEntropyLoss',
                use_sigmoid=True,
                reduction='sum',
                loss_weight=1.0))),
    ...
)
MMDetection has supported inheritance between config files since version 2.0, so users can focus on the modifications in their own config. The second stage of PartA2 Head mainly uses the new PartAggregationROIHead and PartA2BboxHead, whose arguments need to be set according to the __init__ function of each module.
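For example, assuming a base config such as configs/_base_/models/parta2.py exists (the exact path is an assumption for illustration), a user config could inherit from it and override only the fields that change:

# Sketch of config inheritance; only the overridden fields need to be written out.
_base_ = ['../_base_/models/parta2.py']

model = dict(
    roi_head=dict(
        bbox_head=dict(
            dropout_ratio=0.2)))  # example override of a PartA2BboxHead argument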
II. Customizing Runtime Settings
Customizing optimizer settings
Optimizer-related settings are managed by optim_wrapper, which usually has three fields: optimizer, paramwise_cfg, and clip_grad. Please refer to OptimWrapper for more details. In the example below, AdamW is used as the optimizer, the learning rate of the backbone is reduced by a factor of 10, and gradient clipping is added.
optim_wrapper = dict(
    type='OptimWrapper',
    # optimizer
    optimizer=dict(
        type='AdamW',
        lr=0.0001,
        weight_decay=0.05,
        eps=1e-8,
        betas=(0.9, 0.999)),
    # per-parameter learning rate and weight decay settings
    paramwise_cfg=dict(
        custom_keys={
            'backbone': dict(lr_mult=0.1, decay_mult=1.0),
        },
        norm_decay_mult=0.0),
    # gradient clipping
    clip_grad=dict(max_norm=0.01, norm_type=2))
Customizing PyTorch-supported optimizers
We already support all of the optimizers implemented by PyTorch; the only modification needed is to change the optimizer field of optim_wrapper in the config file. For example, if you want to use Adam (note that this may degrade performance considerably), you can modify it as follows:
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='Adam', lr=0.0003, weight_decay=0.0001))
To modify the learning rate of the model, users only need to modify the lr field of optimizer. Other parameters can be set directly according to the PyTorch API documentation.
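For instance, a sketch using SGD with momentum and Nesterov acceleration, where all optimizer arguments are standard torch.optim.SGD parameters:

optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(
        type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001, nesterov=True))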
Customizing training schedules
By default we use a 1x training schedule with step learning rate decay, which calls MultiStepLR in MMEngine. We also support many other learning rate schedules, such as cosine annealing and polynomial decay. Here are some examples:
Polynomial decay schedule
param_scheduler = [
    dict(
        type='PolyLR',
        power=0.9,
        eta_min=1e-4,
        begin=0,
        end=8,
        by_epoch=True)]
Cosine annealing schedule
param_scheduler = [
    dict(
        type='CosineAnnealingLR',
        T_max=8,
        eta_min=lr * 1e-5,
        begin=0,
        end=8,
        by_epoch=True)]
Customizing the training loop
By default, we use EpochBasedTrainLoop in train_cfg, and validation is performed once after every training epoch, as shown below:
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=12, val_begin=1, val_interval=1)
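If iteration-based training is preferred, MMEngine also provides IterBasedTrainLoop; a sketch with placeholder iteration counts could look like this (other settings, such as the parameter scheduler's by_epoch flag, also need to match):

# Train for 80k iterations and run validation every 2000 iterations.
train_cfg = dict(type='IterBasedTrainLoop', max_iters=80000, val_interval=2000)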
III. 3D Detection
Data preparation
First, we need to download the raw data and reorganize it in the standard way described in the data preparation documentation.
Since the raw data of different datasets are organized in different ways, we usually need to collect the useful information into .pkl files. So after preparing all the raw data, we need to run the scripts provided in create_data.py to generate the data infos for the different datasets. For example, for KITTI we need to run:
python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti
After that, the related directory structure will be as follows:
mmdetection3d
├── mmdet3d
├── tools
├── configs
├── data
│   ├── kitti
│   │   ├── ImageSets
│   │   ├── testing
│   │   │   ├── calib
│   │   │   ├── image_2
│   │   │   ├── velodyne
│   │   │   ├── velodyne_reduced
│   │   ├── training
│   │   │   ├── calib
│   │   │   ├── image_2
│   │   │   ├── label_2
│   │   │   ├── velodyne
│   │   │   ├── velodyne_reduced
│   │   ├── kitti_gt_database
│   │   ├── kitti_infos_train.pkl
│   │   ├── kitti_infos_trainval.pkl
│   │   ├── kitti_infos_val.pkl
│   │   ├── kitti_infos_test.pkl
│   │   ├── kitti_dbinfos_train.pkl
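The generated info files are then referenced from the dataset settings of the config. The simplified sketch below only shows the idea; the field values are illustrative, and the full pipeline and dataset wrappers used by the provided KITTI configs are omitted:

train_dataloader = dict(
    dataset=dict(
        type='KittiDataset',
        data_root='data/kitti/',
        ann_file='kitti_infos_train.pkl',  # produced by tools/create_data.py
        data_prefix=dict(pts='training/velodyne_reduced'),
        pipeline=[...]))  # loading and augmentation transforms omitted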
Training
Next, we will train PointPillars with the provided config. When you train with a different GPU setup, you can follow the examples in this tutorial. Suppose we use distributed training on a single machine with 8 GPUs:
./tools/dist_train.sh configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class.py 8
Note that 8xb6 in the config name means the training uses 8 GPUs with 6 data samples on each GPU. If your setup differs from this, you may sometimes need to adjust the learning rate accordingly; the basic rule can be found here. Automatically scaling the learning rate with --auto-scale-lr is also supported.
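On a single GPU, the corresponding non-distributed command would use tools/train.py with the same config; --auto-scale-lr can be appended to rescale the learning rate for the smaller total batch size:

python tools/train.py configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class.py --auto-scale-lr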
IV. Reading Config Files
Read a config file via the fromfile interface of the Config class. Suppose learn_read_config.py contains:

test_int = 1
test_list = [1, 2, 3]
test_dict = dict(key1='value1', key2=0.1)

Then load and print it:

from mmengine.config import Config

cfg = Config.fromfile('learn_read_config.py')
print(cfg)

Config (path: learn_read_config.py): {'test_int': 1, 'test_list': [1, 2, 3], 'test_dict': {'key1': 'value1', 'key2': 0.1}}
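The loaded Config object supports both dictionary-style and attribute-style access, so individual fields can be read or modified in place (a small usage sketch):

print(cfg.test_int)        # 1
print(cfg.test_dict.key1)  # 'value1'
cfg.test_int = 2           # fields can be changed before dumping the config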
Exporting config files
When launching a training script, users may modify some fields of the config by passing command-line arguments. For this we provide the dump interface to export the modified config. Similar to reading configs, users can choose the format of the exported file via cfg.dump('config.xxx'). dump can also export configs that use inheritance; the exported file can be used on its own and no longer depends on the files defined in _base_.
Based on the resnet50.py defined in the inheritance section, we load it and then dump it:
cfg = Config.fromfile('resnet50.py')
cfg.dump('resnet50_dump.py')
The content of the dumped resnet50_dump.py:
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
model = dict(type='ResNet', depth=50)