Intel RealSense SDK 2.0 is a cross-platform development kit. It ships basic camera tools such as realsense-viewer and also exposes rich interfaces for secondary development, including ROS, Python, MATLAB, Node.js, LabVIEW, OpenCV, PCL, .NET, and more.
The camera used here is the D435i.
It provides depth and RGB images and carries a built-in IMU.
The goal of this walkthrough is to build a point-cloud SLAM map with the D435i.
Start by installing the Intel RealSense SDK and checking the camera's output.
Linux/Ubuntu - RealSense SDK 2.0 build guide
After installation, view the camera's depth and RGB streams:
realsense-viewer

Next, download the ROS wrapper package that provides driver support for Intel's cameras:
ROS Wrapper for Intel RealSense Devices

當(dāng)安裝完成后便可以啟動(dòng)看看攝像頭的數(shù)據(jù)
我們這里繼續(xù)配置SMAL
在啟動(dòng)相機(jī)之前,我們需要設(shè)置一下 realsense2_camera rospack 中的 rs_camera.launch 的文件
?
前者是讓不同傳感器數(shù)據(jù)(depth, RGB, IMU)實(shí)現(xiàn)時(shí)間同步,即具有相同的 timestamp;
后者會(huì)增加若干 rostopic,其中我們比較關(guān)心的是/camera/aligned_depth_to_color/image_raw
,這里的 depth 圖像與 RGB 圖像是對(duì)齊的
<arg name="enable_sync" default="true"/>
<arg name="align_depth" default="true"/>
?
Once that is done, start the camera with:
roslaunch realsense2_camera rs_camera.launch
The key topics are /camera/color/image_raw and /camera/aligned_depth_to_color/image_raw, which carry the RGB image and the depth image respectively.

Then, under Examples/RGB-D, create a parameter file named RealSense.yaml and edit it:
#--------------------------------------------------------------------------------------------
# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 909.3504638671875
Camera.fy: 908.5739135742188
Camera.cx: 645.5443725585938
Camera.cy: 369.3554992675781
Camera.k1: 0.0
Camera.k2: 0.0
Camera.p1: 0.0
Camera.p2: 0.0
Camera.k3: 0.0
Camera.width: 1280
Camera.height: 720
#Camera frames per second
Camera.fps: 30.0
#IR projector baseline times fx (approx.)
Camera.bf: 46.01
#Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1
#Close/Far threshold. Baseline times.
ThDepth: 40.0
#DepthMap values factor: converts depth pixel values to metric distance (raw unit is mm, converted to m)
DepthMapFactor: 1000.0
#ORB Parameters
#--------------------------------------------------------------------------------------------
#ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000
#ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2
#ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8
#ORB Extractor: Fast threshold
#Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
#Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
#You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7
#--------------------------------------------------------------------------------------------
#Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
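As a quick sanity check on the Camera.bf and ThDepth entries above (a sketch using the listed values, not part of the original setup): bf is the stereo baseline times fx, and ThDepth counts baselines, so together they determine the metric distance beyond which ORB-SLAM2 treats a point as "far".

```python
# Values from the RealSense.yaml above.
fx = 909.3504638671875   # Camera.fx
bf = 46.01               # Camera.bf = baseline * fx
th_depth = 40.0          # ThDepth: close/far threshold, in units of the baseline

baseline = bf / fx                   # ~0.0506 m, i.e. ~50.6 mm for the D435i
far_threshold = th_depth * baseline  # points beyond ~2.02 m are treated as "far"
print(round(baseline * 1000, 1), "mm baseline,", round(far_threshold, 2), "m far threshold")
```

So with these settings, depth readings past roughly two meters no longer contribute close stereo points.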

The file contains the camera intrinsics, which can be obtained with the command below; the values may differ from camera to camera:
rostopic echo /camera/color/camera_info
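The echoed CameraInfo message carries the intrinsics in its K field (a row-major 3x3 matrix) and the distortion coefficients in D. A small sketch mapping them onto the yaml parameter names, using values copied from one D435i (substitute your own numbers from the echo above):

```python
# K from /camera/color/camera_info is the row-major 3x3 intrinsic matrix:
# [fx  0 cx,
#   0 fy cy,
#   0  0  1]
K = [909.3504638671875, 0.0, 645.5443725585938,
     0.0, 908.5739135742188, 369.3554992675781,
     0.0, 0.0, 1.0]
# D for the plumb_bob distortion model is [k1, k2, p1, p2, k3]
D = [0.0, 0.0, 0.0, 0.0, 0.0]

params = {
    "Camera.fx": K[0], "Camera.fy": K[4],
    "Camera.cx": K[2], "Camera.cy": K[5],
    "Camera.k1": D[0], "Camera.k2": D[1],
    "Camera.p1": D[2], "Camera.p2": D[3],
    "Camera.k3": D[4],
}
for name, value in params.items():
    print(f"{name}: {value}")
```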

The main part of the yaml to get right is the block under
# Camera calibration and distortion parameters (OpenCV)
which must be edited to match the data published on the /camera/color/camera_info topic.
The meaning of each field is documented at
http://docs.ros.org/en/melodic/api/sensor_msgs/html/msg/CameraInfo.html

That is, the following parameters:
Camera.fx: 909.3504638671875
Camera.fy: 908.5739135742188
Camera.cx: 645.5443725585938
Camera.cy: 369.3554992675781
Camera.k1: 0.0
Camera.k2: 0.0
Camera.p1: 0.0
Camera.p2: 0.0
Camera.k3: 0.0
Camera.width: 1280
Camera.height: 720

Save and exit when done.
Then modify the file src/pointcloudmapping.cc:
1. Around line 128:
voxel.setLeafSize (0.02f, 0.02f, 0.02f); // Modified place 1: tunes the point-cloud density

2. Around line 75:
p.y = ( m - kf->cy) * p.z / kf->fy;
// p.y = - ( m - kf->cy) * p.z / kf->fy; // Modified place 2: adds a minus sign to flip the otherwise upside-down point cloud for easier viewing
// [Later testing showed the exported pcd was unusable with the flip, so the cloud is left unflipped]
p.r = color.ptr<uchar>(m)[n*3];   // Modified place 3: fixes the color channel order
p.g = color.ptr<uchar>(m)[n*3+1];
p.b = color.ptr<uchar>(m)[n*3+2];
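The per-pixel math behind these two modified places can be sketched in Python: pinhole back-projection with the mm-to-m DepthMapFactor (and the optional y flip), plus a crude stand-in for the voxel filter. Both function names are mine, for illustration only.

```python
def backproject(m, n, depth_raw, fx, fy, cx, cy, depth_factor=1000.0, flip_y=False):
    """Back-project pixel (row m, col n) with a raw depth value into
    camera coordinates, as pointcloudmapping.cc does per pixel."""
    z = depth_raw / depth_factor   # DepthMapFactor: mm -> m
    x = (n - cx) * z / fx
    y = (m - cy) * z / fy
    if flip_y:                     # the commented-out "Modified place 2"
        y = -y
    return x, y, z

def voxel_downsample(points, leaf=0.02):
    """Crude stand-in for pcl::VoxelGrid with setLeafSize(leaf, leaf, leaf):
    keep one point per voxel (PCL actually outputs each voxel's centroid)."""
    kept = {}
    for p in points:
        key = tuple(int(c // leaf) for c in p)
        kept.setdefault(key, p)
    return list(kept.values())
```

A larger leaf size makes the map sparser and lighter; 0.02 m is the density chosen above.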
Then build ORB_SLAM2_modified:
./build.sh
./build_ros.sh

After the build finishes, run a test.
Start the ROS master:
roscore
啟動(dòng)攝像頭
roslaunch realsense2_camera rs_camera.launch

Run the mapper, which turns out to report an error:
rosrun ORB_SLAM2 RGBD Vocabulary/ORBvoc.bin Examples/RGB-D/RealSense.yaml /camera/rgb/image_raw:=/camera/color/image_raw /camera/depth_registered/image_raw:=/camera/aligned_depth_to_color/image_raw

Some searching suggests the yaml file needs one more change. OpenCV 4.0 fails to read the yaml file with:
error: (-49:Unknown error code -49) Input file is empty in function 'open'
The fix is to add the following line at the very top of the yaml file:
%YAML:1.0
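With the fix applied, RealSense.yaml must begin with the directive as its very first line, followed by the parameters from the listing above:

```yaml
%YAML:1.0
#--------------------------------------------------------------------------------------------
# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 909.3504638671875
```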

Run it again:
rosrun ORB_SLAM2 RGBD Vocabulary/ORBvoc.bin Examples/RGB-D/RealSense.yaml /camera/rgb/image_raw:=/camera/color/image_raw /camera/depth_registered/image_raw:=/camera/aligned_depth_to_color/image_raw
當(dāng)CTRL+C后,可以看到輸出的pcd文件
查看保存的pcd文件
pcl_viewer vslam.pcd
?
至此,基于深度相機(jī) Intel RealSense D435i 實(shí)現(xiàn) ORB SLAM 2 的流程成功
?
不過由于啟動(dòng)SLAM的指令有點(diǎn)復(fù)雜,使用roslauch進(jìn)行優(yōu)化
首先是離線包的啟動(dòng)
當(dāng)啟動(dòng)的命令沒有輸全的時(shí)候,終端會(huì)輸出使用的用法
Usage: rosrun ORB_SLAM2 RGBD path_to_vocabulary path_to_settings
?
首先建立一個(gè)功能包,命名為 slam
cd ~/catkin_ws/src
catkin_create_pkg slam std_msgs rospy roscpp

Then create a launch file in this package; I named it rgbd.launch. Edit it:
<launch>
    <!-- Global arguments -->
    <arg name="rgb_image" default="/camera/rgb/image_raw"/>
    <arg name="path_to_vocabulary" default="/home/heying/ORB_SLAM2_modified/Vocabulary/ORBvoc.bin"/>
    <arg name="path_to_settings" default="/home/heying/ORB_SLAM2_modified/Examples/RGB-D/TUM1_ROS.yaml"/>
    <!-- Play the offline bag -->
    <!-- Note: the bag file path must be absolute -->
    <node pkg="rosbag" type="play" name="player" output="screen"
          args=" /home/heying/ORB_SLAM2_modified/Examples/datasets/rgbd_dataset_freiburg1_xyz.bag">
        <remap from="/camera/rgb/image_color" to="/camera/rgb/image_raw" />
        <remap from="/camera/depth/image" to="/camera/depth_registered/image_raw" />
    </node>
    <!-- Start ORB-SLAM2 RGBD -->
    <node name="RGBD" pkg="ORB_SLAM2" type="RGBD"
          args="$(arg path_to_vocabulary) $(arg path_to_settings)" respawn="true" output="screen">
        <!--remap from="/camera/image_raw" to="$(arg rgb_image)"/-->
    </node>
</launch>

Once done, test it:
roslaunch slam rgbd.launch

It starts normally.

實(shí)時(shí)攝像頭
再新建一個(gè)launch文件,用于攝像頭的啟動(dòng)于建圖
touch camera.launch
gedit camera.launch
然后進(jìn)行編寫
首先是全局參數(shù),先不要打全指令,可以看到輸出了使用方法,出來的path_to_vocabulary
和path_to_settings
便是需要設(shè)置的全局參數(shù)
rosrun ORB_SLAM2 RGBD
?
<launch>
    <!-- Global arguments -->
    <arg name="path_to_vocabulary" default="/home/heying/ORB_SLAM2_modified/Vocabulary/ORBvoc.bin"/>
    <arg name="path_to_settings" default="/home/heying/ORB_SLAM2_modified/Examples/RGB-D/RealSense.yaml"/>
    <!--include file="$(find realsense2_camera)/launch/rs_camera.launch"/-->
    <!-- Start ORB-SLAM2 RGBD -->
    <node pkg="ORB_SLAM2" type="RGBD" name="RGBD_camera" args="$(arg path_to_vocabulary) $(arg path_to_settings)" respawn="true" output="screen">
        <remap from="/camera/rgb/image_raw" to="/camera/color/image_raw" />
        <remap from="/camera/depth_registered/image_raw" to="/camera/aligned_depth_to_color/image_raw" />
    </node>
</launch>

Test it.
Start the master:
roscore
Check the camera's hardware connection, then start the camera:
roslaunch realsense2_camera rs_camera.launch

Start mapping:
roslaunch slam camera.launch
It starts normally, and the result is just about perfect.
After CTRL+C, the program writes out a pcd file.

View the result:
pcl_viewer vslam.pcd

The result looks good.

With that, using the Intel D435i to build a SLAM point-cloud map, and streamlining the mapping launch commands, is complete.