RealSense D455 Semantic SLAM: From RGB-D to a Semantic Octree Map
To implement semantic SLAM on a RealSense D455 and produce a semantic octree map, you typically combine dense geometric mapping (e.g., OctoMap) with 2D semantic segmentation (e.g., Mask R-CNN, YOLO, or Fast-SCNN), fusing the two using the D455's RGB-D data (depth aligned to the color image).
📋 Table of Contents
- Architecture Overview
- Core Components and Technology Choices
- Implementation Path: Incremental Development
- Stage 1: Geometry-Only OctoMap
- Stage 2: 2D Semantic Segmentation
- Stage 3: 3D Semantic Point Cloud Generation
- Stage 4: Semantic Fusion into OctoMap
- Performance Optimization Tips
- References
Architecture Overview
RealSense D455
│
├─▶ Aligned RGB + depth image streams
│
├─▶ Pose estimation (SLAM system)
│     ├─ RTAB-Map (recommended; supports OctoMap export)
│     ├─ ORB-SLAM3 + OctoMap (custom fusion required)
│     └─ OpenVINS + external mapping
│
└─▶ Semantic segmentation (2D)
      ├─ Per-pixel semantic labels
      └─ Back-projected with the depth image → 3D semantic point cloud
            │
            └─▶ Fused into the pose-aligned OctoMap → semantic octree
Data Flow in Detail
┌─────────────────────────────────────────────────────────────────┐
│                         RealSense D455                          │
│  ┌─────────────┐    ┌─────────────┐    ┌─────────────┐          │
│  │  RGB image  │    │ Depth image │    │     IMU     │          │
│  │ 640×480@30  │    │ 640×480@30  │    │   200 Hz    │          │
│  └──────┬──────┘    └──────┬──────┘    └──────┬──────┘          │
└─────────┼─────────────────┼─────────────────┼───────────────────┘
          │                 │                 │
          ▼                 ▼                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                     ROS 2 message transport                     │
│   /camera/color     /camera/aligned_depth      /camera/imu      │
│   /image_raw        _to_color/image_raw                         │
└─────────┬─────────────────┬─────────────────┬───────────────────┘
          │                 │                 │
          ▼                 ▼                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                       Processing pipeline                       │
│  ┌──────────────┐  ┌─────────────────────────────────────┐      │
│  │   Semantic   │  │              RTAB-Map               │      │
│  │ segmentation │  │  ┌──────────┐   ┌────────────────┐  │      │
│  │   (YOLOv8)   │  │  │  Visual  │   │  Loop closure  │  │      │
│  └──────┬───────┘  │  │ odometry │   │   detection    │  │      │
│         │          │  └────┬─────┘   └────────┬───────┘  │      │
│         ▼          │       ▼                  ▼          │      │
│  ┌──────────────┐  │  ┌─────────────────────────────┐    │      │
│  │  Label map   │  │  │   Pose-graph optimization   │    │      │
│  │ (H×W labels) │  │  │        /rtabmap/odom        │    │      │
│  └──────┬───────┘  │  └──────────────┬──────────────┘    │      │
│         │          └─────────────────┼───────────────────┘      │
│         │                            │                          │
│         ▼                            ▼                          │
│  ┌────────────────────────────────────────────────────┐         │
│  │            Semantic point cloud generator          │         │
│  │  RGB + Depth + Labels + Pose → Semantic PointCloud │         │
│  └────────────────────────┬───────────────────────────┘         │
│                           │                                     │
│                           ▼                                     │
│  ┌────────────────────────────────────────────────────┐         │
│  │               Semantic OctoMap fusion              │         │
│  │            ColorOcTree / SemanticOcTree            │         │
│  └────────────────────────┬───────────────────────────┘         │
└───────────────────────────┼─────────────────────────────────────┘
                            │
                            ▼
                 ┌──────────────────────┐
                 │ Semantic octree map  │
                 │     .sot / .bt       │
                 └──────────────────────┘
Core Components and Technology Choices
1. RealSense D455 Driver and Data Streams
# Install the RealSense ROS 2 driver
sudo apt install ros-humble-realsense2-camera
# Launch the camera with depth-to-color alignment enabled
# (note: profile parameter names vary between realsense2_camera releases;
#  older releases use depth_module.profile / rgb_camera.profile)
ros2 launch realsense2_camera rs_launch.py \
    align_depth.enable:=true \
    depth_module.profile:=640x480x30 \
    rgb_camera.color_profile:=640x480x30
Key topics:
| Topic | Type | Description |
|---|---|---|
| `/camera/color/image_raw` | `sensor_msgs/Image` | RGB color image |
| `/camera/aligned_depth_to_color/image_raw` | `sensor_msgs/Image` | Depth image aligned to color |
| `/camera/color/camera_info` | `sensor_msgs/CameraInfo` | Camera intrinsics |
| `/camera/imu` | `sensor_msgs/Imu` | IMU data |
2. SLAM and Pose Estimation
RTAB-Map is strongly recommended:
| Feature | RTAB-Map | ORB-SLAM3 | OpenVINS |
|---|---|---|---|
| RGB-D support | ✅ native | ✅ supported | ❌ VIO only |
| OctoMap integration | ✅ built-in | ❌ external | ❌ external |
| Loop closure | ✅ strong | ✅ supported | ❌ none |
| ROS 2 support | ✅ official | ⚠️ community | ✅ official |
| Real-time | ✅ 30+ FPS | ✅ 30+ FPS | ✅ 100+ FPS |
# Install RTAB-Map
sudo apt install ros-humble-rtabmap-ros
# Launch RTAB-Map in RGB-D mode
ros2 launch rtabmap_launch rtabmap.launch.py \
rgb_topic:=/camera/color/image_raw \
depth_topic:=/camera/aligned_depth_to_color/image_raw \
camera_info_topic:=/camera/color/camera_info \
frame_id:=camera_link \
approx_sync:=true
3. Semantic Segmentation Model Selection
| Model | Speed | Accuracy | Use case |
|---|---|---|---|
| YOLOv8-seg | 🚀 real-time | ⭐⭐⭐ | General instance segmentation |
| Fast-SCNN | 🚀 very fast | ⭐⭐ | Embedded / edge devices |
| BiSeNetV2 | 🚀 fast | ⭐⭐⭐ | Real-time semantic segmentation |
| Mask2Former | 🐢 slow | ⭐⭐⭐⭐⭐ | High-accuracy offline processing |
| MobileSAM | 🚀 real-time | ⭐⭐⭐⭐ | Promptable, class-agnostic segmentation |
4. OctoMap Variants
| Type | Stored per voxel | Use case |
|---|---|---|
| `OcTree` | Occupancy probability | Geometry-only mapping |
| `ColorOcTree` | Occupancy + RGB color | Colored maps |
| `SemanticOcTree` | Occupancy + semantic label | Semantic mapping (custom implementation required) |
Implementation Path: Incremental Development
┌─────────────────────────────────────────────────────────────┐
│               Incremental development roadmap               │
│                                                             │
│   Stage 1       Stage 2       Stage 3       Stage 4         │
│  ┌────────┐    ┌────────┐    ┌────────┐    ┌────────┐       │
│  │Geometry│ →  │2D sem. │ →  │3D sem. │ →  │Semantic│       │
│  │OctoMap │    │segment.│    │cloud   │    │OctoMap │       │
│  └────────┘    └────────┘    └────────┘    └────────┘       │
│      │             │             │             │            │
│      ▼             ▼             ▼             ▼            │
│  Validate      Validate      Validate      Complete         │
│  SLAM and      model         fusion and    system with      │
│  mapping       performance   transforms    semantic map     │
└─────────────────────────────────────────────────────────────┘
Stage 1: Geometry-Only OctoMap
Configure RTAB-Map to Generate an OctoMap
# Launch RTAB-Map with OctoMap output enabled
ros2 launch rtabmap_launch rtabmap.launch.py \
rgb_topic:=/camera/color/image_raw \
depth_topic:=/camera/aligned_depth_to_color/image_raw \
camera_info_topic:=/camera/color/camera_info \
frame_id:=camera_link \
approx_sync:=true \
Grid/3D:=true \
Grid/FromDepth:=true
Export the OctoMap
# Save as a .bt file (binary format)
ros2 service call /rtabmap/octomap_binary rtabmap_msgs/srv/GetMap
# Or use the rtabmap-export tool
rtabmap-export --poses --scan --scan_voxel 0.01 ~/.ros/rtabmap.db
Visualization
# View the map with octovis
sudo apt install octomap-tools
octovis rtabmap_octomap.bt
Stage 2: 2D Semantic Segmentation
YOLOv8-seg Integration
#!/usr/bin/env python3
"""ROS 2 semantic segmentation node based on YOLOv8."""
import cv2
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
from ultralytics import YOLO


class SemanticSegmentationNode(Node):
    def __init__(self):
        super().__init__('semantic_segmentation')
        # Load the YOLOv8 segmentation model
        self.model = YOLO('yolov8n-seg.pt')  # or yolov8s-seg.pt for higher accuracy
        self.bridge = CvBridge()
        # Subscribe to the RGB image
        self.sub = self.create_subscription(
            Image,
            '/camera/color/image_raw',
            self.image_callback,
            10
        )
        # Publish the semantic label map
        self.pub_labels = self.create_publisher(
            Image,
            '/semantic/labels',
            10
        )
        # Publish a visualization image
        self.pub_vis = self.create_publisher(
            Image,
            '/semantic/visualization',
            10
        )
        self.get_logger().info('Semantic segmentation node started')

    def image_callback(self, msg):
        # Convert ROS Image → OpenCV
        cv_image = self.bridge.imgmsg_to_cv2(msg, 'bgr8')
        # YOLOv8 inference
        results = self.model(cv_image, verbose=False)
        if results[0].masks is not None:
            # Extract segmentation masks and class ids
            masks = results[0].masks.data.cpu().numpy()
            classes = results[0].boxes.cls.cpu().numpy().astype(int)
            # Build the semantic label map (H × W, one class id per pixel)
            h, w = cv_image.shape[:2]
            label_map = np.zeros((h, w), dtype=np.uint8)
            for mask, cls in zip(masks, classes):
                # Resize each mask to the image resolution
                mask_resized = cv2.resize(mask, (w, h)) > 0.5
                label_map[mask_resized] = cls + 1  # 0 is background
            # Publish the label map
            label_msg = self.bridge.cv2_to_imgmsg(label_map, 'mono8')
            label_msg.header = msg.header
            self.pub_labels.publish(label_msg)
            # Publish the visualization
            vis_image = results[0].plot()
            vis_msg = self.bridge.cv2_to_imgmsg(vis_image, 'bgr8')
            vis_msg.header = msg.header
            self.pub_vis.publish(vis_msg)


def main():
    rclpy.init()
    node = SemanticSegmentationNode()
    rclpy.spin(node)
    rclpy.shutdown()


if __name__ == '__main__':
    main()
Class Mapping Configuration
# semantic_classes.yaml
classes:
  0: background
  1: person
  2: bicycle
  3: car
  4: motorcycle
  5: airplane
  6: bus
  7: train
  8: truck
  # ... the 80 COCO classes

# For indoor scenes, use ADE20K or a custom class set
indoor_classes:
  0: background
  1: wall
  2: floor
  3: ceiling
  4: window
  5: door
  6: table
  7: chair
  8: sofa
  9: bed
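To sanity-check the format, the `id: name` pairs above can be turned into a lookup table. A minimal dependency-free sketch (in practice you would load the file with PyYAML; `parse_class_map` is a hypothetical helper name):

```python
def parse_class_map(text):
    """Parse simple `id: name` lines (a subset of the YAML above) into a dict."""
    mapping = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()   # drop comments
        if not line or line.endswith(':'):
            continue                           # skip blanks and section headers
        key, _, value = line.partition(':')
        if key.strip().isdigit():
            mapping[int(key)] = value.strip()
    return mapping

sample = """
classes:
  0: background
  1: person
  2: bicycle
"""
print(parse_class_map(sample))  # {0: 'background', 1: 'person', 2: 'bicycle'}
```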
Stage 3: 3D Semantic Point Cloud Generation
Depth Back-Projection
Pixel coordinates (u, v) + depth D → 3D point (X, Y, Z)
X = (u - cx) * D / fx
Y = (v - cy) * D / fy
Z = D
where fx, fy, cx, cy come from the camera intrinsics
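A quick numeric check of the formulas above (the intrinsics here are made-up values, roughly typical for a 640×480 color stream):

```python
# Assumed intrinsics: fx = fy = 380, principal point at the image center.
fx, fy, cx, cy = 380.0, 380.0, 320.0, 240.0

def backproject(u, v, depth_m):
    """Back-project one pixel (u, v) with metric depth into camera coordinates."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

# A pixel 190 px right of the principal point at 2 m depth lands
# (190 / 380) * 2 = 1 m to the right in camera coordinates:
print(backproject(510, 240, 2.0))  # (1.0, 0.0, 2.0)
```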
Semantic Point Cloud Generation Node
#!/usr/bin/env python3
"""3D semantic point cloud generation node."""
import struct

import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, CameraInfo, PointCloud2, PointField
from message_filters import Subscriber, ApproximateTimeSynchronizer
from cv_bridge import CvBridge


class SemanticPointCloudNode(Node):
    def __init__(self):
        super().__init__('semantic_pointcloud')
        self.bridge = CvBridge()
        self.camera_info = None
        # Subscribe to the camera intrinsics
        self.sub_info = self.create_subscription(
            CameraInfo,
            '/camera/color/camera_info',
            self.camera_info_callback,
            10
        )
        # Synchronized subscriptions: RGB, depth, semantic labels
        self.sub_rgb = Subscriber(self, Image, '/camera/color/image_raw')
        self.sub_depth = Subscriber(self, Image, '/camera/aligned_depth_to_color/image_raw')
        self.sub_labels = Subscriber(self, Image, '/semantic/labels')
        self.sync = ApproximateTimeSynchronizer(
            [self.sub_rgb, self.sub_depth, self.sub_labels],
            queue_size=10,
            slop=0.1
        )
        self.sync.registerCallback(self.sync_callback)
        # Publish the semantic point cloud
        self.pub_cloud = self.create_publisher(
            PointCloud2,
            '/semantic/pointcloud',
            10
        )
        self.get_logger().info('Semantic pointcloud node started')

    def camera_info_callback(self, msg):
        self.camera_info = msg

    def sync_callback(self, rgb_msg, depth_msg, label_msg):
        if self.camera_info is None:
            return
        # Convert images
        rgb = self.bridge.imgmsg_to_cv2(rgb_msg, 'rgb8')
        depth = self.bridge.imgmsg_to_cv2(depth_msg, 'passthrough')
        labels = self.bridge.imgmsg_to_cv2(label_msg, 'mono8')
        # Camera intrinsics
        K = np.array(self.camera_info.k).reshape(3, 3)
        fx, fy = K[0, 0], K[1, 1]
        cx, cy = K[0, 2], K[1, 2]
        # Pixel grid
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        # Depth unit conversion (RealSense depth is in millimeters by default)
        z = depth.astype(np.float32) / 1000.0
        # Filter out invalid depths
        valid = (z > 0.1) & (z < 10.0)
        # Back-projection
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        # Extract valid points
        points = np.stack([x[valid], y[valid], z[valid]], axis=-1)
        colors = rgb[valid]
        semantic_labels = labels[valid]
        # Build and publish the PointCloud2 message
        cloud_msg = self.create_semantic_cloud(
            points, colors, semantic_labels, rgb_msg.header
        )
        self.pub_cloud.publish(cloud_msg)

    def create_semantic_cloud(self, points, colors, labels, header):
        """Build a PointCloud2 carrying a per-point semantic label."""
        fields = [
            PointField(name='x', offset=0, datatype=PointField.FLOAT32, count=1),
            PointField(name='y', offset=4, datatype=PointField.FLOAT32, count=1),
            PointField(name='z', offset=8, datatype=PointField.FLOAT32, count=1),
            PointField(name='rgb', offset=12, datatype=PointField.UINT32, count=1),
            PointField(name='label', offset=16, datatype=PointField.UINT8, count=1),
        ]
        # Pack the data (explicit little-endian to match is_bigendian=False)
        cloud_data = []
        for i in range(len(points)):
            x, y, z = points[i]
            r, g, b = colors[i]
            rgb_packed = (int(r) << 16) | (int(g) << 8) | int(b)
            label = labels[i]
            cloud_data.append(struct.pack('<fffIB', x, y, z, rgb_packed, label))
        msg = PointCloud2()
        msg.header = header
        msg.height = 1
        msg.width = len(points)
        msg.fields = fields
        msg.is_bigendian = False
        msg.point_step = 17  # 4 + 4 + 4 + 4 + 1
        msg.row_step = msg.point_step * msg.width
        msg.data = b''.join(cloud_data)
        msg.is_dense = True
        return msg


def main():
    rclpy.init()
    node = SemanticPointCloudNode()
    rclpy.spin(node)
    rclpy.shutdown()


if __name__ == '__main__':
    main()
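The per-point `struct.pack` loop in `create_semantic_cloud` is easy to read but slow for large clouds. The same 17-byte layout can be packed in one shot with a NumPy structured dtype; a sketch (`POINT_DTYPE` and `pack_cloud` are illustrative names, not ROS APIs):

```python
import numpy as np

# Structured dtype mirroring the PointField layout above:
# x, y, z float32 at offsets 0/4/8, packed rgb uint32 at 12, label uint8 at 16.
POINT_DTYPE = np.dtype({
    'names':    ['x', 'y', 'z', 'rgb', 'label'],
    'formats':  [np.float32, np.float32, np.float32, np.uint32, np.uint8],
    'offsets':  [0, 4, 8, 12, 16],
    'itemsize': 17,
})

def pack_cloud(points, colors, labels):
    """points (N,3) float, colors (N,3) uint8 RGB, labels (N,) uint8 → bytes."""
    n = len(points)
    cloud = np.zeros(n, dtype=POINT_DTYPE)
    cloud['x'], cloud['y'], cloud['z'] = points[:, 0], points[:, 1], points[:, 2]
    c = colors.astype(np.uint32)
    cloud['rgb'] = (c[:, 0] << 16) | (c[:, 1] << 8) | c[:, 2]
    cloud['label'] = labels
    return cloud.tobytes()

demo = pack_cloud(np.array([[1.0, 2.0, 3.0]], dtype=np.float32),
                  np.array([[255, 0, 0]], dtype=np.uint8),
                  np.array([1], dtype=np.uint8))
print(len(demo))  # 17 (bytes per point, matching point_step)
```

The resulting bytes can be assigned directly to `msg.data`, replacing the Python loop.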
Stage 4: Semantic Fusion into OctoMap
Option A: ColorOcTree (simple)
// semantic_octomap_node.cpp
#include <map>
#include <memory>

#include <octomap/ColorOcTree.h>
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>
#include <tf2_ros/buffer.h>
#include <tf2_ros/transform_listener.h>

class SemanticOctomapNode : public rclcpp::Node {
public:
  SemanticOctomapNode() : Node("semantic_octomap") {
    // Initialize the ColorOcTree (5 cm resolution)
    tree_ = std::make_shared<octomap::ColorOcTree>(0.05);
    // Subscribe to the semantic point cloud
    sub_ = this->create_subscription<sensor_msgs::msg::PointCloud2>(
        "/semantic/pointcloud", 10,
        std::bind(&SemanticOctomapNode::cloud_callback, this, std::placeholders::_1));
    // TF listener
    tf_buffer_ = std::make_shared<tf2_ros::Buffer>(this->get_clock());
    tf_listener_ = std::make_shared<tf2_ros::TransformListener>(*tf_buffer_);
  }

private:
  void cloud_callback(const sensor_msgs::msg::PointCloud2::SharedPtr msg) {
    // Look up the camera-to-world transform
    geometry_msgs::msg::TransformStamped transform;
    try {
      transform = tf_buffer_->lookupTransform(
          "map", msg->header.frame_id, msg->header.stamp);
    } catch (tf2::TransformException& ex) {
      RCLCPP_WARN(this->get_logger(), "TF error: %s", ex.what());
      return;
    }
    // Parse the point cloud and insert it into the OcTree
    // (`points` and the transform_point() helper are sketched here;
    //  the parsing code is elided)
    // ...
    for (const auto& point : points) {
      // Transform into the world frame
      octomap::point3d p_world = transform_point(point, transform);
      // Update occupancy
      tree_->updateNode(p_world, true);
      // Set the voxel color (semantic label mapped to a color)
      octomap::ColorOcTreeNode* node = tree_->search(p_world);
      if (node) {
        node->setColor(semantic_to_color(point.label));
      }
    }
  }

  octomap::ColorOcTreeNode::Color semantic_to_color(uint8_t label) {
    // Semantic class → color mapping
    static const std::map<uint8_t, octomap::ColorOcTreeNode::Color> colormap = {
        {0, {128, 128, 128}},  // background - gray
        {1, {255, 0, 0}},      // person - red
        {2, {0, 255, 0}},      // bicycle - green
        {3, {0, 0, 255}},      // car - blue
        // ...
    };
    auto it = colormap.find(label);
    if (it != colormap.end()) {
      return it->second;
    }
    return {255, 255, 255};  // unknown class - white
  }

  std::shared_ptr<octomap::ColorOcTree> tree_;
  rclcpp::Subscription<sensor_msgs::msg::PointCloud2>::SharedPtr sub_;
  std::shared_ptr<tf2_ros::Buffer> tf_buffer_;
  std::shared_ptr<tf2_ros::TransformListener> tf_listener_;
};
Option B: Custom SemanticOcTree (recommended)
// semantic_octree.h
#include <algorithm>
#include <array>
#include <cstdint>
#include <string>

#include <octomap/OcTreeNode.h>
#include <octomap/OccupancyOcTreeBase.h>

class SemanticOcTreeNode : public octomap::OcTreeNode {
public:
  // Semantic label (class id)
  uint8_t semantic_label = 0;
  // Per-class observation counts (used for majority voting)
  std::array<uint16_t, 32> label_counts = {0};

  void updateSemanticLabel(uint8_t observed_label) {
    label_counts[observed_label]++;
    // Pick the class with the highest vote count
    semantic_label = std::max_element(
        label_counts.begin(), label_counts.end()) - label_counts.begin();
  }

  // Serialization support
  std::istream& readData(std::istream& s);
  std::ostream& writeData(std::ostream& s) const;
};

class SemanticOcTree : public octomap::OccupancyOcTreeBase<SemanticOcTreeNode> {
public:
  SemanticOcTree(double resolution)
      : OccupancyOcTreeBase<SemanticOcTreeNode>(resolution) {
    // Registers the tree type with octomap's factory
    // (the StaticMemberInitializer declaration is omitted here)
    semanticOcTreeMemberInit.ensureLinking();
  }

  void insertSemanticPoint(const octomap::point3d& p, uint8_t label) {
    SemanticOcTreeNode* node = this->updateNode(p, true);
    if (node) {
      node->updateSemanticLabel(label);
    }
  }

  // Export to a .sot file
  bool writeBinarySemanticTree(const std::string& filename);
};
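The voting rule in `updateSemanticLabel` can be modeled in a few lines of Python to check its semantics (`VoteNode` is a hypothetical stand-in; as with `std::max_element`, ties resolve to the lowest class id):

```python
class VoteNode:
    """Minimal Python model of SemanticOcTreeNode's per-class vote counter."""

    def __init__(self, num_classes=32):
        self.counts = [0] * num_classes
        self.label = 0

    def update(self, observed_label):
        self.counts[observed_label] += 1
        # Argmax over counts; on ties the lowest class id wins,
        # matching std::max_element's first-occurrence behavior.
        self.label = max(range(len(self.counts)), key=self.counts.__getitem__)

node = VoteNode()
for obs in (3, 3, 5):
    node.update(obs)
print(node.label)  # 3 (two votes for class 3 beat one for class 5)
```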
Option C: Python + pyoctomap (rapid prototyping)
#!/usr/bin/env python3
"""Python semantic OctoMap prototype (using pyoctomap)."""
# pip install pyoctomap
import octomap
import numpy as np


class SemanticOctoMap:
    def __init__(self, resolution=0.05):
        self.tree = octomap.OcTree(resolution)
        self.resolution = resolution
        # Semantic label storage (voxel key → per-class vote counts)
        self.semantic_votes = {}

    def insert_semantic_cloud(self, points, labels, origin):
        """Insert a semantic point cloud.

        Args:
            points: (N, 3) points in the world frame
            labels: (N,) semantic labels
            origin: (3,) sensor origin (used for ray casting)
        """
        # Update occupancy
        pointcloud = octomap.Pointcloud()
        for p in points:
            pointcloud.push_back(p[0], p[1], p[2])
        self.tree.insertPointCloud(
            pointcloud,
            octomap.point3d(origin[0], origin[1], origin[2])
        )
        # Update the semantic votes
        for point, label in zip(points, labels):
            key = self._point_to_key(point)
            if key not in self.semantic_votes:
                self.semantic_votes[key] = np.zeros(32, dtype=np.int32)
            self.semantic_votes[key][label] += 1

    def _point_to_key(self, point):
        """Convert a point to its voxel key."""
        return tuple((point / self.resolution).astype(int))

    def get_semantic_label(self, point):
        """Return the majority semantic label at the given position."""
        key = self._point_to_key(point)
        if key in self.semantic_votes:
            return np.argmax(self.semantic_votes[key])
        return 0  # background

    def save(self, filename):
        """Save the octree (geometry) plus the semantic votes."""
        self.tree.writeBinary(filename)
        # Persist the semantic labels alongside the .bt file
        np.savez(
            filename.replace('.bt', '_semantic.npz'),
            keys=list(self.semantic_votes.keys()),
            votes=list(self.semantic_votes.values())
        )

    def export_colored_ply(self, filename):
        """Export a PLY point cloud colored by semantic class."""
        # Semantic color map
        colormap = {
            0: [128, 128, 128],  # background
            1: [255, 0, 0],      # person
            2: [0, 255, 0],      # car
            # ...
        }
        points = []
        colors = []
        for leaf in self.tree.begin_leafs():
            if self.tree.isNodeOccupied(leaf):
                coord = leaf.getCoordinate()
                point = np.array([coord.x(), coord.y(), coord.z()])
                points.append(point)
                label = self.get_semantic_label(point)
                colors.append(colormap.get(label, [255, 255, 255]))
        # Write the PLY file
        with open(filename, 'w') as f:
            f.write("ply\n")
            f.write("format ascii 1.0\n")
            f.write(f"element vertex {len(points)}\n")
            f.write("property float x\n")
            f.write("property float y\n")
            f.write("property float z\n")
            f.write("property uchar red\n")
            f.write("property uchar green\n")
            f.write("property uchar blue\n")
            f.write("end_header\n")
            for p, c in zip(points, colors):
                f.write(f"{p[0]} {p[1]} {p[2]} {c[0]} {c[1]} {c[2]}\n")
Performance Optimization Tips
GPU-Accelerated Semantic Segmentation
# Mount the NVIDIA GPU into Docker/Podman
podman run --gpus all \
--device /dev/dri \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=$DISPLAY \
your_semantic_slam_image
Time Synchronization
# Use message_filters for synchronization
from message_filters import ApproximateTimeSynchronizer, TimeSynchronizer

# Approximate sync (tolerates timestamp offsets)
sync = ApproximateTimeSynchronizer(
    [sub_rgb, sub_depth, sub_labels],
    queue_size=10,
    slop=0.05  # 50 ms tolerance
)
# Exact sync (requires hardware-synchronized sensors)
sync = TimeSynchronizer(
    [sub_rgb, sub_depth, sub_labels],
    queue_size=10
)
Incremental Update Strategy
import numpy as np

class IncrementalSemanticMap:
    def __init__(self):
        self.last_pose = None
        self.update_threshold_trans = 0.1  # 10 cm
        self.update_threshold_rot = 0.1    # ~6°

    def should_update(self, current_pose):
        """Decide whether the map needs an update for this pose."""
        if self.last_pose is None:
            return True
        # Translation difference
        trans_diff = np.linalg.norm(
            current_pose[:3, 3] - self.last_pose[:3, 3]
        )
        # Rotation difference (angle of the relative rotation)
        R_diff = current_pose[:3, :3] @ self.last_pose[:3, :3].T
        rot_diff = np.arccos(np.clip((np.trace(R_diff) - 1) / 2, -1, 1))
        return trans_diff > self.update_threshold_trans or \
               rot_diff > self.update_threshold_rot
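The pose test above is easy to verify offline. A standalone sketch of the same check, as a free function with the same assumed thresholds (0.1 m, 0.1 rad):

```python
import numpy as np

def should_update(last_pose, current_pose, trans_thresh=0.1, rot_thresh=0.1):
    """Return True when the pose moved enough to warrant a map update."""
    if last_pose is None:
        return True
    trans_diff = np.linalg.norm(current_pose[:3, 3] - last_pose[:3, 3])
    # Angle of the relative rotation between the two poses
    R_diff = current_pose[:3, :3] @ last_pose[:3, :3].T
    rot_diff = np.arccos(np.clip((np.trace(R_diff) - 1) / 2, -1.0, 1.0))
    return bool(trans_diff > trans_thresh or rot_diff > rot_thresh)

I = np.eye(4)
moved = np.eye(4); moved[0, 3] = 0.2          # 20 cm translation
yawed = np.eye(4)
theta = np.deg2rad(10)                        # 10° yaw (> ~5.7° threshold)
yawed[:2, :2] = [[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]]

print(should_update(I, I), should_update(I, moved), should_update(I, yawed))
# False True True
```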
Voxel Downsampling
import numpy as np
import open3d as o3d

def voxel_downsample(points, labels, voxel_size=0.02):
    """Voxel downsampling that keeps the dominant semantic label per voxel."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    # Downsample the geometry
    pcd_down = pcd.voxel_down_sample(voxel_size)
    # Aggregate the dominant label for each voxel
    # ... label aggregation logic (produces downsampled_labels) ...
    return np.asarray(pcd_down.points), downsampled_labels
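The elided label-aggregation step can be sketched without Open3D by quantizing points onto the voxel grid and taking a majority vote per voxel (`voxel_majority_labels` is an illustrative helper, not an Open3D API):

```python
import numpy as np
from collections import Counter, defaultdict

def voxel_majority_labels(points, labels, voxel_size=0.02):
    """Quantize points onto a voxel grid and keep the majority label per voxel.

    Returns voxel-center coordinates (M, 3) and their labels (M,).
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    votes = defaultdict(Counter)
    for key, label in zip(map(tuple, keys), labels):
        votes[key][int(label)] += 1
    centers = np.array([(np.array(k) + 0.5) * voxel_size for k in votes])
    majority = np.array([c.most_common(1)[0][0] for c in votes.values()])
    return centers, majority

pts = np.array([[0.001, 0.0, 0.0],   # voxel (0, 0, 0)
                [0.015, 0.0, 0.0],   # same voxel
                [0.050, 0.0, 0.0]])  # voxel (2, 0, 0)
lbl = np.array([1, 1, 2])
centers, maj = voxel_majority_labels(pts, lbl)
print(len(centers), sorted(maj.tolist()))  # 2 [1, 2]
```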
References
Official Docs and Tutorials
- RTAB-Map semantic SLAM example:
  https://github.com/introlab/rtabmap_ros/blob/ros2-devel/launch/semantic_mapping.launch.py
- RealSense + RTAB-Map tutorial:
  https://github.com/IntelRealSense/realsense-ros/wiki/SLAM-with-RTAB-Map
- OctoMap official examples (including ColorOcTree):
  https://github.com/OctoMap/octomap/tree/devel/octomap/src
Open-Source Projects
- semantic_octree (ETH Zurich):
  https://github.com/ethz-asl/semantic_octree
- Kimera-Semantics (MIT):
  https://github.com/MIT-SPARK/Kimera-Semantics
Papers
- Semantic Octree: Dense Semantic Mapping for 3D Environments
  https://ieeexplore.ieee.org/document/8768479
- Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping
  https://arxiv.org/abs/1910.02490
🔗 Related Links
- [[ROS2 Humble + OpenVINS 源码编译]]
- [[RealSense D455 标定替代方案]]
- [[RTAB-Map 配置详解]]
- [[YOLOv8 语义分割教程]]
🔄 Changelog
- 2025-12-22: Initial version covering the full semantic SLAM pipeline
Last updated: 2025-12-22
Maintainer: Jesse