{"id":104,"date":"2018-12-19T15:18:42","date_gmt":"2018-12-19T23:18:42","guid":{"rendered":"http:\/\/jasonfaas.net\/?p=104"},"modified":"2018-12-19T15:36:53","modified_gmt":"2018-12-19T23:36:53","slug":"lidar-slam-v1-no-fov","status":"publish","type":"post","link":"https:\/\/jasonfaas.net\/?p=104","title":{"rendered":"LiDAR SLAM V1 &#8211; No FOV"},"content":{"rendered":"\n<p>While reading about image processing challenges, something I kept reading about was SLAM: Simultaneous Location and Mapping. The goal is to &#8220;map of an unknown environment while simultaneously keeping track of an agent&#8217;s location within it,&#8221; from <a href=\"https:\/\/en.wikipedia.org\/wiki\/Simultaneous_localization_and_mapping\">Wikipedia<\/a>. These projects are regularly done with a depth sensing camera, so I purchased a Kinect for Xbox One, and loaded up the SDK.<br><br>After reading the SDK and setting up CMAKE, I recorded a video of my apartment while pushing around the Kinect V2. I recorded the color and depth video to then generate a map of my apartment.<br><\/p>\n\n\n\n<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<div class=\"video-embed-container\"><iframe loading=\"lazy\" title=\"SLAM Apartment V1\" width=\"588\" height=\"331\" src=\"https:\/\/www.youtube.com\/embed\/PjNg8KsHBoc?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/div>\n<\/div><\/figure>\n\n\n\n<p>My project reads the depth video 1 frame at a time. All features are logged and looked for in the next frame. If a feature is determined to be the same as the previous frame, then the feature is not updated on the map. If a feature appears to be new, then it is drawn on the map based on triangulation of features that are in the current and previous frame. To be more specific, a feature has a minimum size and features near the edge are not carried over multiple frames due to not being able to see the edge, which would provide inaccurate triangulation.<br><\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/jasonfaas.net\/wp-content\/uploads\/2018\/12\/Webp.net-gifmaker.gif\" alt=\"\"\/><\/figure>\n\n\n\n<p>There are limitations to this style of SLAM. An obvious limitation is that features are not allowed to be &#8216;behind&#8217; other features in a single frame. For example a wall behind a desk leg. This is intentional to keep version 1 simple by limiting each column to have only 1 pixel. This causes an array of issues, but still allows for a good version 1 in tracking features across frames. Lack of FOV and other issues will be resolved in v2.<\/p>\n\n\n\n<p>My code is at my <a href=\"https:\/\/github.com\/JasonFaas\/lidar-slam-dunk\">GitHub<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>While reading about image processing challenges, something I kept reading about was SLAM: Simultaneous Location and Mapping. 
There are limitations to this style of SLAM. An obvious one is that features cannot be 'behind' other features within a single frame, for example a wall behind a desk leg. This is intentional: to keep version 1 simple, each column of the depth frame is limited to a single pixel (see the sketch at the end of this post). This causes an array of issues, but it still allows for a good version 1 of tracking features across frames. The lack of field-of-view handling and other issues will be resolved in v2.

My code is at my [GitHub](https://github.com/JasonFaas/lidar-slam-dunk).
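For concreteness, here is a rough sketch of what that one-pixel-per-column collapse might look like. The post does not say which pixel v1 keeps, so taking the nearest valid return per column is my own assumption, as is the helper name `collapse_columns`; zero as the "no reading" value matches the Kinect depth format.

```python
# Hypothetical sketch of the v1 simplification: each depth-frame column
# keeps only one sample, so anything 'behind' the nearest surface in that
# column (e.g. a wall behind a desk leg) never reaches the mapper.
import numpy as np

def collapse_columns(depth_frame):
    """Reduce an HxW depth image to a single W-length scan line."""
    d = depth_frame.astype(float)
    d[d == 0] = np.inf           # 0 = no reading on the Kinect; mask it out
    scan = d.min(axis=0)         # keep the nearest return per column (assumed)
    scan[np.isinf(scan)] = 0     # restore the no-reading marker
    return scan

# A wall behind a desk leg: the leg's column reports only the leg's depth.
frame = np.full((4, 6), 3000, dtype=np.uint16)   # wall at 3.0 m everywhere
frame[1:3, 2] = 800                              # desk leg at 0.8 m, column 2
print(collapse_columns(frame))                   # column 2 -> 800.0; wall hidden
```

Feeding scans like these into the tracking loop above is exactly what loses the occluded wall, which is the limitation v2 is meant to address.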