LiDAR SLAM V1 – No FOV

While reading about image processing challenges, something I kept coming across was SLAM: Simultaneous Localization and Mapping. The goal, per Wikipedia, is to construct a “map of an unknown environment while simultaneously keeping track of an agent’s location within it.” These projects are regularly done with a depth-sensing camera, so I purchased a Kinect for Xbox One and loaded up the SDK.

After reading through the SDK and setting up CMake, I recorded color and depth video of my apartment while pushing the Kinect V2 around, with the goal of generating a map of the apartment from that footage.

My project reads the depth video one frame at a time. All features are logged and then looked for in the next frame. If a feature is determined to be the same as one from the previous frame, it is not updated on the map. If a feature appears to be new, it is drawn on the map based on triangulation from features present in both the current and previous frames. To be more specific, a feature has a minimum size, and features near the edge of the frame are not carried across frames, since a feature cut off by the edge would produce inaccurate triangulation.
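Roughly, the per-frame logic looks like the sketch below. This is an illustration rather than the exact code in the repo: ORB stands in for the feature detector, and the size and edge thresholds and the Kinect V2 intrinsics are assumed values.

```python
import cv2
import numpy as np

MIN_FEATURE_SIZE = 8   # assumed minimum feature size (pixels)
EDGE_MARGIN = 16       # assumed margin; edge features are dropped

def back_project(kp, depth_frame, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    # Turn pixel + depth into a 3D point using assumed Kinect V2 intrinsics.
    u, v = kp.pt
    z = float(depth_frame[int(v), int(u)])
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)

def track_frame(depth_frame, prev_descs, world_map):
    # Normalize depth to 8-bit so a standard detector can run on it.
    img = cv2.normalize(depth_frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    kps, descs = cv2.ORB_create().detectAndCompute(img, None)
    if descs is None:
        return None

    # Drop small features and features near the frame edge, since a cut-off
    # feature would triangulate inaccurately.
    h, w = img.shape
    keep = [i for i, kp in enumerate(kps)
            if kp.size >= MIN_FEATURE_SIZE
            and EDGE_MARGIN < kp.pt[0] < w - EDGE_MARGIN
            and EDGE_MARGIN < kp.pt[1] < h - EDGE_MARGIN]
    kps, descs = [kps[i] for i in keep], descs[keep]

    matched = set()
    if prev_descs is not None and len(prev_descs) and len(descs):
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matched = {m.trainIdx for m in matcher.match(prev_descs, descs)}

    # Features matched to the previous frame are left alone on the map;
    # only genuinely new features get added.
    for i, kp in enumerate(kps):
        if i not in matched:
            world_map.append(back_project(kp, depth_frame))
    return descs
```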

There are limitations to this style of SLAM. An obvious one is that features are not allowed to be ‘behind’ other features in a single frame, for example a wall behind a desk leg. This is intentional, keeping version 1 simple by limiting each column to a single pixel. That causes an array of issues, but still allows for a good version 1 at tracking features across frames. The lack of FOV and other issues will be resolved in v2.

My code is on my GitHub.

Identify Canine Coccidiosis with Deep Learning

I was looking for another image dataset on Kaggle to continue improving my Deep Learning knowledge and found images of Canine Coccidiosis.

The data included pictures of Canine Coccidiosis and labels highlighting the locations of the parasites. I found this dataset noticeably different from the satellite images I previously reviewed, primarily due to the lack of color differences between images. There are some tint differences, but those are limited compared to the cloudiness that appeared in the satellite images.

My strategy for identifying positive samples was to isolate 4 close-up images of each Coccidiosis example from different centered offsets (Cyan Boxes). For negative samples, I ran an adaptive threshold on each image (Left Image) to highlight areas that appeared similar to Coccidiosis (Purple Boxes).
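A rough sketch of both samplers, using OpenCV for the thresholding. The patch size, offsets, and threshold parameters here are illustrative stand-ins, not the exact values I used.

```python
import cv2
import numpy as np

PATCH = 64  # assumed training patch size

def positive_crops(img, cx, cy):
    # Four crops of the same parasite, each centered at a different offset.
    for dx, dy in [(-8, -8), (8, -8), (-8, 8), (8, 8)]:
        x, y = int(cx + dx - PATCH // 2), int(cy + dy - PATCH // 2)
        crop = img[max(y, 0):y + PATCH, max(x, 0):x + PATCH]
        if crop.shape[:2] == (PATCH, PATCH):
            yield crop

def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def negative_candidates(img, positive_boxes):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Adaptive threshold highlights dark round blobs that resemble the parasite.
    mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY_INV, 51, 5)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Keep roughly parasite-sized blobs that don't touch a labeled positive.
        if PATCH // 4 < w < PATCH and PATCH // 4 < h < PATCH \
                and not any(overlaps((x, y, w, h), b) for b in positive_boxes):
            crop = img[y:y + PATCH, x:x + PATCH]
            if crop.shape[:2] == (PATCH, PATCH):
                yield crop
```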

I fit the samples with a Neural Network in TensorFlow, then ran a sliding window over the test images, with a step size of half the window. My positive results (Green Box, left image) are somewhat more refined than the labeled data, since the labels are rectangles around a round protozoan. Bad guesses are represented as Red Boxes in the left image and Yellow in the right image. Red boxes in the right image are non-guesses, though they typically overlap with positive guesses. My code on GitHub is here.
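The scanning pass is just a stride-half sliding window over each test image. The sketch below assumes a Keras binary classifier and a 0.5 cutoff, both of which are placeholders.

```python
import numpy as np

WINDOW = 64         # assumed window size
STEP = WINDOW // 2  # step size is half the window, as described above

def scan(image, model):
    # model: a trained tf.keras binary classifier over WINDOW x WINDOW patches.
    boxes, patches = [], []
    h, w = image.shape[:2]
    for y in range(0, h - WINDOW + 1, STEP):
        for x in range(0, w - WINDOW + 1, STEP):
            boxes.append((x, y, WINDOW, WINDOW))
            patches.append(image[y:y + WINDOW, x:x + WINDOW])
    preds = model.predict(np.array(patches) / 255.0, verbose=0)
    # Windows scored positive become the green boxes in the figures above.
    return [b for b, p in zip(boxes, preds) if p[0] > 0.5]
```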

Ships on Ocean Detection – 1st Attempt

Kaggle was having an Image Challenge and I decided to try my hand at it. The challenge is to identify ships on the ocean from satellite images.

My first attempt involved blurring the training images to filter out large parts of the ocean, then logging pieces of each image for kNN usage, noting whether each piece contained part of a ship. To reduce the training set size, the initial image piece was larger than what was reasonable for reporting back as true or false; my strategy was to then take a smaller piece of any identified image piece and log that part as well.

This first attempt turned out well, and I plan to make another attempt or two. My code is on GitHub.

Steps are listed below, followed by a rough sketch of the training pass.

  1. Original Image with Training Data
  2. Blur and filter image
  3. Identify larger pieces (32×32 pixels) based on training data. (Thinner lines are False, thicker lines are True)
  4. Identify smaller pieces (4×4 pixels) based on training data. (Thinner lines are False, thicker lines are True)
  5. Apply data to kNN
  6. Review test image
  7. Get result of larger images
  8. Get result of smaller images
  9. Generate data to submit
  10. Comparison of Prediction and Test data
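As a sketch of steps 2–5 (blur, cut into pieces, label against the training data, fit a kNN), assuming scikit-learn for the classifier; the blur kernel and k are placeholders. Running the same routine again with size=4 over the pieces flagged True gives the finer pass in step 4.

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def patches(img, size):
    # Cut the image into non-overlapping size x size pieces.
    for y in range(0, img.shape[0] - size + 1, size):
        for x in range(0, img.shape[1] - size + 1, size):
            yield (x, y), img[y:y + size, x:x + size]

def fit_knn(image, ship_mask, size=32, k=5):
    blurred = cv2.GaussianBlur(image, (5, 5), 0)  # step 2: smooth ocean texture
    X, y = [], []
    for (px, py), patch in patches(blurred, size):
        X.append(patch.reshape(-1))
        # A piece is True if any labeled ship pixel falls inside it (step 3).
        y.append(bool(ship_mask[py:py + size, px:px + size].any()))
    knn = KNeighborsClassifier(n_neighbors=k)  # step 5
    knn.fit(np.array(X), np.array(y))
    return knn
```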

Starting back with OpenCV

It has been a few years since my last OpenCV video…way too long. Eventual goals include thorough object detection, stereo camera usage, and much more.

The first video back is a ‘3D cube’ traced onto a checkerboard, with hotkeys for cube size. The height of the cube is jittery, especially when the board is not facing the camera, but overall it is good for a first video. The frame rate needs some work, especially when no checkerboard is detected.
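The overlay follows the standard OpenCV pose-estimation recipe: find the board corners, solve for the pose, and project the cube’s corners back into the frame. The board dimensions, camera matrix K, distortion coefficients, and edge length below are assumptions; the hotkeys would just rescale the cube array.

```python
import cv2
import numpy as np

BOARD = (9, 6)  # assumed inner-corner count of the checkerboard
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)
cube = 2.0 * np.float32([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                         [0, 0, -1], [1, 0, -1], [1, 1, -1], [0, 1, -1]])
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 5), (5, 6),
         (6, 7), (7, 4), (0, 4), (1, 5), (2, 6), (3, 7)]

def draw_cube(frame, K, dist):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if not found:
        return frame  # no board, nothing to draw (the slow path mentioned above)
    _, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    pts, _ = cv2.projectPoints(cube, rvec, tvec, K, dist)
    pts = pts.reshape(-1, 2).astype(int)
    for i, j in EDGES:
        cv2.line(frame, tuple(pts[i]), tuple(pts[j]), (0, 255, 0), 2)
    return frame
```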

My code is on GitHub.

Starting again with AWS

Starting on side projects again, so I am going to document them here at my easy-to-use ‘new’ blog. Thanks, WordPress and other blogging tools.

I’m going to go through a variety of AWS tools, starting each with a Hello World test and then a more realistic test, maybe a stress test. After a few tools are logged, I’ll try to use them in conjunction and eventually get to a more real project. Probably something with sports analytics, but I’ll see how I’m feeling after working with a few of the tools.

Two of the tools I’ve worked with are EC2 and Elastic Beanstalk, the two ‘compute’ tools. I used EC2 a while ago, and it is definitely aimed at more hardcore ops people. I got it up and running, but Elastic Beanstalk has been easier after working with it a bit. Below is my first Hello World project. It is very simple code, basically copied from a REST client and server example.

https://github.com/JasonFaas/AWS_ElasticBeanstalk_HelloWorld

  1. Run client test (Failing Test)
  2. Generate a WAR with Maven from the code
  3. Create an Elastic Beanstalk server with Tomcat
  4. Upload the WAR
  5. Modify Client test with server address
  6. Run client test (Passing Test)
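For reference, the client test in steps 1 and 6 amounts to hitting the deployed endpoint and asserting on the response. The sketch below uses Python’s standard library; the environment URL and /hello path are placeholders for whatever the Elastic Beanstalk app actually exposes.

```python
import urllib.request

SERVER = "http://my-env.us-east-1.elasticbeanstalk.com"  # hypothetical address

def test_hello_world():
    # Fails before the WAR is uploaded (step 1), passes afterward (step 6).
    with urllib.request.urlopen(SERVER + "/hello") as resp:
        assert resp.status == 200
        assert "Hello" in resp.read().decode()  # assumed response body

if __name__ == "__main__":
    test_hello_world()
    print("Client test passed")
```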