:: AForge.NET Framework :: Articles :: Forums ::

Utilizing HoughLine output for Machine learning



Postby Skyehawk » Thu Dec 01, 2016 1:28 am

Hello All,
I am a university student working on 3D printing quality analysis, using AForge.NET for most vision-related tasks. This is an extensive project covering many different areas; however, my questions are with regard to HoughLine transformations and their ability to accurately jump gaps and other 'anomalies'. I want to use this output as an accurate way to train an A.N.N. to recognize general errors encountered in the consumer 3D printer environment.

The Project:
Let's work with a generalized case to aid understanding. The first attachment shows the inputs to the system. These images are then cleaned up, filtered, and compared to create an image like the second attachment. Blob detection and extraction are then run. Edge detection (Canny) and skeletonization are run on the individual blobs before they are passed to the HoughLine transformation, as seen in the third attachment.
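For readers wondering why a Hough transform can bridge gaps at all: every edge pixel votes for every (rho, theta) line it could lie on, so the broken segments of one dashed line all accumulate in the same cell. A minimal pure-Python toy accumulator illustrates this (AForge.NET's HoughLineTransformation does the real work in C#; this sketch is only for intuition):

```python
import math
from collections import Counter

def hough_lines(points, theta_steps=180):
    """Vote each edge pixel into a (rho, theta-index) accumulator.
    Broken segments of the same line land in the same cell, which is
    why the transform can 'jump gaps' in a dashed edge."""
    acc = Counter()
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] += 1
    return acc

# A dashed horizontal line at y = 10: pixels only present in bursts.
dashed = [(x, 10) for x in range(60) if (x // 5) % 2 == 0]
acc = hough_lines(dashed)
(rho, t), votes = acc.most_common(1)[0]
# Winning cell: rho = 10 at theta index 90 (i.e. 90 degrees), with all
# 30 dashed pixels voting, despite half the line being missing.
```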

Attachment 1: input-min.PNG (101.91 KiB)

Attachment 2: output1.PNG (50.7 KiB)

Attachment 3: output2.PNG (91.77 KiB)

The Problem:
I want to utilize the output of the HoughLine transformation to define individual errors. I plan to sort the errors manually into separate folders as a starting point.
Let's look at more examples to help define the problem:
There is a nasty error in 3D printing called Z-axis wobble. This error is a good candidate for HoughLine: there should be (perhaps after applying some morphology filters) two 'dashed' lines running parallel to each other up the side of the part. If I can use these two parallel lines to flag a 'wobble' error in the code, perfect.
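One way to make "parallel enough" concrete is to compare the theta values from the Hough output directly: two lines are near-parallel when their angles agree within a tolerance, and a wobble pair additionally needs separation in rho. A hedged Python sketch, assuming lines arrive as (rho, theta-in-radians) pairs (AForge's HoughLine reports Theta in degrees, so convert first; the tolerances here are illustrative guesses, not tuned values):

```python
import math

def parallel_pairs(lines, angle_tol_deg=3.0, min_gap=5.0):
    """Flag pairs of Hough lines (rho, theta in radians) that are
    near-parallel but spatially separated -- a crude wobble test."""
    tol = math.radians(angle_tol_deg)
    pairs = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (r1, t1), (r2, t2) = lines[i], lines[j]
            # Wrap the angular difference so 179 deg is close to 1 deg.
            d = abs(t1 - t2) % math.pi
            d = min(d, math.pi - d)
            if d <= tol and abs(r1 - r2) >= min_gap:
                pairs.append((i, j))
    return pairs

# Two near-vertical lines 12 px apart plus one diagonal outlier.
lines = [(100.0, 0.01), (112.0, 0.02), (50.0, 0.8)]
print(parallel_pairs(lines))  # → [(0, 1)]
```

Because the lines may carry few supporting points, it also helps to require a minimum Hough intensity (vote count) per line before pairing.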
There is another error called warping. While it disrupts straight lines, it also creates a large-radius curve that may be detectable with HoughCircle or another algorithm. Note: I have thought of just passing the extracted blob (flagged error) to the A.N.N., but I feel that some basic classification and included geometric features (lines, quadrilaterals, and circles) will aid the overall success of the project.
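For the large-radius case, an algebraic (Kåsa-style) least-squares circle fit is a common alternative to HoughCircle: it needs no accumulator over candidate centers, so it works even when the center lies far outside the image. A pure-Python sketch (the sample arc and tolerances below are made up for illustration):

```python
import math

def fit_circle(pts):
    """Algebraic (Kasa) circle fit: solves x^2 + y^2 + D x + E y + F = 0
    in least squares. Works even for shallow arcs whose center is
    far off the image."""
    sxx = sxy = syy = sx = sy = 0.0
    sxz = syz = sz = 0.0
    n = len(pts)
    for x, y in pts:
        z = x * x + y * y
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y
        sxz += x * z; syz += y * z; sz += z
    # Normal equations: A [D, E, F]^T = rhs, solved by Gaussian elimination.
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, float(n)]]
    rhs = [-sxz, -syz, -sz]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]; rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            rhs[r] -= f * rhs[i]
    sol = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        sol[i] = (rhs[i] - sum(A[i][c] * sol[c] for c in range(i + 1, 3))) / A[i][i]
    D, E, F = sol
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, math.sqrt(cx * cx + cy * cy - F)

# A shallow arc sampled from a radius-500 circle centered off-image.
pts = [(500 * math.cos(a), 480 + 500 * math.sin(a))
       for a in [math.radians(-95 + k) for k in range(11)]]
cx, cy, r = fit_circle(pts)
# Recovers center ~(0, 480) and radius ~500 from the shallow arc alone.
```

A very large fitted radius on a blob edge (relative to the part size) could then serve as the "warping" flag.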

The Question:
I wish to pass this information to a basic A.N.N. I am not at that part of the project yet, so I do not have specifics. However, I want to set myself up for success here, so: How should I determine whether lines are parallel enough (keeping in mind there may not be many data points on the aforementioned lines)? Are there known algorithms that detect large-radius curves (where the center point of the curve may lie off the image)? Any other general comments, questions, or simpler ways to 'flag' and classify these errors for machine learning?
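As a starting point for the A.N.N. input, one option is to collapse the detections for each blob into a fixed-length feature vector. Everything below (the feature choices, bin count, and tolerance) is a hypothetical sketch, not anything the library provides:

```python
import math

def feature_vector(lines, circle_radius, blob_aspect):
    """Hypothetical per-blob features for the error classifier.
    lines: (rho, theta-in-radians) pairs; circle_radius: from a
    curve fit (0 if none); blob_aspect: blob width / height."""
    n = len(lines)
    # Count near-parallel pairs (theta within 3 degrees, wrapped).
    tol = math.radians(3.0)
    parallel = 0
    for i in range(n):
        for j in range(i + 1, n):
            d = abs(lines[i][1] - lines[j][1]) % math.pi
            if min(d, math.pi - d) <= tol:
                parallel += 1
    # Coarse orientation histogram: 4 bins over [0, pi), normalized.
    hist = [0.0] * 4
    for _, t in lines:
        hist[int((t % math.pi) / math.pi * 4) % 4] += 1
    hist = [h / max(n, 1) for h in hist]
    return [n, parallel, circle_radius, blob_aspect] + hist

fv = feature_vector([(100.0, 0.01), (112.0, 0.02)], 0.0, 1.4)
# → [2, 1, 0.0, 1.4, 1.0, 0.0, 0.0, 0.0]
```

A fixed-length vector like this keeps the network input size constant regardless of how many lines the Hough transform returns, which sidesteps the variable-length problem of feeding raw line lists to an A.N.N.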

Thank you in advance for the help :D ,
Skye Leake
