## New Quadrilateral Points From Glyph's TransformationMatrix

Edit: better title, I think.

I have been able to get glyph detection working. I process an image and then save the following information for later use:

- the source image with the glyph in it
- the four points of the recognized glyph
- the rotation of the glyph
- the TransformationMatrix of the detected glyph
- the width of the glyph
- the X and Y coordinates of the center of the detected glyph

Now what I need to do is find the four points that would represent a square/rectangle of arbitrary dimensions after the same three-dimensional transformation has been applied to it (i.e. what it would look like if it were in the same 3D orientation as the glyph). So, for example, what would the coordinates of the four corners of a rectangle 3 feet wide by 2 feet tall be if it were oriented the same way as the glyph in the source image?

After reading through http://www.aforgenet.com/articles/posit/, I thought I might be able to apply the TransformationMatrix of the detected glyph to a "model" representing the second object I want to transform, passing both into a function like this:

```csharp
private AForge.Point[] PerformProjection( Vector3[] model, Matrix4x4 transformationMatrix, int viewSize )
{
    AForge.Point[] projectedPoints = new AForge.Point[model.Length];

    for ( int i = 0; i < model.Length; i++ )
    {
        Vector3 scenePoint = ( transformationMatrix * model[i].ToVector4( ) ).ToVector3( );

        projectedPoints[i] = new AForge.Point(
            (int) ( scenePoint.X / scenePoint.Z * viewSize ),
            (int) ( scenePoint.Y / scenePoint.Z * viewSize ) );
    }

    return projectedPoints;
}
```
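For what it's worth, here is my understanding of what that function does, sketched in Python/numpy rather than C# (language swapped only so the arithmetic is easy to check; the pose matrix and `view_size` below are made-up example values, not from my actual glyph):

```python
import numpy as np

def perform_projection(model, transformation_matrix, view_size):
    """Mirror of PerformProjection: multiply each model point by the
    4x4 pose matrix, then perspective-divide by Z and scale by view_size."""
    projected = []
    for p in model:
        x, y, z, _ = transformation_matrix @ np.append(p, 1.0)
        projected.append((x / z * view_size, y / z * view_size))
    return projected

# Hypothetical pose: no rotation, translated 5 units along Z.
pose = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 5.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Unit square centred on the origin, like the models in the POSIT article.
square = [np.array([-0.5, -0.5, 0.0]), np.array([ 0.5, -0.5, 0.0]),
          np.array([-0.5,  0.5, 0.0]), np.array([ 0.5,  0.5, 0.0])]

pts = perform_projection(square, pose, view_size=640)
# first corner maps to roughly (-64, -64): (-0.5 / 5) * 640
```

With no rotation the square just gets scaled by the perspective divide, which matches what I'd expect the C# version to produce.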

I built my new "object" over the center of the glyph, using X and Y coordinates but no Z coordinate, like so:
```csharp
// origin in center of glyph
var artworkModel = new Vector3[]
{
    new Vector3( centerX - ( objectWidthPx / 2 ), centerY - ( objectHeightPx / 2 ), 0 ),  // top left
    new Vector3( centerX + ( objectWidthPx / 2 ), centerY - ( objectHeightPx / 2 ), 0 ),  // top right
    new Vector3( centerX - ( objectWidthPx / 2 ), centerY + ( objectHeightPx / 2 ), 0 ),  // bottom left
    new Vector3( centerX + ( objectWidthPx / 2 ), centerY + ( objectHeightPx / 2 ), 0 ),  // bottom right
};
```
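I may be misreading the POSIT article's convention, but its example models seem to be defined around the object's own origin, in model units, rather than in image-pixel coordinates like mine above. A sketch of that convention (in numpy for brevity; `glyph_size` and the artwork dimensions are made-up illustration values):

```python
import numpy as np

# Model points defined around the object's own origin, in the same units
# the pose matrix was estimated in -- NOT in image-pixel coordinates.
glyph_size = 1.0               # glyph edge length in model units (assumed)
artwork_w = 3.0 * glyph_size   # e.g. an artwork 3 glyph-widths wide
artwork_h = 2.0 * glyph_size   # e.g. 2 glyph-widths tall

artwork_model = np.array([
    [-artwork_w / 2, -artwork_h / 2, 0.0],  # top left
    [ artwork_w / 2, -artwork_h / 2, 0.0],  # top right
    [-artwork_w / 2,  artwork_h / 2, 0.0],  # bottom left
    [ artwork_w / 2,  artwork_h / 2, 0.0],  # bottom right
])
```

If that reading is right, the translation to the glyph's center would come from the matrix itself, not from baking `centerX`/`centerY` into the model points.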

The values I got out of that were way off, so clearly I'm doing something wrong. I've spent many hours trying to figure this out, but so far nothing I've tried produces a result that comes close to making sense.
FirstDivision

Posts: 1
Joined: Mon Feb 09, 2015 6:14 pm