
How to find contours in an image #49

Open
freekoy opened this issue May 18, 2019 · 17 comments

@freekoy

freekoy commented May 18, 2019

rt (see title)

@WorldThirteen
Contributor

Hi, thanks for the issue!
We have an example of the CannyEdges operation, which finds contours in the image: https://gammacv.com/examples/canny_edges

You can find the example with its code on CodePen: https://codepen.io/WorldThirteen/pen/wONzjL
See our Get Started guide to understand the example's code.

@freekoy
Author

freekoy commented May 18, 2019 via email

@WorldThirteen
Copy link
Contributor

No, we don't have such a function yet and have no plans to implement it in the near future, but we are open to contributions.
So if you would like to see this functionality, please describe it in detail and provide an example.

@freekoy
Author

freekoy commented May 19, 2019

Thanks! Something like opencv.js:
https://docs.opencv.org/3.4/d3/dc0/group__imgproc__shape.html#ga1a539e8db2135af2566103705d7a5722
https://docs.opencv.org/3.4/d8/d1c/tutorial_js_contours_more_functions.html

2. Point Polygon Test
This function finds the shortest distance between a point in the image and a contour. The returned distance is negative when the point is outside the contour, positive when it is inside, and zero when it is on the contour.

We use the function: cv.pointPolygonTest(contour, pt, measureDist)

Mainly this: the function determines whether the point is inside a contour.
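To make the sign convention concrete, here is a minimal plain-JavaScript sketch of the inside/outside part of such a test (ray casting over an array of [x, y] vertices). This is only an illustration, not a GammaCV or OpenCV API, and the measureDist mode (actual shortest distance) is omitted:

```javascript
// Returns +1 if the point is inside the polygon, -1 if outside,
// 0 if it lies on the contour, mirroring cv.pointPolygonTest's
// sign convention with measureDist = false.
function pointPolygonTest(polygon, [px, py]) {
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const [xi, yi] = polygon[i];
    const [xj, yj] = polygon[j];
    // on-edge check: collinear with segment (j, i) and within its bounding box
    const cross = (xj - xi) * (py - yi) - (yj - yi) * (px - xi);
    const within = Math.min(xi, xj) <= px && px <= Math.max(xi, xj) &&
                   Math.min(yi, yj) <= py && py <= Math.max(yi, yj);
    if (cross === 0 && within) return 0;
    // ray casting: toggle "inside" for each edge a horizontal ray crosses
    if ((yi > py) !== (yj > py) &&
        px < ((xj - xi) * (py - yi)) / (yj - yi) + xi) {
      inside = !inside;
    }
  }
  return inside ? 1 : -1;
}

const square = [[0, 0], [10, 0], [10, 10], [0, 10]];
console.log(pointPolygonTest(square, [5, 5]));  // 1 (inside)
console.log(pointPolygonTest(square, [15, 5])); // -1 (outside)
console.log(pointPolygonTest(square, [10, 5])); // 0 (on the contour)
```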

@crapthings

Is it possible to find the biggest contour with the current implementation?

[image]

I want to use this to find the paper edges and do a perspective transform.

@apilguk
Contributor

apilguk commented Jun 4, 2019

Thank you guys for the interest and feedback. Currently GammaCV supports only edge detection and has no algorithms for contour segmentation, but we have plans to add them in the future. If you would like to help with it, you are welcome to contribute.

@apilguk
Contributor

apilguk commented Jun 4, 2019

@crapthings it depends on your case: if you are looking for the contour of a rectangular area, you can detect edges, extract lines using the Hough transform, and then use a heuristic approach to find the object contour you are interested in.
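To illustrate the heuristic step: once the Hough/PCLines stage yields candidate border lines, corner candidates for a rectangular object (such as a sheet of paper) can be obtained by intersecting pairs of those lines. A small standalone sketch, where each line is given by two points; the helper is hypothetical, not part of GammaCV:

```javascript
// Intersection of two infinite lines, each defined by two points
// [[x1, y1], [x2, y2]]. Returns [x, y], or null for parallel lines.
function intersect([[x1, y1], [x2, y2]], [[x3, y3], [x4, y4]]) {
  const d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
  if (d === 0) return null; // parallel lines: no intersection
  const a = x1 * y2 - y1 * x2;
  const b = x3 * y4 - y3 * x4;
  return [
    (a * (x3 - x4) - (x1 - x2) * b) / d,
    (a * (y3 - y4) - (y1 - y2) * b) / d,
  ];
}

// Two borders of a hypothetical, slightly tilted sheet of paper:
const top = [[0, 10], [100, 12]];
const left = [[5, 0], [6, 100]];
console.log(intersect(top, left)); // prints the approximate top-left corner
```

Intersecting the four strongest roughly-perpendicular line pairs gives four corners, which is exactly the input a perspective transform needs.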

@ramiel

ramiel commented Jul 13, 2019

The documentation on pcLines is not very complete. For example, there's no mention of the function's name: is it pcLines or pcLinesTransform? Once we apply pcLines, how can we get the coordinates of those lines? And how can we remove the part of the image outside a square (crop the image)?

@rmhrisk
Contributor

rmhrisk commented Jul 13, 2019

Happy to take a PR. A PCLines example is provided.

@WorldThirteen
Contributor

WorldThirteen commented Jul 14, 2019

@ramiel, thanks for pointing out this documentation issue. This operation is available as gm.pcLines. The source code of the PCLines example hosted on the GammaCV site is available here.
The example consists of two parts:

  1. a setup part, which builds the operation pipeline;
  2. a tick part, which is invoked on each frame.

@ramiel

ramiel commented Jul 15, 2019

Thank you @WorldThirteen. As you can see from my other issue, I had already found the example code :)

@mirelz-lalith

Can you please post a working example for pcLines here?

@rmhrisk
Contributor

rmhrisk commented Jun 9, 2020

@mirelz-lalith

Sorry if the question sounds very naive, as I am very new to JS. But how do we pass the input to each of these functions and run them?

@rmhrisk
Contributor

rmhrisk commented Jun 9, 2020

@mirelz-lalith

mirelz-lalith commented Jun 9, 2020

Thanks for the quick reply @rmhrisk. I have tried to integrate the sample code with the one mentioned in Get Started. Here is the code:

```
var params = newFunction()
function newFunction() {
    return {
        PROCESSING: {
            name: 'PROCESSING',
            dCoef: {
                name: 'Downsample',
                type: 'constant',
                min: 1,
                max: 4,
                step: 1,
                default: 2,
            },
        },
        PCLINES: {
            name: 'PC LINES',
            count: {
                name: 'Lines Count',
                type: 'uniform',
                min: 1,
                max: 100,
                step: 1,
                default: 10,
            },
            layers: {
                name: 'Layers Count',
                type: 'constant',
                min: 1,
                max: 5,
                step: 1,
                default: 2,
            },
        },
    };
};

const width = 500;
const heigth = 400;
// initialize the WebRTC stream and a session for running operations on the GPU
const stream = new gm.CaptureVideo(width, heigth);
const sess = new gm.Session();
const canvasProcessed = gm.canvasCreate(width, heigth);

// the session uses a context to optimize calculations and prevent recalculations
// the context is actually a number which helps the algorithm run operations efficiently
let context = 0;
// allocate memory for storing a frame and the calculation output
const input = new gm.Tensor('uint8', [heigth, width, 4]);
{ line: new gm.Line() }
// construct the operation graph, which is actually a Canny edge detector
let pipeline = gm.grayscale(input);
pipeline = gm.downsample(pipeline, 2, 'max');
pipeline = gm.gaussianBlur(pipeline, 3, 1);
pipeline = gm.dilate(pipeline, [3, 3]);
pipeline = gm.sobelOperator(pipeline);
pipeline = gm.cannyEdges(pipeline, 0.25, 0.75);
pipeline = gm.pcLinesTransform(pipeline, 3, 2, 2);

// initialize graph
sess.init(pipeline);

// allocate output
const output = gm.tensorFrom(pipeline);

// create loop
const tick = () => {
    requestAnimationFrame(tick);
    // read the current frame into the tensor
    stream.getImageBuffer(input);
    //
    const maxP = Math.max(input.shape[0], input.shape[1]);
    let lines = [];

    // session.runOp(operation, frame, output);
    sess.runOp(pipeline, context, output);
    gm.canvasFromTensor(canvasProcessed, input);


    for (let i = 0; i < output.size / 4; i += 1) {
        const y = ~~(i / output.shape[1]);
        const x = i - (y * output.shape[1]);
        const value = output.get(y, x, 0);
        const x0 = output.get(y, x, 1);
        const y0 = output.get(y, x, 2);

        if (value > 0.0) {
            lines.push([value, x0, y0]);
        }
    }

    lines = lines.sort((b, a) => a[0] - b[0]);
    console.log(lines.length)
    lines = lines.slice(0, 10);
    // console.log(lines)
    for (let i = 0; i < lines.length; i += 1) {
        context.line.fromParallelCoords(
            lines[i][1] * params.PROCESSING.dCoef, lines[i][2] * params.PROCESSING.dCoef,
            input.shape[1], input.shape[0], maxP, maxP / 2,
        );

        gm.canvasDrawLine(canvasProcessed, context.line, 'rgba(0, 255, 0, 1.0)');
    }

    context += 1;
}

function main() {
    // start capturing a camera and run loop
    stream.start();
    tick();

    document.body.children[0].appendChild(canvasProcessed);
}
main()
```

But I am getting this error:

```
main.js:96 Uncaught TypeError: Cannot read property 'fromParallelCoords' of undefined
    at tick (main.js:96)
```

Can anyone help me with this?

@WorldThirteen
Contributor

You have the line { line: new gm.Line() }, which is an object literal that is never assigned to a variable.
Since the code below uses context.line, you need to replace { line: new gm.Line() } with

const context = { line: new gm.Line() };

I mean:

```
var params = newFunction()
function newFunction() {
    return {
        PROCESSING: {
            name: 'PROCESSING',
            dCoef: {
                name: 'Downsample',
                type: 'constant',
                min: 1,
                max: 4,
                step: 1,
                default: 2,
            },
        },
        PCLINES: {
            name: 'PC LINES',
            count: {
                name: 'Lines Count',
                type: 'uniform',
                min: 1,
                max: 100,
                step: 1,
                default: 10,
            },
            layers: {
                name: 'Layers Count',
                type: 'constant',
                min: 1,
                max: 5,
                step: 1,
                default: 2,
            },
        },
    };
};

const width = 500;
const height = 400;
// initialize the WebRTC stream and a session for running operations on the GPU
const stream = new gm.CaptureVideo(width, height);
const sess = new gm.Session();
const canvasProcessed = gm.canvasCreate(width, height);

// the frame counter lets the session optimize calculations and prevent recalculations
let frame = 0;
// allocate memory for storing a frame and the calculation output
const input = new gm.Tensor('uint8', [height, width, 4]);
// reusable Line instance for drawing the detected lines
const context = { line: new gm.Line() };
// construct the operation graph, which is actually a Canny edge detector
let pipeline = gm.grayscale(input);
pipeline = gm.downsample(pipeline, 2, 'max');
pipeline = gm.gaussianBlur(pipeline, 3, 1);
pipeline = gm.dilate(pipeline, [3, 3]);
pipeline = gm.sobelOperator(pipeline);
pipeline = gm.cannyEdges(pipeline, 0.25, 0.75);
pipeline = gm.pcLinesTransform(pipeline, 3, 2, 2);

// initialize the graph
sess.init(pipeline);

// allocate the output
const output = gm.tensorFrom(pipeline);

// create the loop
const tick = () => {
    requestAnimationFrame(tick);
    // read the current frame into the tensor
    stream.getImageBuffer(input);

    const maxP = Math.max(input.shape[0], input.shape[1]);
    let lines = [];

    sess.runOp(pipeline, frame, output);
    gm.canvasFromTensor(canvasProcessed, input);

    for (let i = 0; i < output.size / 4; i += 1) {
        const y = ~~(i / output.shape[1]);
        const x = i - (y * output.shape[1]);
        const value = output.get(y, x, 0);
        const x0 = output.get(y, x, 1);
        const y0 = output.get(y, x, 2);

        if (value > 0.0) {
            lines.push([value, x0, y0]);
        }
    }

    // sort by vote value, descending, and keep the 10 strongest lines
    lines = lines.sort((b, a) => a[0] - b[0]);
    lines = lines.slice(0, 10);

    for (let i = 0; i < lines.length; i += 1) {
        context.line.fromParallelCoords(
            lines[i][1] * params.PROCESSING.dCoef.default, lines[i][2] * params.PROCESSING.dCoef.default,
            input.shape[1], input.shape[0], maxP, maxP / 2,
        );

        gm.canvasDrawLine(canvasProcessed, context.line, 'rgba(0, 255, 0, 1.0)');
    }

    frame += 1;
}

function main() {
    // start capturing the camera and run the loop
    stream.start();
    tick();

    document.body.children[0].appendChild(canvasProcessed);
}
main()
```

Note: I haven't checked if this code has other errors.
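As a side note, the reason the original `{ line: new gm.Line() }` line silently does nothing is that JavaScript parses `{ … }` at statement position as a block, not as an object literal, so `line:` becomes a label and the value is evaluated and discarded. A tiny standalone illustration (plain JS, no GammaCV needed):

```javascript
// At statement position this is a block containing the labeled
// statement `line:`, NOT an object literal; the value is evaluated
// and then thrown away, and nothing is assigned anywhere.
{ line: console.log('evaluated, then discarded') }

// Assigning the literal to a variable is what creates a usable object:
const context = { line: 'kept' };
console.log(context.line); // "kept"
```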
