
Colour sortex machine circuit design

Status
Not open for further replies.

Ankit1989

New Member
Almond colour sortex... Hello everyone, I want to develop an almond colour sorter (sortex). Please guide me: what kind of controller board should I use, and which type of sensor should I use to detect damaged almonds? Can I use an infrared camera or a normal camera? And in which language should I do the programming?
 
You should be able to use a normal but good quality [HD] "webcam" style camera, with a PC as a controller.

The hard part is the image recognition system.
This looks to be a good starting point: **broken link removed**

You need to get that running and then "train" it using many images containing the different classes of nut (colours and damaged etc.) until the recognition level is acceptable.
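To make the "train it on many labelled images" idea concrete, here is a deliberately tiny sketch: a nearest-centroid classifier over mean-colour features. A real system would train a neural network (e.g. with TensorFlow) on thousands of photos, but the workflow is the same: collect labelled examples per class, fit, then classify new samples. All names and pixel values below are illustrative, not from any real dataset.

```python
# Toy stand-in for image-classifier training: nearest-centroid over
# mean (R, G, B) features. Each "image" is just a list of pixel tuples.
from statistics import mean

def mean_colour(pixels):
    """Average (R, G, B) over a list of pixel tuples."""
    return tuple(mean(p[i] for p in pixels) for i in range(3))

def train(labelled_samples):
    """labelled_samples: {class_name: [pixel_list, ...]} -> class centroids."""
    return {cls: mean_colour([px for sample in samples for px in sample])
            for cls, samples in labelled_samples.items()}

def classify(pixels, centroids):
    """Return the class whose centroid is nearest (squared distance)."""
    feat = mean_colour(pixels)
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(feat, centroids[c])))

# "Training set": light pixels stand in for good nuts, dark for damaged.
good = [[(200, 180, 150), (210, 190, 160)]]
damaged = [[(80, 60, 40), (70, 50, 30)]]
centroids = train({"good": good, "damaged": damaged})
print(classify([(205, 185, 155)], centroids))  # -> good
```

The point is only the shape of the pipeline; swap the centroid step for a trained network once the labelled image library exists.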
 
Should I use a Raspberry Pi for image recognition and an Arduino for the servo controller? If not, which other board is suitable for this project?
 
Start out with it on a normal PC, a reasonably powerful one.

You can always try less powerful boards like the Pi once you have the basics of the system working. I think trying to work from the start on a Pi would be much harder.
 
Dunning-Kruger? ;)

**broken link removed** Those are some very sophisticated machines. Very impressive and I wonder what they cost.

You may want to consider looking into a Jetson Nano and TensorFlow software as a start to the identification part.

I am suggesting this as I recently purchased one and am happily wading into the ocean, so to speak, but am nowhere near the point where I could even approach doing something like that.
 
From looking at the video, I cannot tell where the camera is. I think the camera moves with the pick-up arm.
I would design on the Pi, but I see why a PC might be easier.
Use a servo board rather than an Arduino for the servo controller, though it does not matter much. A PC probably cannot drive servos directly, so offloading time-critical jobs to a second board is good. (I have USB and serial servo boards.)
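As an example of handing the time-critical part to a serial servo board, here is how one common protocol frames a move command: the Pololu Maestro's compact "set target" packet (command byte 0x84, channel number, then a 14-bit target in quarter-microseconds split into two 7-bit bytes). Other boards use different framing, so check your own board's manual; this is only a sketch of the idea.

```python
# Build a Pololu-Maestro-style "set target" packet. The PC just sends
# these few bytes over the serial port; the board handles the precise
# pulse timing itself.
def set_target_packet(channel: int, pulse_us: float) -> bytes:
    target = int(pulse_us * 4)        # protocol uses quarter-microsecond units
    low = target & 0x7F               # lower 7 bits of the target
    high = (target >> 7) & 0x7F       # upper 7 bits of the target
    return bytes([0x84, channel, low, high])

# 1500 us is the conventional servo centre position; channel 0.
packet = set_target_packet(0, 1500)
print(packet.hex())                   # -> 8400702e
```

With pySerial the packet would then just be written to the port, e.g. `serial.Serial("/dev/ttyACM0").write(packet)` (port name is an assumption for your setup).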
The Pi has cameras that plug in; it can also use USB cameras.
I don't think you need an IR camera, but you can do a test to see if IR works better. I have an old video recorder whose camera has RGB and IR modes. You could take a picture in normal mode and then in IR mode and see which has the best contrast (good object vs. bad object).
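That RGB-vs-IR test can be scored numerically rather than by eye. A minimal sketch, using made-up grey levels standing in for pixel readings from the two photos: compute the Michelson contrast (bright − dark) / (bright + dark) between the good-nut region and the bad-nut region in each mode, and keep whichever mode separates them more.

```python
# Score which imaging mode separates good from damaged nuts better.
# good_levels / bad_levels are grey values sampled from the two regions
# of the same scene; a real test would read them from the saved photos.
def contrast(good_levels, bad_levels):
    g = sum(good_levels) / len(good_levels)
    b = sum(bad_levels) / len(bad_levels)
    hi, lo = max(g, b), min(g, b)
    return (hi - lo) / (hi + lo)   # Michelson contrast, higher = easier threshold

rgb_score = contrast([180, 190, 185], [120, 110, 115])  # made-up RGB readings
ir_score = contrast([200, 210, 205], [40, 50, 45])      # made-up IR readings
print("IR wins" if ir_score > rgb_score else "RGB wins")
```

The numbers are invented to show the comparison; only your own two photos of the same nuts can answer which mode actually wins.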
----edited----
Now I see a strip of light at the bottom right of the video. That is where the nuts are inspected. So the camera finds the left/right position of the nut, and two seconds later the nut reaches the pick arm. I can't tell if that is all there is or if there is a second camera at the arm.
----edited----
[Attachment 119346]

read this
 
Not just to be argumentative, but I will stay with my suggestion of the Jetson Nano over a Raspberry Pi for the task at hand, although the Pi 4 is in the same ballpark. I came to that conclusion through reading and deciding which to get, not from experience developing any almond-screening applications.

The Jetson is built precisely for computer vision and so-called deep learning tasks, and as far as I can tell the software to implement such tasks will run on a Nano. Of course a Pi camera is suggested, and the Jetson Nano also has a Pi-compatible GPIO layout. The difference, again from my limited perspective, is the compatibility of the AI software.

Watch this entertaining demo, for example.
 
The Jetson is built precisely for computer vision and so-called deep learning tasks

I agree fully; I'd only recently heard of the Jetson and had forgotten about that.

The image recognition software link I gave earlier uses TensorFlow; that does seem to be one of the best recognition choices at present.
 
How do I find the location of a damaged almond from real-time video? Which language is easiest for image processing? I need to find the 2-D location, right, because the depth of the arm is fixed every time. Can I make a table mapping pixels to positions across the width of the conveyor belt, to find the location of a particular damaged almond?


If yes, then please send me a video link for that, because every video on YouTube only shows an image recognition program; there is no program for finding a location from the output of a camera that is steady and mounted at a fixed distance.
 
I agree fully; I'd only recently heard of the Jetson and had forgotten about that.

The image recognition software link I gave earlier uses TensorFlow; that does seem to be one of the best recognition choices at present.

Just as a side note: The more amazing part of that video to me (not necessarily to you or anyone else) is not the identification of a bad almond. While that is by no means trivial, if I were an almond producer, I think that I could definitely come up with 1000s of bad almond pictures and good almond pictures to use for a classification system.

The amazing part is to find the bad almond, pick it up, move it out, and get back to find the next one before the roller has moved too far. Mechanically, that level of precision (and repeatability, assuming it does not break down every few minutes) just seems much more amazing to me.
 
How do I find the location of a damaged almond from real-time video? Which language is easiest for image processing? I need to find the 2-D location, right, because the depth of the arm is fixed every time. Can I make a table mapping pixels to positions across the width of the conveyor belt, to find the location of a particular damaged almond?


If yes, then please send me a video link for that, because every video on YouTube only shows an image recognition program; there is no program for finding a location from the output of a camera that is steady and mounted at a fixed distance.

I don't know that you can just find out how to do that from a video link; at least I can't provide one for you. A thorough search on computer vision and deep learning would be somewhere to start. Sorry.
 
If I want a complete circuit, code, and a 3-D model of the machine, is there anyone who can do it for me? I can pay for that.
 
While that is by no means trivial, if I were an almond producer, I think that I could definitely come up with 1000s of bad almond pictures and good almond pictures to use for a classification system.

Also, if you were an almond producer you wouldn't design and make your own. While I haven't seen them sorting almonds, most sorting of grain, beans, or even potato chips is done by machine vision, and an air stream blows the bad item out while the items are in free fall between two conveyors. Using an arm to pick the bad ones out would be really slow.

This seems to be a school assignment not a real build.
 
Also, if you were an almond producer you wouldn't design and make your own. While I haven't seen them sorting almonds, most sorting of grain, beans, or even potato chips is done by machine vision, and an air stream blows the bad item out while the items are in free fall between two conveyors. Using an arm to pick the bad ones out would be really slow.

This seems to be a school assignment not a real build.

If I am seeing correctly, in the video it looks like they are sucking the bad almonds up through that hose...to be used in something else I suppose.

What do you think that machine costs? Over 100K?
 
I see two very different choices.
1) Use vision neural-net software, where you can find an elephant in the middle of your nuts. lol... You will need all the computer power you can get.
2) It is clear the inside of the nuts is a very different colour (colour or IR). Just finding an "inside" colour of 1 cm should not be hard to do with a small computer. The software needs only to return the address of the centre of the nut.
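Option 2 is simple enough to sketch in a few lines: threshold a greyscale frame for the distinctive "inside" colour and return the centre of the hit region. The frame below is a tiny made-up grid of grey levels, with dark pixels (below an assumed threshold of 100) standing in for exposed nut interior.

```python
# Option 2 sketch: no neural net, just a colour threshold plus centroid.
# frame is a list of rows of grey levels; returns (x, y) of the damage
# centre, or None if nothing crosses the threshold.
def find_damage_centre(frame, threshold=100):
    hits = [(x, y)
            for y, row in enumerate(frame)
            for x, level in enumerate(row)
            if level < threshold]
    if not hits:
        return None
    cx = sum(x for x, _ in hits) / len(hits)
    cy = sum(y for _, y in hits) / len(hits)
    return (cx, cy)

frame = [
    [200, 200, 200, 200],
    [200,  60,  70, 200],
    [200,  65,  75, 200],
    [200, 200, 200, 200],
]
print(find_damage_centre(frame))  # -> (1.5, 1.5)
```

That (x, y) pixel address is exactly what the calibration table described next converts into a physical head position.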

Moving the arm (calibration): when we did this, we placed a single nut, or a picture of a nut, on the belt, took the picture, manually moved the head to the nut, and built a lookup table: pixel 1000 = here, pixel 1500 = here, etc. I think 5 to 10 nuts give enough information. It takes very little software to look up the table, so that if the nut is at pixel 1205 you know exactly where to move the head.
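The calibration table described above can be sketched directly: record a handful of (pixel, head position) pairs during calibration, then linearly interpolate for any pixel in between. The table values here are invented for illustration; your own calibration run supplies the real ones.

```python
# Pixel-to-head-position lookup via linear interpolation between
# calibration points. Table: camera pixel -> head position in mm
# (made-up numbers from an imagined calibration run).
from bisect import bisect_left

table = [(1000, 50.0), (1250, 120.0), (1500, 190.0), (1750, 260.0)]

def pixel_to_position(pixel):
    pixels = [p for p, _ in table]
    i = bisect_left(pixels, pixel)
    i = max(1, min(i, len(table) - 1))        # clamp to a valid segment
    (p0, x0), (p1, x1) = table[i - 1], table[i]
    # Linear interpolation within the segment containing this pixel.
    return x0 + (x1 - x0) * (pixel - p0) / (p1 - p0)

print(pixel_to_position(1205))  # between the first two calibration points
```

A nut seen at pixel 1205 lands between the 1000 and 1250 entries, so the head position comes out proportionally between 50 mm and 120 mm; pixels outside the table are extrapolated from the nearest segment.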
 
Watching that video a few more times makes me think they don't really need to calculate an offset, because the hose is a fixed distance from the camera all the time, or so it appears. Assuming there is a small angled tip on the hose such that no offset is required (that is, it aims directly at wherever the camera centre is), it may be that when the camera spots a bad almond, it simply centres on it and fires the hose. If I am right, that is a pretty clever time saver.

Edit: What does the metal piece do (blue arrow)? Do those two sides separate to allow sucking, or is it holding back good almonds, or something else?
[Attachment 119355]


Edited again: never mind; we are not seeing the camera. I thought it was mounted on the picker with some kind of wide-angle lens, and that is not even close. This video gives a better explanation.

 
This video gives a better explanation.
I grabbed a picture just before it captured a nut.
[Attachment 119356]

The next frame the nut is sucked into a pipe.

Watch the video from 148 to 149; the head moves forward (when there is only one nut to grab).
What does the metal piece do
In the video at 143 there is a second in slow motion. You can see the metal piece(s) move.
 
