torekp Posted July 19, 2007

I'm designing an algorithm to tackle my problem, but I have the feeling I'm re-inventing the wheel. Not only that, but my wheel feels kinda square, and I'm hoping there's a round one out there that you know about.

I've got a moving belt with parts randomly scattered on it. The parts are NOT singulated: if you draw a line across the belt, chances are you'll find one or two parts there, and occasionally more. Some parts are bigger than others. There is a camera near the feed end of the belt, which identifies the locations of the parts. Further down, a traveling sensor (which travels only across the width of the belt: call this the X direction in a Cartesian coordinate system) visits the parts for a more detailed inspection.

The sensor travels fast enough that there is not much worry about getting from one side of the belt to the other. I thought that would be a problem, and asked for advice on it in the past, but it's not a big issue as long as the sensor is allowed some settling time after each jump. However, the sensor can only visit one part at a time. It must divvy up its visiting time between the various parts that occupy the same line.

I want to optimize the accuracy of the analyses the sensor gives, and the accuracy is proportional to something like the square root of the time spent on each part. Therefore - I think, correct me if I'm wrong - I want to make the time spent on each part as nearly "fair" (equal) as possible, while of course spending "more than fair" time on a part if it's the only one around. I don't want the calculation to take much processor time. I've got an idea how to do it, but let me not bias your thinking yet.

Bear in mind that it's possible to encounter clumps of parts, perhaps five or more side by side. It's also possible, and common, for a densely populated area of the belt to go on for some length, so that perhaps 50 parts go by before the sensor gets a "breather" (a brief empty stretch of belt).

The camera takes data in "frames", which are approximately square areas of belt, and the sensor-path calculation is independent for each frame. If a part straddles two frames, the Vision software will pretend that only the larger half exists, so I never need to worry about any stretch of belt longer than a single frame.

The camera and Vision software make it natural to mentally divide the belt into a number of thin lines. Approximately one line goes by per millisecond, and we can think of the sensor as being able to occupy any X location during each millisecond, although it would not be wise to jump around too much from part to part. The sensor should dwell on each part for a while, then move on to the next.
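[Editor's note: the "fair" intuition checks out mathematically. Since accuracy goes like the square root of dwell time and sqrt is concave, the total accuracy over a clump, given a fixed time budget, is maximized by splitting that budget equally. Below is a minimal sketch of one greedy way to do the split, line by line, assuming parts are reduced to their Y-extent in belt lines (one line per millisecond, as above). All names (Part, fair_share_schedule) and the min_dwell hysteresis constant are placeholders of mine, not from any particular library.]

```python
from dataclasses import dataclass

@dataclass
class Part:
    name: str
    y_start: int    # first belt line on which the part is present
    y_end: int      # last belt line on which the part is present
    dwell: int = 0  # accumulated sensor time, in belt lines

def fair_share_schedule(parts, n_lines, min_dwell=5):
    """Pick, for each belt line (~1 ms), which part the sensor dwells on.

    Greedy "fair share": stay with the current part for at least
    min_dwell consecutive lines (to avoid thrashing), then switch to
    whichever part under the sensor has the least accumulated dwell.
    Returns one part name per belt line (None for empty lines).
    """
    plan, current, run = [], None, 0
    for y in range(n_lines):
        active = [p for p in parts if p.y_start <= y <= p.y_end]
        if not active:
            plan.append(None)
            current, run = None, 0
            continue
        if current not in active or run >= min_dwell:
            current, run = min(active, key=lambda p: p.dwell), 0
        current.dwell += 1
        run += 1
        plan.append(current.name)
    return plan

# Three overlapping parts in one frame: B and C get "fair" shares where
# they overlap A, while A alone gets "more than fair" time at the ends.
parts = [Part("A", 0, 50), Part("B", 10, 30), Part("C", 20, 45)]
print(fair_share_schedule(parts, 60))
```

This is O(lines x parts) per frame with no lookahead, so it stays cheap on processor time; the cost is that it can't anticipate a part that is about to leave the frame.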
Kevin P Posted July 19, 2007

Don't have an answer here, just some questions in hopes that they may be useful mental prods for someone.

1. Based on the earlier camera, I assume you've got X-Y boundaries pretty well defined for the expected parts. Is this right?
2. At the time of scheduling your X-axis sensor, are you able to know/predict how much Y movement occurs for any given delta X? I.e., do you know the speed ratio of belt and sensor?
3. What's the nature of the detailed inspection? Relative to the original X-Y boundaries of the part, does the detailed sensor produce a single data point of measurement, a small linear image vector, or a small X-Y image array? Or is it a case where your inspection measurement improves as you allow the sensor more time to collect?
4. Are all objects equally time-consuming to inspect? Do you need more points for larger objects? Is it preferable to inspect from a position near the centroid of an object, or does it not matter?

The way I'm thinking, you need to consider both the Y-extent of each of the objects you'd like to visit and inspect, and the delta-X proximity from the sensor's most recent position. Some priority must be given to objects that will soon move beyond the sensor. So you've got some sort of path-generation problem where you must land on certain points within objects using little line segments. The line segments are constrained to be either purely in Y (hold the sensor stationary as the belt moves by) or diagonal with constant slope (based on constant belt speed divided by maximum sensor movement speed); see the sketch after this post.

I don't know the field, but I'd also guess there are some algorithms out there that do this kind of thing if your X-Y targets are fixed (centroids). It may be tougher to optimize if you physically cannot hit all targets and you must make decisions about which and how many targets to miss. It also may be tougher if you try to consider landing anywhere within an object's X-Y boundaries rather than specifically targeting a single point such as its centroid.

-Kevin P.
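[Editor's note: the segment constraint Kevin describes can be made concrete. The belt fixes the deadline for each target (when its Y line arrives at the sensor), so a segment from one target to the next is feasible exactly when the X travel, plus any settling time, fits inside that window. A toy feasibility check under those assumptions; the function name, arguments, and example numbers are all mine:]

```python
def reachable(x_now, y_now, x_target, y_target, v_belt, v_sensor, settle=0.0):
    """Feasibility of one path segment: the belt sets the deadline
    (when line y_target reaches the sensor), and the sensor must cover
    the X distance, plus settling time, before then.  x and y share
    length units; v_belt and v_sensor share time units."""
    time_available = (y_target - y_now) / v_belt
    time_needed = abs(x_target - x_now) / v_sensor + settle
    return time_needed <= time_available

# E.g., belt at 0.1 m/s, sensor at 1 m/s, 5 ms settle:
print(reachable(0.0, 0.0, 0.3, 0.05, v_belt=0.1, v_sensor=1.0, settle=0.005))
```

A path planner could run this check pairwise over candidate targets, pruning segments that fail before trying to sequence the rest.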
torekp Posted July 19, 2007 (Author)

QUOTE(Kevin P @ Jul 18 2007, 01:41 PM)
Don't have an answer here, just some questions in hopes that they may be useful mental prods for someone. -Kevin P.

1. Right, the X-Y boundaries are pretty well defined.
2. The speed ratio of belt and sensor is known, and max sensor speed >> belt speed.
3. The sensor produces a small X-Y image array, where "resolution" improves as you allow the sensor more time to collect.
4. Objects are as time-consuming to inspect as I want them to be, so to speak. It's probably more important to get large-area objects well inspected than small-area ones, but I haven't gotten far enough to worry about that yet. It probably is preferable, to a modest degree, to inspect from a position near the centroid of an object - or rather, not too close to the edge.

I've decided that delta-X proximity from the sensor's most recent position is not too important. The sensor is fast enough to get where it needs to go, as long as it can settle there for a little while after arriving.
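[Editor's note: this answer implies the real cost is jump count, not jump distance, since each jump burns a fixed settle time out of the sqrt-accuracy budget. A back-of-the-envelope comparison; the 3 ms settle time and all names are made up for illustration:]

```python
import math

def accuracy_proxy(window_ms, n_parts, n_jumps, settle_ms=3.0):
    """Figure of merit for one cluster: accuracy ~ sum of sqrt(dwell)
    over the parts, after every jump burns settle_ms of the shared
    window.  Assumes the remaining useful time is split equally."""
    useful = window_ms - n_jumps * settle_ms
    if useful <= 0:
        return 0.0
    return n_parts * math.sqrt(useful / n_parts)

# Three parts sharing a 60 ms window: visit each once (2 in-cluster
# jumps) vs. ping-ponging between them (8 jumps).
print(accuracy_proxy(60, 3, 2))   # ~12.7
print(accuracy_proxy(60, 3, 8))   # ~10.4
```

This favors schedules like the greedy one sketched earlier, which dwells for a stretch before switching rather than re-balancing on every belt line.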