Everything posted by ensegre

  1. I was considering that the normal OS behaviour, for clicks outside the foreground window in a windowed OS, is to bring another application to the foreground. In that case LV keeps running as a background application, having no notion of concurrent processes or of the compositing of windows the OS assigns to them. So it is no wonder that LV itself has no notion of "my window is no longer in the foreground", but only, at best, of "is the front panel of this VI the frontmost among all LV windows". Trapping external mouse clicks might therefore be possible only through OS calls, polling the focus the OS assigns to application windows. What is the UI effect sought, though? If it is preventing or trapping outside clicks, what about workarounds like: a) make the said FP modal; b) create a dummy transparent VI (not available on Linux, I fear) with a FP devoid of any toolbar and scrollbar, which fills the screen for the sole purpose of trapping clicks?
  2. Again, refer to the examples above, just give the same color to each ActivePlot. One strategy could be not to delete plots, but to replace their data with NaN so the number of plots stays constant. Another may be to set the Plot.Visible? property to false individually for the plots to be hidden, but having to do that in a loop may be slow too. Try.
  3. Enforcing the plot color was already shown above, and was commented on as being somewhat slow; to the point that a better strategy could be not to change the colors but to cycle the data.
  4. If Python-like indexing is implied, the OP is probably wrong.

     $ python
     Python 2.7.6 (default, Jun 22 2015, 17:58:13)
     [GCC 4.8.2] on linux2
     Type "help", "copyright", "credits" or "license" for more information.
     >>> a=[7,6,5,4,3,2,1]
     >>> a[2:]
     [5, 4, 3, 2, 1]
     >>> a[:3]
     [7, 6, 5]
     >>> a[-1]
     1
     >>> a[4]
     3
     >>> a[-4:-2]
     [4, 3]

     https://docs.python.org/3/tutorial/introduction.html#lists
     https://docs.python.org/3/tutorial/introduction.html#strings
  5. Not exactly. Sensor pixels are usually just a few um wide; different sensors have different sizes. The demagnification of your lens has to be such that it projects a detail of 800um length over 2 or 4 or so sensor pixels. In my view, simplifying: if the optical system is fine enough, and you have say a white pixel followed by a black pixel, and the centers of the pixels image two points which are 800um apart, you have enough basis to say that the edge falls in between. If you have a white, a grey and a black pixel, you'll say that the edge falls somewhere close to the center of the second pixel; the whiter that pixel, the closer the edge to the third. To be rigorous you should normalize the image and have a proper mathematical model for the amount of light collected by a pixel; to be sketchy, let's say that if x1, x2, x3 are the coordinates of the image projected on the three pixels, and 0<y<1 is the normalized intensity of the second pixel, the edge is presumed to lie at x1+(x2-x1)/2+(x3-x2)*y. This is subpixel interpolation. (A common way this is done mathematically, for point features, is computing the correlation between the image and a template pattern, fitting the correlation peak with a gaussian or a paraboloid, and deriving the subpixel center coordinates from the parametric fit.) Then you have to move either the product or the camera perpendicularly to the optical axis, which I presume you'd want perpendicular to the focal plane. But if you put together an optical system able to have ~19" in view in one dimension, why should you need to move at all to see 20" in the other? Search the net for "motorized zoom lenses", "zoom camera", or the like. Here are a few links, but absolutely no experience, nor endorsement.
     http://www.theimagingsource.com/products/zoom-cameras/gige-monochrome/dmkz30gp031/
     http://www.tokina.co.jp/en/security/cctv-lenses/standard-motorized-zoom-lens/
     https://www.tamron.co.jp/en/data/cctv/m_index.html
     Otherwise I would have said: just rig up a pulley connected to a stepper motor, actuating the manual lens focus... Before getting into that, I'd say: first choose a camera with a sensor of the resolution, speed and quality you desire. Once you know the sensor size, look (or ask a representative) for a suitable lens, with enough resolving power and the capability of imaging 20" at the working distance you choose. Once you have alternatives there, check if by chance you're already able to work at fixed focus with a closed enough iris (https://en.wikipedia.org/wiki/Circle_of_confusion).
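The three-pixel estimate described in the post above can be sketched in a few lines; this is a toy illustration of the stated formula, not a full edge detector, and the numbers in the example are assumptions (0.8 mm pixel pitch on the object, a half-grey middle pixel):

```python
def subpixel_edge(x1, x2, x3, y):
    """Estimate a white->black edge position from three adjacent pixel
    center coordinates and the normalized intensity y (0..1) of the
    middle pixel, per the formula x1 + (x2-x1)/2 + (x3-x2)*y."""
    return x1 + (x2 - x1) / 2 + (x3 - x2) * y

# Pixel centers 0.8 mm apart on the object; middle pixel half grey.
pos = subpixel_edge(0.0, 0.8, 1.6, 0.5)
print(pos)  # 0.8: the edge is estimated at the center of the middle pixel
```

Note that a whiter middle pixel (y closer to 1) pushes the estimate toward the third pixel, as the post says.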
  6. Obscure compiler optimizations, I would guess. Possibly platform dependent, maybe related to the fact that you create buffers for the empty arrays at the unconnected exit tunnels. Hint: show buffer allocations; I note for example that a buffer is shown as created twice on the output logic array for case 0, and once for case 1. For the limited value that this kind of benchmarking has on a non-RTOS, I get slightly different timings, with a minor decrease between 0 and 1 rather than between 1 and 2:
     0: 23.96±0.35, 24.04±0.20, 24±0.29
     1: 23.6±0.58
     2: 25.56±0.58, 25.52±0.51
     3: 33.32±0.69
     I also remark that timings change slightly if I remove case 3:
     0: 23.88±0.38
     1: 23.69±0.49
     2: 24.41±0.51
     and further decrease if I delete the clock wire inside each case connecting to the unused input tunnel of the outermost loop.
  7. My bet is that you could even get along with less than 2px/800um, if I understand your problem. I presume 800um lateral resolution, not vertical. I'd say with any decent lens for sure, provided the camera is properly centered on the object, the object is plane, the focal surface of the lens is plane, the lens is well focused, etc. Do you need to keep in focus different PCBs/different cut-out recesses at different levels at the same time? If so, it is more a question of depth of focus than of FOV. Otherwise an adjustable focus may be simpler to motorize than a vertical translation. With a telecentric lens you'd also keep the same magnification at any depth; is that as important to you?
  8. 20*25.4/0.8 = 635. How do you get to 11000? Let's say you allow for some PSF blur and you overresolve your details x4 (which is already a lot). You'd then be in the range of a 5Mp sensor, nothing special these days. If your illumination conditions and sensor noise are good, you may even be able to rely on subpixel interpolation and get along with a smaller resolution. Normally the working distance is mainly dictated by the space available; based on that and on the sensor size, you choose a lens with a suitable focal length and resolving power. If you are designing the system and have some freedom on the working distance, go for a good lens first and use the resulting distance. Are your objects more than a cm deep? Then I subscribe to telecentric. I don't have rules of thumb, but I would say, very tentatively, that more than say 5% depth/working distance requires either a large f-stop or telecentric. Translation stages add a mechanical complication, whereas you can achieve a reasonably precise measurement with deforming optics and a fixed point of view, if you calibrate it geometrically.
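The arithmetic above can be checked back-of-envelope; the x4 oversampling factor is the one assumed in the post, not a rule:

```python
# Pixels needed along the 20" dimension to resolve 800 um details.
fov_mm = 20 * 25.4     # 20 inch field of view, in mm
detail_mm = 0.8        # 800 um smallest detail
oversample = 4         # assumed pixels per detail, already generous

pixels_needed = fov_mm / detail_mm * oversample
print(pixels_needed)   # 2540 pixels: consistent with a ~5 Mp sensor
```

2540 pixels in the long dimension, times a comparable short dimension, lands in the ~5 Mp range the post mentions, an order of magnitude below 11000.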
  9. It is certainly a viable idea, but only for those cases where cell sizes and font properties don't need to be customized on a per-cell basis, which is not possible with array elements. Also, thick borders may be somewhat intrusive. The LV table is lousy, but still a notch more customizable than arrays... Been there, done that... (with some of the optimizations mentioned above)
  10. Right, on second attempt it is working. (I was always choosing from the palette, wasn't I?) So did I mistype ExternalNodesEnabled the first time? QD also seems ok. Thx! I guess I can't ask about wire adaptation rules, nor about "You linked something to me! Here is its help stuff instead of my usual dummy text and link... enjoy!", but I'd have to figure them out, otherwise where's the fun.
  11. And are they supposed to work as such there? Perhaps only with an appropriate key? (I have tried ExternalNodesEnabled=True and XnodeWizardMode=True, to no avail.) I have noticed for many versions now the presence in the palette of
     /usr/local/natinst/LabVIEW-2011/user.lib/macros/ExponentialAverage.vim
     /usr/local/natinst/LabVIEW-2011/user.lib/macros/IncrementArrayElement.vim
     /usr/local/natinst/LabVIEW-2011/user.lib/macros/IsArrayEmpty.vim
     /usr/local/natinst/LabVIEW-2011/user.lib/macros/IsValueChanged.vim
     but never understood what those type-limited, odd, leftover-like .vi(m!) files were dropped there for. In fact, I even happened to grab and modify IsValueChanged.vi, changing its input to variant "for more generality", for use in my code. And, as for https://decibel.ni.com/content/docs/DOC-43686 Or am I missing something?
  12. What I'm led to understand is that the OP wants to emulate this: http://www.plexon.com/products/omniplex-software. I presume that various spike templates are drawn by hand with a node tool (more than just one threshold), and that the computational engine does statistical classification of everything identified as a spike event in the vast stream of incoming electrophysiological data. Classification could label, say, each spike according to the template which comes closest in the rms sense, but PCA may be more sound. Now we are at the level of "how to plot many colored curves". At some point we'll be at "how to draw a polyline with draggable nodes", then "how to compute PCA of a 2D array"... Or? Don't ask, I have my adventures with neurobiologists too...
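The "closest template in rms sense" labeling suggested above can be sketched in a few lines; the toy templates and spike below are made-up illustrations, not real electrophysiology data:

```python
import math

def classify(spike, templates):
    """Return the index of the template with the smallest RMS distance
    to the spike; spike and templates are equal-length sample lists."""
    def rms(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))
    return min(range(len(templates)), key=lambda i: rms(spike, templates[i]))

templates = [[0, 1, 0], [0, -1, 0]]           # two toy spike shapes
print(classify([0.1, 0.9, -0.1], templates))  # 0: closest to the first
```

PCA-based clustering would project the spikes onto a few principal components first and classify in that reduced space, which is what packages like the one linked typically do.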
  13. Is this one relevant? (Not tried, just discovered now.)
  14. Hm. Still, we started by telling the OP: don't look for tricks, don't try to plot all data blindly. I fear that with this or that optimization the idea can hold with acceptable performance up to some ringbuffer limit, taxing cpu and buffer allocations, but at some point design should kick in, or else. I didn't really time it...
  15. FWIW, I realized that in my previous snippet the waveform graph needed several seconds of initialization time just because of the loop setting the plot colors; that vanishes if I initialize the ringbuffer with NaNs instead of zeros. (Maybe hiding the graph while setting could do, too; I haven't tried.) Also, my snippet with independent loops is clearly just an example and not the real thing: one should plot only if there is really new data, and at a lower rate; some mechanism of notification must be in place. As said, application design, not a magic bullet. Agreed. If changing the colors of a group of curves is expensive (and moreover these have to be brought to the foreground), something must be happening behind the scenes.
  16. Just tried this out of curiosity, and no, obviously replotting the whole ringbuffer periodically is not a good idea; at least not like this. On my system it takes about 5 secs each time. But OTOH, to my great surprise, a plain waveform graph with 500 plots is faster. YMMV...
  17. This may depend on many factors: the acceptable cpu load on your system for that part of the application, the pixel size of your plot, your graphics card, and just about everything else. You have to find out in your working conditions. From the ringbuffer you have the data of the oldest curve, which you're going to discard, and of the newest, which is going to replace it. If it is a waveform graph, you can access its data, replace the relevant wave, and replot everything (that is going to be somewhat slow). You could begin with a waveform graph initialized with N NaN-filled waves, and replace them with data as it comes in, to avoid growing the data, but I doubt that it will make a difference at regime. Maybe the histogram way is still the easiest, because in its way the histogram flattens the data, yet still makes it easy to add +1 for the new points and -1 to remove the old ones. For a change, you could just binarize the histogram for plotting, and plot all bins with a count>0 in the same color. The limitation of the histogram, in the form I sketched it, is that bins are filled according to the datapoints falling in them and not according to the segments traversing them, but perhaps that can be worked out too. I mean: complex requirement, complex solution to be sought.
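The "N NaN-filled waves, replace the oldest" idea above amounts to a fixed-size ring buffer of curves; here is a minimal language-neutral sketch (toy sizes, plain lists standing in for the graph's wave data):

```python
# Fixed-size ring buffer of curves: the plotted data never grows,
# each new curve overwrites the oldest slot in place.
N, SAMPLES = 5, 4                      # toy sizes for illustration
ring = [[float('nan')] * SAMPLES for _ in range(N)]
oldest = 0                             # index of the wave to replace next

def push(curve):
    """Overwrite the oldest wave with a new curve."""
    global oldest
    ring[oldest] = list(curve)
    oldest = (oldest + 1) % N

for k in range(7):                     # push 7 curves into 5 slots
    push([k] * SAMPLES)
# ring now holds curves 5, 6, 2, 3, 4: curves 0 and 1 were overwritten
```

The NaN initialization matters because NaN points are simply not drawn, so the empty slots cost nothing on screen.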
  18. Plotting over a flattened pixmap (rather than incrementally adding new plot data to the picture; that's the trick) can be fast, and has the same cost no matter the previous image content. Like this: I guess that periodically replotting the whole lot of curves would become too expensive even this way. In a case like this, perhaps one should manipulate the pixmap directly: erase the pixels of the oldest curve first with an xor operation, and then overplot the newest. Unfortunately the internals of the picture functions, for attempting something like that, are not well documented, IIRC. I think that, a long time ago, I ran into some contribution which cleverly worked them out. Maybe some other member is able to fill in?
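The xor-erase trick mentioned above relies on XOR being its own inverse: drawing the same curve twice restores the original pixels exactly. A toy demonstration on a bare 2D array standing in for the pixmap (sizes and color value are arbitrary assumptions):

```python
# XOR drawing on a toy pixmap: drawing a curve a second time erases it,
# with no need to remember or redraw what was underneath.
W, H, COLOR = 8, 4, 0xFF               # toy pixmap size and plot color
pixmap = [[0] * W for _ in range(H)]

def xor_curve(ys):
    """XOR one pixel per column, at row ys[x]; call twice to erase."""
    for x, y in enumerate(ys):
        pixmap[y][x] ^= COLOR

curve = [0, 1, 2, 3, 3, 2, 1, 0]
xor_curve(curve)                       # draw the curve
xor_curve(curve)                       # erase it: pixmap is all zeros again
```

The caveat is that where two curves cross, XOR leaves a mixed color until both are erased, which is why scopes using this trick show odd colors at intersections.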
  19. The trick is to get the Edit Position first, set Key Focus to false, get the (now updated) value and process it, set Key Focus to true again, and then restore the previous Edit Position. For an application of mine I had to process arrows with and without modifiers, tabs and whatnot, sometimes jumping columns besides correcting the entered values, and allow only [0-9] input in some cells and text in others; you can go quite far in tweaking the standard table navigation behaviour if you invest time in implementing all the cases.
  20. I somehow have the impression that you're assuming there should be some magic option of the LV plot widget, as is, which saves you from application design. What you show about hand-drawing a template curve and classifying traces hints that there is more computation going on behind the scenes. The display in the plot window must be a somewhat sophisticated GUI, only giving the user a feeling of what data is being accumulated and how to interact with it, quite likely not making the mistake of replotting the whole dataset for every new curve acquired (that would be expensive). The right question might be whether you can use the LV graph widget for what it gives, or rather the picture widget as Shaun suggested above (that would overplot directly to the pixmap, I guess, and be significantly faster), as a starting point for building such a GUI. The LV graph widget, optimized as it may be, is slow because it is complex: it handles autoscaling, an arbitrary number of curves, arbitrary plotting styles, filling and whatnot, and this comes at a price. Someone may correct me, but I don't even think it can handle adding new curves incrementally, without replotting the whole bunch. If I had to build something like this, I would store the data in memory appropriately and without connection to the graph; I would consider updating that display only periodically, fast enough to give the user a feeling of interactivity, and track user clicks on the graph area in order to replot the data with the right highlights, to give the user the impression s/he is playing with it.
  21. I think you should part with the idea of regenerating the plot at every new event; storing vast data for later analysis is one thing, display for GUI feedback another. Instead of trying to plot 5000 curves at every iteration of the loop, you could try maintaining a different loop which plots, at a lower rate, the most recent N curves, with N suitably large. Another approach I thought of is using an intensity plot to display a 2D histogram: grid the time-amplitude space into bins, and increment each bin every time a curve passes through it (you have to write the code for it; I don't know of a ready source). Thus you get a plot similar to that of a scope with infinite persistence, brighter where traces overlap. Different loop and lower rate, ditto.
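The 2D persistence histogram sketched above can be illustrated in a few lines; bin counts, sizes and the sample curves are toy assumptions, and for simplicity each curve is given as one amplitude-bin index per time bin (so segments crossing bins are not counted, the limitation noted in post 17):

```python
# Toy "infinite persistence" histogram: each curve increments the bins it
# passes through; removing the oldest curve decrements them again.
BINS_T, BINS_A = 4, 4                  # time and amplitude bins (assumed)
hist = [[0] * BINS_T for _ in range(BINS_A)]

def add_curve(samples, sign=+1):
    """samples: one amplitude bin index per time bin; sign=-1 removes."""
    for t, a in enumerate(samples):
        hist[a][t] += sign

add_curve([0, 1, 2, 3])                # a new trace arrives
add_curve([0, 1, 1, 3])                # another, overlapping in two bins
add_curve([0, 1, 2, 3], sign=-1)       # the oldest trace drops out
# hist now holds count 1 only in the second curve's bins
```

Feeding `hist` to an intensity plot gives the scope-like persistence display, brighter where traces overlap; the incremental +1/-1 update is what keeps it cheap regardless of how many curves are stored.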