Everything posted by ensegre

  1. On Linux, avconv is powerful, command-line based, and option-rich. I have used it routinely to create .mov from .wmv, for instance. For automatic conversion, I imagine one could set up a script which checks for new files in a given folder and initiates conversion once they have stopped growing for longer than N seconds, or something like that (see the sketch below). It never occurred to me to stream content through a pipe for on-the-fly conversion, but that might be possible too. If Windows is required, short of seeing whether something can be run under MinGW/Cygwin, I see that libav provides Windows builds, but I haven't looked into how they operate.
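     A minimal sketch of such a watch-and-convert script, in Python for illustration; the watch folder, the N=10s quiet time and the plain avconv invocation are assumptions to adapt:

         import os, time, subprocess

         WATCH = "/path/to/incoming"   # hypothetical watch folder
         QUIET = 10                    # seconds without growth before converting

         seen = {}                     # path -> (last size, time of last size change)
         done = set()

         while True:
             for name in os.listdir(WATCH):
                 if not name.lower().endswith(".wmv"):
                     continue
                 path = os.path.join(WATCH, name)
                 if path in done:
                     continue
                 size = os.path.getsize(path)
                 last_size, last_change = seen.get(path, (None, time.time()))
                 if size != last_size:
                     seen[path] = (size, time.time())        # still growing
                 elif time.time() - last_change > QUIET:     # stable long enough
                     out = os.path.splitext(path)[0] + ".mov"
                     subprocess.call(["avconv", "-i", path, out])
                     done.add(path)
             time.sleep(1)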
  2. (This probably echoes what was also said in the other thread.) If the reference can become invalid in the course of the iterations, and I want to check it specifically at each iteration, I'd put it in a shift register; if it is sufficient to know what the initial ref was, I'd go for 1. or 3. If the reference going invalid during the execution of the inner VI implies an error at its output, I'd put a shift register on the error wire. But most importantly: if there is any chance that the loop executes 0 times, use 2., not 1.
  3. https://en.wikipedia.org/wiki/Lakh
  4. Thread starvation might become an issue eventually, but only if there are some thread-locking calls, e.g. orange CLFNs. But anyway, I would give some thought to whether the proposed architecture is really sound and couldn't be factored differently. For example, would it scale gracefully if a fourth, a fifth instrument and so on need to be added later (see the sketch below)? In the middle level, why does a cluster of queue references need to live on the shift register? How does the producer loop address the right queue, and how easy would it be to add more commands/instruments? What would be the stop conditions of the inner and outer while loops? Is it OK that each "measurement" command produced by the top level translates to independently executed instrument commands, or is there some sequentiality and interdependence to be accounted for?
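     To illustrate just the scaling point, a minimal Python sketch with one command queue per instrument, keyed by name; the instrument names and commands are made up:

         import queue, threading

         # One queue per instrument: adding a fourth or fifth instrument
         # is one more dict entry, not a structural change.
         queues = {name: queue.Queue() for name in ("scope", "dmm", "stage")}
         STOP = object()               # sentinel for a clean shutdown

         def consumer(name, q):
             while True:
                 cmd = q.get()
                 if cmd is STOP:
                     break
                 print(f"{name}: executing {cmd}")

         workers = [threading.Thread(target=consumer, args=(n, q))
                    for n, q in queues.items()]
         for w in workers:
             w.start()

         # Producer: one "measurement" fans out to the relevant instruments.
         queues["scope"].put("arm")
         queues["dmm"].put("read")

         for q in queues.values():
             q.put(STOP)
         for w in workers:
             w.join()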
  5. Does 86 mean LabVIEW 8.6? At that time, IIRC, IMAQdx was provided as an unsupported add-on rather than as a device driver component, so the location, and probably the naming, of vaguely equivalent VIs may have changed. "IMAQ USB Init.VI" may be roughly "IMAQdx Open Camera.VI", for one. It may be that you can still download the package from NI Support, but if you have LV2014, the sane thing would be to use the IMAQdx coming with it.
  6. Are you sure it is not crosstalk in your sound card? Does it happen with other sound generators on the same system as well?
  7. I was considering that the normal OS behaviour for clicks outside the foreground window, in a windowed OS, is to bring another application to the foreground. In that case LV keeps running as a background application, having no notion of concurrent processes and of the compositing of windows assigned to them by the OS. So no wonder that LV itself has no notion of "my window is no longer in the foreground", but only, at best, of "is the front panel of this VI the frontmost among all LV windows". So trapping external mouse clicks might be possible only through OS calls, polling the focus assigned to application windows by the OS (see the sketch below). What is the UI effect sought, though? If it is preventing or trapping outside clicks, what about workarounds like a) making the said FP modal, or b) creating a dummy transparent VI (not available on Linux, I fear) with an FP devoid of any toolbar and scrollbar, which fills the screen for the sole purpose of trapping clicks?
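     On Windows, such polling could look like the ctypes sketch below (an assumption for illustration; on Linux one would have to query the window manager instead):

         import ctypes, time

         user32 = ctypes.windll.user32

         def foreground_title():
             hwnd = user32.GetForegroundWindow()   # whatever window has focus
             buf = ctypes.create_unicode_buffer(256)
             user32.GetWindowTextW(hwnd, buf, 256)
             return buf.value

         last = None
         while True:
             title = foreground_title()
             if title != last:                     # focus changed
                 print("foreground now:", title)
                 last = title
             time.sleep(0.2)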
  8. Again, refer to the examples above; just give the same color to each ActivePlot. A strategy could be not to delete plots, but to replace their data with NaN so as to keep the number of plots constant (see the sketch below for the idea). Another may be to set the Plot.Visible? property to false individually for the plots to be hidden, but having to do that in a loop may be slow too. Try.
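     The NaN trick, sketched in matplotlib rather than LabVIEW just to show the idea:

         import numpy as np
         import matplotlib.pyplot as plt

         x = np.arange(100)
         lines = [plt.plot(x, np.sin(0.1 * x + p))[0] for p in range(5)]

         # "Hide" the fourth plot without deleting it: overwrite its data
         # with NaN, so the number of plots (and their colors) stays constant.
         lines[3].set_ydata(np.full(x.shape, np.nan))
         plt.show()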
  9. Enforcing the plot color, as was already shown above and commented on as being somewhat slow. To the point that a better strategy could be not to change the colors but to cycle the data.
  10. If Python-like indexing is implied, the OP is probably wrong:

          $ python
          Python 2.7.6 (default, Jun 22 2015, 17:58:13)
          [GCC 4.8.2] on linux2
          Type "help", "copyright", "credits" or "license" for more information.
          >>> a=[7,6,5,4,3,2,1]
          >>> a[2:]
          [5, 4, 3, 2, 1]
          >>> a[:3]
          [7, 6, 5]
          >>> a[-1]
          1
          >>> a[4]
          3
          >>> a[-4:-2]
          [4, 3]

      https://docs.python.org/3/tutorial/introduction.html#lists
      https://docs.python.org/3/tutorial/introduction.html#strings
  11. Not exactly. Sensor pixels are usually just a few um wide; different sensors have different sizes. The demagnification of your lens has to be such that it projects a detail of 800um length onto 2 or 4 or so sensor pixels. In my view, simplifying: if the optical system is fine enough, and you have, say, a white pixel followed by a black pixel, and the centers of the pixels image two points which are 800um apart, you have enough basis to say that the edge falls in between. If you have a white, a grey and a black pixel, you'll say that the edge falls somewhere close to the center of the second pixel; the whiter that pixel, the closer the edge lies to the third. To be rigorous you should normalize the image and have a proper mathematical model for the amount of light collected by a pixel; to be sketchy, let's say that if x1, x2, x3 are the coordinates of the image projected on the three pixels, and 0<y<1 is the intensity of the second pixel, the edge is presumed to lie at x1+(x2-x1)/2+(x3-x2)*y. This is subpixel interpolation (see the sketch below). (A common way this is done mathematically, for point features, is computing the correlation between the image and a template pattern, fitting the correlation peak with a Gaussian or a paraboloid, and deriving the subpixel center coordinates from the parametric fit.)
      Then you have to move either the product or the camera perpendicularly to the optical axis, which I presume you'd want perpendicular to the focal plane. But if you put together an optical system which is able to view ~19" in one dimension, why should you need to move at all to see 20" in the other? Search the net for "motorized zoom lenses", "zoom camera", or the like. Here are a few links, with absolutely no experience nor endorsement behind them:
      http://www.theimagingsource.com/products/zoom-cameras/gige-monochrome/dmkz30gp031/
      http://www.tokina.co.jp/en/security/cctv-lenses/standard-motorized-zoom-lens/
      https://www.tamron.co.jp/en/data/cctv/m_index.html
      Otherwise I would have said: just rig up a pulley connected to a stepper motor, actuating the manual lens focus...
      Before getting into that, I'd say: first choose a camera using a sensor with the resolution, speed and quality you desire. Once you know the sensor size, look (or ask a representative) for a suitable lens, with enough resolving power and the capability of imaging 20" at the working distance you choose. Once you have alternatives there, check whether by chance you're already able to work at fixed focus with a closed enough iris (https://en.wikipedia.org/wiki/Circle_of_confusion).
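      The three-pixel estimate above, as a tiny Python sketch (the function name and the uniformly spaced test coordinates are mine, for illustration):

          def subpixel_edge(x1, x2, x3, y):
              # Edge position from three pixel center coordinates x1, x2, x3
              # and the normalized intensity y (0..1) of the middle pixel.
              return x1 + (x2 - x1) / 2 + (x3 - x2) * y

          print(subpixel_edge(0.0, 1.0, 2.0, 0.0))  # 0.5: midway between pixels 1 and 2
          print(subpixel_edge(0.0, 1.0, 2.0, 1.0))  # 1.5: midway between pixels 2 and 3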
  12. Obscure compiler optimizations, I would guess. Possibly platform dependent, maybe related to the fact that you create buffers for the empty arrays at the unconnected exit tunnels. Hint: show buffer allocations; I note for example that a buffer is shown as created twice on the output logic array for case 0, and once for case 1. For the limited value that this kind of benchmarking has on a non-RTOS, I get slightly different timings, with a minor decrease between 0 and 1 rather than between 1 and 2:

          0:  23.96-0.35  24.04-0.20  24-0.29
          1:  23.6-0.58
          2:  25.56-0.58  25.52-0.51
          3:  33.32-0.69

      I also remark that the timings change slightly if I remove case 3:

          0:  23.88-0.38
          1:  23.69-0.49
          2:  24.41-0.51

      and decrease further if I delete the clock wire inside each case connecting to the unused input tunnel of the outermost loop.
  13. My bet is that you could even get along with less than 2px/800um, if I understand your problem. I presume 800um lateral resolution, not vertical. I'd say so with any decent lens, for sure - provided the camera is properly centered on the object, the object is plane, the focal surface of the lens is plane, the lens is well focused, etc. Do you need to keep in focus different PCBs/different cut-out recesses at different levels at the same time? If so, it is more a question of depth of focus than of FOV. Otherwise, adjustable focus may be simpler to motorize than vertical translation. With a telecentric lens you'd also keep the same magnification at any depth; is that as important to you?
  14. 20*25.4/0.8 = 635. How do you get to 11000? Let's say you allow for some PSF blur and you overresolve your details x4 (which is already much); see the arithmetic below. You'd then be in the range of a 5Mp sensor, nothing special these days. If your illumination conditions and sensor noise are good, you may even be able to rely on subpixel interpolation and get along with a smaller resolution. Normally the working distance is mainly dictated by the space available; based on that and on the sensor size, you choose a lens with a suitable focal length and resolving power. If you are designing the system and have some freedom on the working distance, go for a good lens first and use the resulting distance. Are your objects more than a cm deep? Then I subscribe to telecentric. I don't have rules of thumb, but I would say, very tentatively, that more than say 5% depth/working distance requires either a large f-stop or telecentric. Translation stages add a mechanical complication, whereas you can achieve a reasonably precise measurement with deforming optics and a fixed point of view, if you calibrate it geometrically.
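      The pixel-count arithmetic spelled out, with the x4 oversampling assumed above:

          fov_mm     = 20 * 25.4    # 20" field of view -> 508 mm
          detail_mm  = 0.8          # smallest feature to resolve
          oversample = 4            # pixels per feature, allowing for PSF blur

          details = fov_mm / detail_mm      # 635 resolvable details
          pixels  = details * oversample    # 2540 px along the 20" axis
          print(details, pixels)            # 635.0 2540.0

      2540 px on the long axis of a typical sensor lands in the few-Mp class, nowhere near 11000.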
  15. It is certainly a viable idea, but only for those cases where cell sizes and font properties don't need to be customized on a per-cell basis, which is not possible with array elements. Also, thick borders may be somewhat intrusive. The LV table is lousy, but still a notch more customizable than arrays... Been there, done that... (with some of the optimizations mentioned above)
  16. Right, on second attempt it is working. (I was always choosing from the palette, wasn't I?) So did I mistype ExternalNodesEnabled the first time? QD also seems OK. Thx! I guess I can't ask about wire adaptation rules, nor about "You linked something to me! Here is its help stuff instead of my usual dummy text and link... enjoy!", but I'd have to figure them out myself, otherwise where's the fun.
  17. And are they supposed to work as such there? Perhaps only with an appropriate key? (I have tried ExternalNodesEnabled=True and XnodeWizardMode=True, to no avail.) I had noticed for many versions the presence in the palette of
      /usr/local/natinst/LabVIEW-2011/user.lib/macros/ExponentialAverage.vim
      /usr/local/natinst/LabVIEW-2011/user.lib/macros/IncrementArrayElement.vim
      /usr/local/natinst/LabVIEW-2011/user.lib/macros/IsArrayEmpty.vim
      /usr/local/natinst/LabVIEW-2011/user.lib/macros/IsValueChanged.vim
      but never understood what those type-limited, odd, leftover-like .vi(m!) files were dropped there for. In fact, I even happened to grab and modify IsValueChanged.vi, changing its input to variant "for more generality", for use in my code. And then there is https://decibel.ni.com/content/docs/DOC-43686. Or am I missing something?
  18. What I'm led to understand is that the OP wants to emulate this: http://www.plexon.com/products/omniplex-software. I presume that various spike templates are drawn by hand with a node tool (more than just one threshold), and that the computational engine does statistical classification of everything identified as a spike event in the vast stream of incoming electrophysiological data. Classification could label, say, each spike according to the template which comes closest in the RMS sense (see the sketch below), but PCA may be more sound. Now we are at the level of "how to plot many colored curves". At some point we'll be at "how to draw a polyline with draggable nodes", then "how to compute the PCA of a 2D array"... Or? Don't ask, I have my adventures with neurobiologists too...
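      Nearest-template classification in the RMS sense, as a toy Python/NumPy sketch (the hand-drawn templates are faked here with Gaussian bumps):

          import numpy as np

          def classify(spike, templates):
              # Label a spike with the name of the nearest template (RMS sense).
              rms = {name: np.sqrt(np.mean((spike - t) ** 2))
                     for name, t in templates.items()}
              return min(rms, key=rms.get)

          t = np.linspace(0, 1, 32)
          templates = {"unit A": np.exp(-((t - 0.3) / 0.05) ** 2),
                       "unit B": -np.exp(-((t - 0.5) / 0.10) ** 2)}
          spike = templates["unit A"] + 0.1 * np.random.randn(32)
          print(classify(spike, templates))   # most likely "unit A"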
  19. If it helps: persistence5.vi
  20. Is this one relevant? (Not tried; just discovered it now.)
  21. Hm. Still, we started by telling the OP: don't look for tricks, don't try to plot all the data blindly. I fear that with this or that optimization the idea can hold with acceptable performance up to some ringbuffer limit, taxing CPU and buffer allocations, but at some point design should kick in, or else. I didn't really time it...
  22. "mine is faster..."
  23. FWIW, I realized that in my previous snippet the waveform graph needed several seconds of initialization time just because of the loop setting the plot colors, which vanishes if I initialize the ringbuffer with NaNs instead of zeros. (Maybe hiding the graph while setting them could do, too; I haven't tried.) Also, my snippet with independent loops is clearly just an example and not the real thing; one should plot only if there is really new data, and at a lower rate, so some mechanism of notification must be in place. As said, application design, not a magic bullet. Agreed. If changing the colors of a group of curves is expensive (and moreover these have to be brought to the foreground), something must be happening behind the scenes.
  24. Just tried this out of curiosity, and no, obviously replotting the whole ringbuffer periodically is not a good idea; at least not like this. On my system it takes about 5 seconds each time. But OTOH, to my great surprise, a plain waveform graph with 500 plots is faster. YMMV...