Everything posted by ensegre
-
Ok, I traced it down: for me, muParser Demo.vi crashes on its first call of DSDisposePtr in mupGetExprVars.vi. Just saying.
-
I'm opening some random subVIs to check which one segfaults. All those I opened were saved with allow debugging off and separate compiled code off (despite your commit message on GitHub). Any reason for that?
-
Just mentioning, if not OT: something else not supported is booleans (OK, you could use 0 and 1 with + and *). In a project of mine I ended up using this, which is fine but simplistic. I don't remember about performance; considering my application, it may well be that simple expressions evaluated in under a millisecond.
-
It appears that it might be straightforward to make this work on Linux too. In fact, I found out that I already had libmuparser2 2.2.3-3 on my system, pulled in by I don't know which other dependency. Would you consider making provisions for cross-platform use? Usually I wire the CLN library path to a VI providing the OS-relevant string through a Conditional Disable structure; LV should have its own way, like writing just the library name without extension so it resolves to .dll or .so in standard locations, but there may be variants. I just gave it a quick try, replacing all dll paths with my /usr/lib/x86_64-linux-gnu/libmuparser.so.2 (LV2017 64-bit): I get white run arrows and a majestic LV crash as I press them. Subtleties. I could help debugging later, though. Also, to make your wrapper work with whatever version of muparser is installed system-wide: how badly does your wrapper need 2.2.5.modified? How about a version check on opening?
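Not G, but just to illustrate the kind of name resolution and version check I mean, a minimal Python/ctypes sketch; the fallback library names and the mupCreate/mupGetVersion calls are taken from muParser's C API (muParserDLL.h) and are assumptions, untested against this wrapper:

```python
import ctypes
import ctypes.util
import platform

# Let the loader find the library by base name (resolves .dll/.so/.dylib);
# fall back to explicit, platform-dependent names (assumed, not verified).
libname = ctypes.util.find_library("muparser") or {
    "Windows": "muparser.dll",
    "Linux": "libmuparser.so.2",
    "Darwin": "libmuparser.dylib",
}[platform.system()]

mup = ctypes.CDLL(libname)

# A version check on opening: muParser's C API exposes mupGetVersion,
# which takes a parser handle created by mupCreate.
mup.mupCreate.restype = ctypes.c_void_p
mup.mupGetVersion.restype = ctypes.c_char_p
handle = mup.mupCreate(0)
print(mup.mupGetVersion(ctypes.c_void_p(handle)).decode())
```

In LabVIEW the equivalent would be a Conditional Disable structure feeding the platform-dependent path string into the CLN, with the version string compared right after opening.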
-
Perhaps you stacked the decoration frontmost, so that it prevents clicking on the underlying controls? Try selecting the beveled rectangle and pressing Ctrl-Shift-J (or use the last pulldown menu on the toolbar) to move it to the back.
-
Provided that the communication parameters are correct, you should probably initialize once, read not too often, and close only when you're really done. Trying to do that 10000 times per second usually impedes communication.
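Not G, but the shape of that pattern in Python with pyserial; the port name, baud rate, request bytes and reply size are placeholders for whatever the device actually needs:

```python
import time
import serial  # pyserial

# Initialize once, with the device's actual parameters.
port = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1.0)
try:
    while True:
        port.write(b"\x01\x03\x00\x00\x00\x01")  # device-specific request (placeholder)
        reply = port.read(16)                    # read up to the expected reply size
        time.sleep(0.1)                          # poll at a sane rate, not 10 kHz
finally:
    port.close()  # close only when really done
```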
-
If it is this one, you could test communications first with their software, as per page 46 of the manual, to rule out that you have wired the interface incorrectly, or that the USB dongle is defective.
-
It looks wrong from scratch that you're repeatedly initializing and closing the port in a full-throttle loop. Anyway, first things first: not knowing what your device is, and whether the communication parameters and the register address are the right ones, there is little we can say beyond "ah, it doesn't work". It might even be that you didn't wire the device correctly. Is that 2-wire RS485 or 4-wire? Are you positive about the polarities? Do the VIs return any errors?
-
It occurs to me that the 5114 is an 8-bit digitizer, so the OP could get away with just 10 MB/s saving raw data. Well, much less actually: if I get what the OP means, it is 1000 samples acquired at 10 Msps, triggered every ms, so only 1 MB/s.
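Spelling out the arithmetic:

```python
samples_per_trigger = 1000   # 1000-sample record
bytes_per_sample = 1         # 8-bit digitizer
triggers_per_second = 1000   # one trigger every ms

rate = samples_per_trigger * bytes_per_sample * triggers_per_second
print(rate)  # 1_000_000 bytes/s, i.e. ~1 MB/s
# continuous acquisition at 10 Msps x 1 byte/sample would instead be 10 MB/s
```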
-
His TDMS attempt does indeed write a single file, but timestamps the data when it is dequeued by the consumer loop. His first VI, however, uses Write Delimited Spreadsheet.vi within a while loop, with a delay of 1 ms (a misconception: timing is dictated anyway by the elements in the queue) and a new filename at each iteration.
-
Quite likely this is a bad requirement; the combination of your OS/disks is not up to it, and won't be unless you make special provisions for it, like controlling the write cache and using fast disk systems. The way to go, imho, is to stream all this data into big files, with a format which enables indexed access to a specific record. If your data is fixed-size, e.g. 1000 doubles + one double as timestamp, even just dumping everything to a binary file and retrieving it by seek & read is easy (proviso: disk writes are way more efficient if unbuffered and writing an integer number of sectors at a time). TDMS etc. adds flexibility, but at some price (which you can probably afford to pay at only 80 MB/s and a reasonably fast disk); text is the way to completely spoil speed and compactness with formatting and parsing, with the only advantage of human readability.

You say timing is critical to your postprocessing; but alas, do you postprocess your data by rereading it from the filesystem, and expect to do that with low latency? Do you need to postprocess your data online in real time, or offline? And you do care to timestamp your data the moment it is transferred from the digitizer into memory (which already lags behind the actual acquisition, obviously), not at the moment of writing to disk, I hope?
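A minimal sketch of the fixed-record idea in Python/numpy, using the record layout from above (1000 doubles plus one timestamp double; the file name and values are arbitrary):

```python
import numpy as np

RECORD_DOUBLES = 1001              # 1 timestamp + 1000 samples
RECORD_BYTES = RECORD_DOUBLES * 8  # doubles are 8 bytes each

# Streaming: append each record as raw bytes, no formatting.
with open("stream.bin", "ab") as f:
    record = np.empty(RECORD_DOUBLES, dtype=np.float64)
    record[0] = 12345.678          # timestamp
    record[1:] = 0.0               # the 1000 samples
    f.write(record.tobytes())

# Indexed retrieval: seek straight to record i, no parsing needed.
def read_record(path, i):
    with open(path, "rb") as f:
        f.seek(i * RECORD_BYTES)
        return np.frombuffer(f.read(RECORD_BYTES), dtype=np.float64)

rec = read_record("stream.bin", 0)
print(rec[0], rec[1:].size)        # timestamp, 1000 samples
```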
-
Out of memory error when using Picture Control
ensegre replied to Neil Pate's topic in User Interface
If relevant, I got the impression that the picture indicator queues its updates. I don't know what is really going on under the hood, but I presume that whatever it is, it happens in the UI thread. In circumstances also not clear to me, I observed that the picture content may lag many seconds behind its updates, with a correspondingly growing memory occupancy, and, seemingly weird, kept streaming in the IDE even seconds after the containing VI stopped. I suspect that a thread within the UI thread is handling the content queue, and that this might be impeded when intensive UI activity is taking place. Is this your case? I actually observed this most while developing an XControl built around a picture indicator. My observation was that invariably, after some editing, the indicator became incapable of keeping up with the incoming stream, for a given update rate, zoom, UI activity, etc. However, closing and reopening the project restored the pristine digesting speed.
-
I would say: the fields which are most beneficial are those useful to the problem you have to deal with. You do signals? Functional analysis is good. You do computational geometry? Geometry is good. You do image processing... you name it. LabVIEW is only a programming tool. It's not that you become more proficient in LabVIEW because you know a special branch of maths, as in: you know graph theory, so you are good at grasping diagrams. [In fact LV diagrams are just a representation of dataflow, more akin to an electronics schematic than to formal graph theory.] Rather, in general terms, I would say numerical analysis and sound principles of algorithm design really help: how to make an efficient algorithm for doing X, how truncation errors propagate, how to optimize resource use, etc. But this is true of any programming language used for practical problem solving. Formal language theory, compilers? Not really; LV conceals those details from you. Unless your task is to implement a compiler in G...
-
As an aside: I realize that the computation of the current pixel coordinates could be avoided using, like you did; however, it seems that these coordinates are not always polled at the right time; for instance, I get {-1,-1} during mouse scroll. That might be part of the problem...
-
This is an imperfect solution from a project of mine. A scroll of the mouse wheel zooms in or out by a factor sqrt(sqrt(2)), centering the zoom on the pixel the cursor lies upon. The arithmetic of that is easy; it just involves {ox,oy} -> {px,py} - {px-ox, py-oy}*z1/z2, where {ox,oy} is the origin, {px,py} are the image coordinates of the pixel pointed at, and z1, z2 are the zoom factors before and after the scroll. That is, the new origin is just moved proportionally along the line connecting the old origin and the current pixel, all in image coordinates. Differently from you, I haven't implemented limits on the zoom factor based on the image size and position; perhaps one should.
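A minimal sketch of that origin update in Python (the function and variable names are mine; the formula is the one above, which keeps the pointed-at pixel fixed on screen):

```python
import math

ZOOM_STEP = math.sqrt(math.sqrt(2))  # one wheel notch, as above

def new_origin(ox, oy, px, py, z1, z2):
    """Move the origin so the image pixel (px, py) stays put on screen
    when the zoom factor changes from z1 to z2 (all in image coordinates)."""
    return (px - (px - ox) * z1 / z2,
            py - (py - oy) * z1 / z2)

# wheel up: z2 = z1 * ZOOM_STEP; wheel down: z2 = z1 / ZOOM_STEP
ox, oy = new_origin(0.0, 0.0, 100.0, 50.0, 1.0, 1.0 * ZOOM_STEP)
```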
-
[Collided in air with infinitenothing] Single-port GigE is ~120 MBytes/s (1 Gbit/s is 125 MB/s raw, less protocol overhead); you're not talking of bits/s? Or of a dual GigE (I ran into one once)? GigE (at least GenICam compliant) is supported by IMAQdx. Normally I just use the high-level IMAQdx blocks (e.g. GetImage2) and get good throughput, whatever is under the hood. But a camera driver might be in the way, making the transfer less efficient than ideal.
-
Git. Because my IT, when begged for centralized SCC of some sort, settled on an intranet installation of GitLab, which I'm perfectly fine with. But I'm essentially a sole developer, so SCC is to me more for version tracking than for collaboration. Tools: git-cola and the command line. Reasonably happy with GitKraken, using git-gui on Windows as a fallback.
-
This way perhaps? (or maybe this one since now we're all obsoleting out)
-
Also (I'm on Linux, desktop), Crash logger.vi is broken because of a missing /<vilib>/nisysconfig/Close.vi and Close (System).vi ("This VI needs a driver or toolkit component that is not found. Missing resource file nisysapi.rc"). And, perhaps as a consequence, the "System Session" nodes miss all of the properties they're supposed to have. Was the code really tested on Linux? [Maybe on some RT NI-Linux, which I don't have?]
-
Plotting sum of sine harmonics with labview using for cycle
ensegre replied to paolatavernise's topic in LabVIEW General
-
Automatically Adding Build Date To Front Panel
ensegre replied to Taylorh140's topic in LabVIEW General
A potential caveat: I've used this pattern in the past to generate a VI with a default string value containing build date and git version, and included it in the project I was building. Only, when I tried to use it as a pre-build action, most of the time I got spectacular LV crashes, recoverable only by clearing the compiled object cache. I presume that something becomes stale there if the VI is marked as unmodified but is in fact modified during the build. I gave up tracking down the issue and just resolved to run my tag-generating VI manually right before the build, with the project closed. That was in LV2014 and 2015 at the time. I saved some logs; the cryptic errors I used to get were of this sort:

Error 1124 occurred at ...
Possible reason(s): LabVIEW: VI is not loadable. (a perfectly loadable and unrelated VI)

DAbort 0x1A7102DF in fpsane.cpp ... Someother.vi (another sane and unrelated VI)

The build was unsuccessful. Possible reasons: An error occurred while building the application. Either you do not have the correct permissions to create the application at the specified location or the application is in use.
Invoke Node in AB_EXE.lvclass:Build.vi->AB_Engine_Build.vi->AB_Build_Invoke.vi->AB_Build_Invoke.vi.ProxyCaller <APPEND> Method Name: Build:Application
Details: Click the link below to visit the Application Builder support page. Use the following information as a reference:
Error 8 occurred at AB_EXE.lvclass:Build.vi -> AB_Engine_Build.vi
Possible reason(s): LabVIEW: File permission error. You do not have the correct permissions for the file.

NI-488: DMA hardware error detected. (NI-488 DMA? WTH?)

Error 1 occurred at EndUpdateResourceA.vi
Possible reason(s): LabVIEW: An input parameter is invalid. For example if the input is a path, the path might contain a character not allowed by the OS such as ? or @.

NI-488: Command requires GPIB Controller to be Controller-In-Charge.
-
If you do images, or call something inside a DLL, nothing would be too insane. But I guess you already did your homework trying to track that down. What looks strange are the saturation at 3 GB and then the sudden drops and recoveries. It makes me suspect a problematic corner case of LV's garbage collector... I don't know if it helps, but your post reminded me of this old discussion. There I hijacked the thread to complain about what definitely turned out to be a bug in the LV web server, which appeared in one LV version and was silently covered up a couple of versions later. That thread goes on a bit in the tone of "trimming has nothing to do with a bug" / "yes there is a bug", but essentially it is about a call to the Windows API to trim the process working set, which might be of some use to your testing.
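For reference, trimming the working set boils down to one Windows API call, SetProcessWorkingSetSize in kernel32; a minimal Python/ctypes sketch (from LabVIEW you would wrap the same call in a Call Library Node instead):

```python
import ctypes

kernel32 = ctypes.windll.kernel32  # Windows only

# Passing (SIZE_T)-1 for both the minimum and maximum working-set size
# asks the OS to trim the current process's working set.
kernel32.SetProcessWorkingSetSize.argtypes = [
    ctypes.c_void_p, ctypes.c_size_t, ctypes.c_size_t]
handle = kernel32.GetCurrentProcess()
trim = ctypes.c_size_t(-1).value   # (SIZE_T)-1
kernel32.SetProcessWorkingSetSize(handle, trim, trim)
```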
-
reading LCD characters(16X2)
ensegre replied to Nishar Federer's topic in Machine Vision and Imaging
Will the smart cam run OCR onboard? If that is not required, a properly placed webcam and a couple of LEDs might just do, for much less. As Tim_S wrote, the art is setting things up so that you always get a clean image. The OP doesn't say whether his next question would then be how to use IMAQ, image preprocessing, OCR and all that is involved.
-
It occurs to me that maybe only NI-SCOPE cards have real trigger inputs. But for normal DAQ cards, you could use a scheme in which, even with a software start, you first start the acquisition on the event channel and on a fiducial channel, then you output the control signal, which is also routed to the fiducial input. Since the relative timing of the sampled data is deterministic (channels are sampled simultaneously in high-end cards, round-robin in lower ones), analysis of the two sampled signals should give you the answer.
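A minimal sketch of the analysis step in Python/numpy; the threshold, sample rate and waveform names are placeholders for the two channels sampled as described above:

```python
import numpy as np

def first_rising_edge(samples, threshold):
    """Index of the first sample at or above threshold, -1 if none."""
    above = samples >= threshold
    return int(np.argmax(above)) if above.any() else -1

def latency_seconds(fiducial, event, threshold, sample_rate):
    """Delay from the control edge (seen on the fiducial channel)
    to the event edge, in seconds."""
    i_fid = first_rising_edge(fiducial, threshold)
    i_evt = first_rising_edge(event, threshold)
    return (i_evt - i_fid) / sample_rate

# e.g. latency_seconds(fid_wave, evt_wave, 2.5, 1e6) for 1 MS/s data
```

With round-robin sampling you would additionally correct for the fixed inter-channel skew, which is known from the scan rate.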