Everything posted by ensegre

  1. For the record, I obstinately still tried this, following smithd. Because of "current VI's path", this cannot be inlined, at most given subroutine priority. With the previous benchmarks, the result is ~600 ns of overhead (on ~12 µs, 5%) on 32-bit and ~200 ns (on 10.4 µs, 2%) on 64-bit. Results obviously degrade for priorities lower than subroutine. mupLibPath.vi
  2. So I unzipped that onto my desktop. This means "/home/owner/Desktop/LabVIEW 2015/Projects/LV-muParser" and "/home/owner/Desktop/LabVIEW 2015/resource/". Go figure, "/<resource>/libmuparserX32.so.2.2.5" not found...
  3. A pre-build VI? Possibly... instinctively I'd say the less code the better, but I haven't benchmarked that.
  4. Well, let's be pedantic: CLNbenchmark.vi mupLibPath2.vi (change the library names according to bitness and OS).
      • There is huge jitter in this benchmarking; to get ~1% variance I need to iterate 1M times, so the results are not entirely conclusive.
      • Testing in parallel like I did earlier was probably not a good idea; suppose libmuparser is single-threaded - indeed, running 2 or 3 loops in parallel changes the picture.
      • Testing each loop individually, the static library name wins, by perhaps 1% (100 ns on 10 µs).
      • Testing in parallel, each timing is larger (~60%) and the jitter is even larger; no clear winner in a few attempts.
      • I wouldn't be surprised if YMMV on other machines, OSes, bitnesses and LV versions. I tested on Linux LV2015 64-bit, 2016 64-bit and 2017 32- and 64-bit; results were similar and similarly scattered. On LV2017 32-bit I got timings ~25% higher.
      • Don't even mention boundary conditions like FP activity.
     Moral: even considering that some time is spent in the libmuparser routines themselves and some is LV calling overhead, and that the ratio may change from function to function, this overhead is minute, and IMHO it pays much more to go for the most maintainable solution [hint: an inlined subVI with an indicator whose default value is set by an install script]. YMMV...
  5. This small experiment, if not flawed, would suggest that a CLN with a name constant is actually a few tens of ns faster than a CLN with the name written into it. CLNbenchmark.vi
  6. A supreme one, but proper scripting might be able to alleviate it: scripts for checking that each CLN is properly conditionally wrapped, and that each conditional disable has the relevant library in its cases. I understand that performant parsing was one of the main motivations of this exercise; however, maybe let's first benchmark precisely the impact of the different solutions.
  7. Yesterday I played around a little with constructing an absolute path inside muLibPath.vi, kind of like what you did now in the Windows case, and the benchmark started crawling. Keeping the libraries in the same dir as muLibPath.vi may simplify matters in the IDE, but right, you have to do something for the exe. So yes, either devising the path in Construct.vi and passing it along with the class data, or having an installation script fix the return value of an inlined subVI and save it, look to me like the ways to go.
  8. Maybe that is an output-only argument to mupGetExprVar? It would make little sense for the call sequence to have a completely transparent argument. In any case, have you noted the shift register for that pointer? It should keep the case of N=0 exprs from crashing. Apropos of which, mupGetExprVars still crashes if called with Parser in=0; that should perhaps be trapped in a production version.
     Yes, probably sloppiness of the author, who didn't push the version number everywhere (I added the X32 and X64 in the names on my own, though).
     About the choice of the library, I'm still thinking about the best strategy for muLibPath.vi. Putting more complex path logic into it affects performance very badly, as the VI is called for every single CLN. Probably the best is to make an inlined VI out of it, containing only a single path indicator whose default value is assigned once and for all by an installation script. I hope that the compiler is then able to optimize it as a hardwired constant.
  9. Debugging off - I realized that it is for inlining; separate compiled code - I had downloaded your first commit on LAVA rather than the latest from github. 2015, please: mupLib.zip. If ever useful: libmuparser.zip, compiled .so for X32 and X64 on Ubuntu 16. As they are dynamic, I doubt they themselves depend on some other system lib and are hence particular to that distro. I'd also think of a benchmark which iterates on repeated open/evaluate/close (roughly sketched below); I see it as a frequent use case, and suspect that muparser might do better than the Formula Parser there.
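     To make that concrete, a minimal, hypothetical C sketch of such an open/evaluate/close benchmark against the muparser C API - it assumes the usual muParserDLL.h entry points (mupCreate/mupSetExpr/mupEval/mupRelease); in the lvlib this would of course be CLN calls in a loop rather than C:

        /* Repeated open/evaluate/close through the muparser C API (muParserDLL.h).
           Entry-point names assumed from the 2.2.x header; adjust as needed. */
        #include <stdio.h>
        #include <time.h>
        #include "muParserDLL.h"

        int main(void)
        {
            const int N = 100000;            /* iterations; more iterations tame the jitter */
            volatile double sink = 0.0;      /* keep the optimizer from dropping the eval */
            clock_t t0 = clock();

            for (int i = 0; i < N; ++i) {
                muParserHandle_t h = mupCreate(0);   /* 0 = muBASETYPE_FLOAT, if I read the header right */
                mupSetExpr(h, "1+2*3-sin(0.5)");     /* expression with no variables */
                sink += mupEval(h);                  /* parse + evaluate */
                mupRelease(h);                       /* close */
            }

            double dt = (double)(clock() - t0) / CLOCKS_PER_SEC;
            printf("%d open/eval/close cycles in %.3f s (%.1f us each), sink=%f\n",
                   N, dt, 1e6 * dt / N, sink);
            return 0;
        }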
  10. 6 PCs. We mean: what do you take the images for? What is the bandwidth? Are the cameras synchronized? Do the images need to be analyzed together, so that some program decides something according to what is in them? Can the computers exchange messages? What latencies are tolerated? Do the images need to be streamed to disk? All these are considerations you have to take into account for a design.
  11. Cracked it finally. You were passing the wrong pointer to DSDisposePtr (see the note below). Here is a corrected version of the whole lvlib: mupLib-path.zip. Now all examples and benchmarks run for me in LV2017 32- & 64-bit, 64-bit being marginally faster with the default equation in the benchmark.
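     For what it's worth, the general rule, illustrated in plain C with the LabVIEW memory manager declarations from extcode.h (this is only the generic pattern, not the actual fix inside mupGetExprVars.vi):

        /* The pointer handed to DSDisposePtr must be exactly the one returned by
           DSNewPtr - not an offset into the block, and not a pointer the library owns. */
        #include "extcode.h"

        MgErr example(void)
        {
            UPtr p = DSNewPtr(1000 * sizeof(double));   /* allocate */
            if (!p) return mFullErr;                    /* out of memory */

            /* ... use p ... */

            return DSDisposePtr(p);   /* dispose the very same pointer */
        }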
  12. OK, reporting progress:
      • Compiled a 32-bit .so of your modified library:

        sudo apt-get install g++-multilib
        cd muparser-2.2.5
        ./configure --build=x86_64-pc-linux-gnu --host=i686-pc-linux CC="gcc -m32" CXX="g++ -m32" LDFLAGS="-L/lib32 -L/usr/lib32 -L`pwd`/lib32 -Wl,-rpath,/lib32 -Wl,-rpath,/usr/lib32"
        make clean
        make

      • Patched muParser.lvlib to include a target- and bitness-dependent libmuparser path. Attached, with compiled code removed (orderly: I should submit a pull request on github). mupLib-path.zip
      • Testing on LV2017 32-bit: mupLib example.vi WORKS; all other examples crash when mupGetExprVars gets to DSDisposePtr, with a trace like
        *** Error in `labview': free(): invalid pointer: 0xf48c3a40 ***
        Aborted (core dumped)
      • Testing on LV2017 64-bit, with system libmuparser 2.2.3: ditto. The only difference is a longer pointer in
        *** Error in `labview64': free(): invalid pointer: 0x00007f03aa6b78e0 ***
  13. OK, I traced it down to this: for me, muParser Demo.vi crashes on its first call of DSDisposePtr in mupGetExprVars.vi. Just saying.
  14. I'm opening some random subVIs to check which one segfaults. All those I opened were saved with "allow debugging" off and "separate compiled code" off (despite your commit message on github). Any reason for that?
  15. Just mentioning, if not OT: something else not supported is booleans (OK, you could use 0 and 1 with + and *). In a project of mine I ended up using this, which is fine but simplistic. I don't remember about performance; considering my application, it may well be that simple expressions evaluated in less than a ms.
  16. It appears that it might be straightforward to make this work on Linux too. In fact, I found out that I already had libmuparser2 2.2.3-3 on my system, pulled in by who knows which other dependency. Would you consider making provisions for cross-platform? Usually I wire the CLN library path to a VI providing the OS-relevant string through a conditional disable; LV should have its own way, like writing just the library name without extension so that it resolves to .dll or .so in standard locations, but there may be variants. I just gave it a quick try, replacing all dll paths with my /usr/lib/x86_64-linux-gnu/libmuparser.so.2 (LV2017 64b): I get white run arrows and a majestic LV crash as I press them - subtleties. I could help debugging later, though. Also, to make your wrapper work with whatever version of muparser is installed system-wide, how badly does your wrapper need 2.2.5.modified? How about a version check on opening (roughly sketched below)?
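     For that version check, something along these lines could do - a hypothetical C sketch, assuming the mupGetVersion entry point that muParserDLL.h exposes (the required version string and the lexicographic comparison are only placeholders; in the wrapper this would be a CLN call in an open/Construct VI):

        /* Simplistic version gate at open time; a real check should parse the
           major/minor/patch numbers instead of comparing strings. */
        #include <string.h>
        #include <stdio.h>
        #include "muParserDLL.h"

        #define REQUIRED_VERSION "2.2.5"   /* hypothetical minimum for the wrapper */

        int muparser_version_ok(void)
        {
            muParserHandle_t h = mupCreate(0);        /* 0 = muBASETYPE_FLOAT */
            const muChar_t *ver = mupGetVersion(h);   /* e.g. "2.2.5" */
            int ok = strncmp(ver, REQUIRED_VERSION, strlen(REQUIRED_VERSION)) >= 0;
            if (!ok)
                fprintf(stderr, "muparser %s found, %s or later expected\n",
                        ver, REQUIRED_VERSION);
            mupRelease(h);
            return ok;
        }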
  17. Perhaps you stacked the decoration frontmost, so that it prevents clicking on the underlying controls? Try selecting the beveled rectangle and Ctrl-Shift-J (last pulldown menu on the toolbar)
  18. Provided that the communication parameters are correct, you should probably initialize once, read not too often, and close only when you're really done (see the rough sketch below). Trying to do that 10000 times per second usually impedes communication.
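      Not LabVIEW, but the same structure in plain C over a POSIX serial port, just to illustrate the open-once / paced-reads / close-once idea (device path, configuration and pacing are made-up placeholders):

        /* Structure sketch only: open the port once, poll at a sane rate, close once. */
        #include <fcntl.h>
        #include <unistd.h>
        #include <stdio.h>

        int main(void)
        {
            int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);   /* initialize ONCE */
            if (fd < 0) { perror("open"); return 1; }
            /* ... configure baud rate, parity, etc. with termios here ... */

            char buf[64];
            for (int i = 0; i < 100; ++i) {                      /* read loop, paced */
                ssize_t n = read(fd, buf, sizeof buf);
                if (n > 0) printf("got %zd bytes\n", n);
                usleep(100 * 1000);                              /* ~10 reads/s, not 10000 */
            }

            close(fd);                                           /* close ONCE, at the end */
            return 0;
        }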
  19. If it is this one, you could test communications first with their software, as per page 46 of the manual, to exclude that you have wired the interface incorrectly or that the USB dongle is defective.
  20. It looks wrong from the start that you're repeatedly initializing and closing the port in a full-throttle loop. Anyway, first things first: not knowing what your device is, or whether the communication parameters and the register address are the right ones, there is little we can say beyond "ah, it doesn't work". It might even be that you didn't wire the device correctly. Is that 2-wire RS485 or 4-wire? Are you positive about the polarities? Do the VIs give some error?
  21. It occurs to me that the 5114 is an 8-bit digitizer, so the OP could just get away with 10 MB/s saving raw data. Well, less actually: if I get what the OP means, it is 1000 samples acquired at 10 Msps, triggered every ms, so only 1000 samples × 1 byte × 1000 triggers/s = 1 MB/s.
  22. His TDMS attempt does indeed try to write a single file, but timestamps the data when it is dequeued by the consumer loop. His first VI, however, uses Write Delimited Spreadsheet.vi within a while loop, with a delay of 1 ms (a misconception - timing is anyway dictated by the elements in the queue) and a new filename at each iteration.
  23. Quite likely this is a bad requirement; the combination of your OS/disks is not up to it, and won't be unless you make special provisions for it - like controlling the write cache and using fast disk systems.
      The way to go, IMHO, is to stream all this data into big files, with a format which enables indexed access to a specific record. If your data is fixed-size, e.g. 1000 doubles + one double as timestamp, even just dumping everything to a binary file and retrieving it by seek & read is easy (proviso: disk writes are way more efficient if unbuffered and writing an integer number of sectors at a time); see the sketch below. TDMS, etc., adds flexibility, but at some price (which you can probably afford to pay at only 80 MB/s and a reasonably fast disk); text is the way to completely spoil speed and compactness with formatting and parsing, with the only advantage of human readability.
      You say timing is critical to your postprocessing; but alas, do you postprocess your data by rereading it from the filesystem, and expect to do that with low latency? Do you need to postprocess your data online in real time, or offline? You do care about timestamping your data the moment it is transferred from the digitizer into memory (which already lags behind the actual acquisition, obviously), not at the moment of writing to disk, I hope?
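      To make the fixed-record idea concrete, a small C sketch (record layout and names are illustrative only): each record is one double of timestamp plus 1000 doubles of data, dumped to a flat binary file, and record k is recovered with a single seek + read:

        #include <stdio.h>

        #define NSAMP 1000

        typedef struct {
            double timestamp;        /* one double of timestamp ... */
            double data[NSAMP];      /* ... plus 1000 doubles of data */
        } record_t;

        /* append one record to the (binary) stream */
        int write_record(FILE *f, const record_t *r)
        {
            return fwrite(r, sizeof *r, 1, f) == 1 ? 0 : -1;
        }

        /* indexed access: fetch the k-th record with one seek + one read */
        int read_record(FILE *f, long k, record_t *r)
        {
            if (fseek(f, k * (long)sizeof *r, SEEK_SET) != 0) return -1;
            return fread(r, sizeof *r, 1, f) == 1 ? 0 : -1;
        }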
  24. If relevant, I got the impression that the picture indicator queues its updates. I don't know what is really going on under the hood, but I presume that whatever it is, it happens in the UI thread. In circumstances also not clear to me, I observed that the picture content may lag many seconds behind its updates, with correspondingly growing memory occupancy, and, seemingly weird, kept streaming in the IDE even seconds after the containing VI had stopped. I suspect that something within the UI thread is handling the content queue, and that this might be impeded when intensive UI activity is taking place. Is this your case? I actually observed this most while developing an XControl built around a picture indicator. My observation was that, invariably, after some editing the indicator became incapable of keeping up with the incoming stream, for a given update rate, zoom, UI activity, etc. However, closing and reopening the project restored the pristine digesting speed.