Posts posted by ShaunR

  1. On 8/5/2020 at 12:57 AM, JKSH said:

    Although text files don't support concurrent writes, SQLite does.

    SQLite does not support concurrent writes.

    From the SQLite documentation:

    Only one process at a time can hold a RESERVED lock. But other processes can continue to read the database while the RESERVED lock is held.

    If the process that wants to write is unable to obtain a RESERVED lock, it must mean that another process already has a RESERVED lock. In that case, the write attempt fails and returns SQLITE_BUSY.
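    For anyone who wants to see the single-writer behaviour from the API side, here is a minimal sketch (Python's sqlite3 rather than LabVIEW, and the file name is made up). The timeout parameter maps to SQLite's busy timeout, which retries the write for a while before giving up with SQLITE_BUSY:

        import sqlite3

        # Two connections to the same file illustrate the single-writer rule.
        # 'timeout' maps to sqlite3_busy_timeout(): a blocked write is retried
        # for up to 5 s before failing with "database is locked" (SQLITE_BUSY).
        writer = sqlite3.connect("log.db", timeout=5.0)
        writer.execute("CREATE TABLE IF NOT EXISTS log (ts REAL, msg TEXT)")
        reader = sqlite3.connect("log.db", timeout=5.0)

        try:
            writer.execute("INSERT INTO log VALUES (1.0, 'hello')")
            writer.commit()
        except sqlite3.OperationalError as e:
            # Raised if another connection held the write lock past the timeout.
            print("write failed (SQLITE_BUSY):", e)

        # Readers are not blocked by a RESERVED lock; this still works.
        print(reader.execute("SELECT COUNT(*) FROM log").fetchone())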

     

  2. 18 minutes ago, hooovahh said:

    So while NI marketing may think they are making it loud and clear, the community has also been pretty loud themselves with their statements that they aren't sure what NI was trying to say.

    That's because it is, to all intents and purposes, an internal restructuring (possibly a political one), and the outward effects aren't tangible, or possibly aren't even known yet.

  3. On 5/28/2020 at 10:26 PM, rharmon@sandia.gov said:

    Message Format(Binary): Header Type Length Report ID Data Check Sum, Header 1byte, Type 1 byte, Length 1 byte, report ID 1 byte, Data 64 bytes max, Check Sum 1 byte.

    This is a little ambiguous.

    Header Type Length Report ID Data Check Sum... makes sense if you put the commas in the right places:

    Header Type, Length, Report ID, Data, Check Sum.

    If that were the case, then a message with no data would be something like:

    7E 03 02 18 9B

    On 5/28/2020 at 10:26 PM, rharmon@sandia.gov said:

    (sum the bytes from header to data)

    Is this your interpretation, or is it stated as such? Usually a checksum is a CRC. If it is a CRC-8 (there are a number of 1-byte CRCs), then the last value would be 0x29 rather than 0x9B, for example.
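    To illustrate the difference, a rough sketch (Python; the CRC-8 polynomial 0x07 / init 0x00 is just one common variant and is an assumption here, other variants give different results):

        # Sum-of-bytes checksum vs. a CRC-8 over the example frame 7E 03 02 18.
        frame = bytes([0x7E, 0x03, 0x02, 0x18])

        sum_checksum = sum(frame) & 0xFF  # -> 0x9B for this frame

        def crc8(data, poly=0x07, init=0x00):
            crc = init
            for byte in data:
                crc ^= byte
                for _ in range(8):
                    crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
            return crc

        print(hex(sum_checksum), hex(crc8(frame)))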

     

  4. 13 hours ago, Rolf Kalbermatter said:

    Your experience seems quite different than mine. I never get the soundless plop disappearance, but that might be because I run my shared library code during debugging from the Visual C debug environment. But even without that, the well-known 1097 error is what this wrapper is mostly about, and that happens for me a lot more than a completely silent disappearance.

    I haven't seen that error message for years. If I run a debugger, LabVIEW just dies and the debugger reports an error in the LabVIEW exe. This has been the same through Windows 7-10 on the various machines I've had over the years. Maybe the difference when debugging is because I use the gdb debugger, but the sudden disappearance is consistent; not only on my machines but customers' too.

  5. 9 hours ago, dadreamer said:

    Even when Error Checking is set to Disabled, LabVIEW still enters ExtFuncWrapper to do some basic checks before the target function is called. A few internal functions, such as _clearfp and _controlfp, are called as well. Therefore, disabling the "Generate Wrapper" option should make the CLFN a little faster than disabling Error Checking. You can take it like you're calling a built-in yellow node (not taking into account the called function's own speed, of course). I did not do concrete benchmarking to compare these two options. If there's interest, I could check this out.

    No need. I will explore this. From experience, a misconfigured CLFN usually results in LabVIEW disappearing without a whimper (either immediately or at some random moment), so I don't see much of a reason to have error checking and wrappers enabled at all. Especially if there is a performance benefit, no matter how minute.

    It doesn't seem to have a scripting counterpart. Is that correct, or have I just missed it?

    Is the setting sticky, or does distributed source code require the INI setting too?

  6. On 5/22/2020 at 1:54 PM, dadreamer said:

    Here are some "secret" INI keys related to CLFNs:

    • extFuncGenNoWrapperEnabled=True - the most interesting one here; adds a "Generate Wrapper" option to the CLFN context menu (on by default). When this option is unchecked (off), LabVIEW doesn't generate the wrapper for the DLL call, i.e. the ExtFuncWrapper function is not called and the user's function is inlined into the VI code and called directly. This feature could slightly improve the performance of external code calls, saving approx. 5 to 10 ms on each call. But you must be absolutely sure that your CLFN is set up correctly, as no error checking is done in this case and any small mistake in the parameters leads to an LV crash. There might be an indirect use for this feature, e.g. manual fine-tuning/debugging of some problematic function or a function with unclear parameters, etc. When the option is enabled and extFuncShowMagic is set, the CLFN is painted red.

    Interesting. How does this feature compare with disabling the error checking on the Error Checking tab?
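    For reference, these keys live in labview.ini alongside the other tokens; a sketch of what that would look like (the section header is the standard one, and only the keys discussed above are shown):

        [LabVIEW]
        extFuncGenNoWrapperEnabled=True
        extFuncShowMagic=True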

  7. 8 hours ago, Aristos Queue said:

    It wasn't removed from NXG. It has not yet been added. There are a sizable number of customers who lobby LV 20xx to remove it entirely, and others who want it left just as prominent as it is today. You can see one place where that discussion has been playing out: https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Hide-Run-Continuously-by-default/idi-p/1521886

    NXG hasn't decided how to handle it yet, so they haven't added it. It's in the ToDo backlog to add it somewhere.

    Just make it an ini/preference setting. The main tenor of that thread seems to be "I'm not very precise so please remove it", which, from that low point, then devolves into "my workflow is better than your workflow".

  8. 13 hours ago, Ricardo de Abreu said:

    It sounds good! Really good!!!!!!!!!

    Can I still use a PC with a VI as a supervisor for the Arduino running the code?

    I've never used the toolkit; I'm just aware of it. I don't know its limitations or capabilities beyond what's on that page. I would suggest sending them an email explaining what you plan to do, and they should be able to tell you.

  9. 150 samples @ 50 ms is about 24K/s if the samples are double precision. The default TCP/IP buffer in Windows is 8K IIRC, and if Nagle is on, you may be filling the buffer too quickly. I would try U8, if you are currently using doubles, to reduce the amount of data and see if the problem persists. If that resolves it, then I would try turning off Nagle and increasing the buffer to 65K to use doubles again.
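     In LabVIEW those settings are normally made outside the TCP primitives, but for illustration this is what the two knobs look like at the socket level (a Python sketch; host, port and buffer size are placeholders):

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)   # disable Nagle
        s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 65536)  # ~65K send buffer
        s.connect(("192.168.1.10", 5000))
        s.sendall(bytes(8 * 150))  # one 150-sample block of doubles = 1200 bytes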

  10. 2 hours ago, Ricardo de Abreu said:

    Hi guys. This semester I'm starting a course on system development for control and automation engineering, which will be based on LabVIEW. However, my University doesn't have any NI hardware, not even a myRIO, for us to test our VIs, and the teacher said that we should test our projects with our own Arduino...

    So, I have a little experience with Arduino and I know the basics of LabVIEW. Now I'm at a point where I know that with Arduino I'll not get the best from LabVIEW. I cannot even deploy code to it.

    So, there is where my question comes in...
    I'm looking for a new board, better than Arduino, to use in the classes. I would buy a myRIO if I had the money, but in Brazil this board is too expensive for me.
    Which one should I get that is closest to the myRIO and less expensive? I would like to try the deployment of a VI and FPGA... Is this possible?

    Thanks a lot for the help!
    Regards

    There is a LabVIEW RIO Evaluation Kit which is a fraction of the cost (has FPGA on board). Alternatively you could use the Arduino with Websockets or HTTP and use LabVIEW to communicate with it. There is also an Arduino toolkit, IIRC.

  11. On 2/12/2020 at 1:53 PM, Mads said:

    I was thinking of replacing a proprietary chunk-based time-series log file format that we use with SQLite, using this library. From a test I did some time ago I thought read performance would not be an issue (unlike for TDMS, for example, which performs really badly due to the fragmentation you get when logging large non-continuous time series), but now that I have redone the tests I see that the read times get quite high (writes are less of an issue in this case, as we normally just write smaller batches of data once a minute). I expected the database to be much slower, but the difference is of a magnitude that could be troublesome (no longer a real-time response).

    To give a simple example: if I have a table of 350,000 rows containing a time stamp (real) and 3 columns of measurements (real), and I select a time (sub)period from this table, it takes up to 0.6 seconds to retrieve the measurements on my test machine (using either a single SQL Execute 2D dbl, or the Select many rows template). For comparison, the current binary file solution can locate and read the same sub-periods with a worst case of 0.03 seconds; 20x faster. This difference would not be an issue if the accumulated time per batch of requests was still sub-second, but the end users might combine several requests, which means the total waiting time goes from less than a second ("instantaneous") to 5-30 seconds...

    Perhaps I was using @ShaunR 's library back when I was able to get this done "instantaneously", but that seems to no longer exist (?), and in this case I need a solution that will run on Linux RT too... Should it be possible to read something like this any faster, and/or does anyone have any suggestions on how to improve the performance? The data will have to be on disk; the speedup cannot be based on it being in cache/memory...


    The main performance criterion for SQLite is the number of rows and/or columns returned/inserted, although there have been significant improvements in recent versions (~15% over 5 years). If you look at the performance graphs, you will see that 0.6 seconds equates to about 450k rows (with 2 columns). The performance test in the SQLite API for LabVIEW library is based on 10k rows so that you get repeatable figures, and that typically yields tens of milliseconds. Less than that, and LabVIEW timings become dominant for the purposes of that test.

    [image: performance graph]

    If you are dumping entire tables and performance is a requirement, then a relational database is the wrong tool.
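    A rough way to see the rows-returned effect for yourself (a sketch in Python's sqlite3; an in-memory database just to keep it self-contained, so on-disk numbers will be higher, but the trend is the same):

        import sqlite3, time

        # Table layout follows the example above: a timestamp plus 3 measurements.
        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE data (ts REAL, m1 REAL, m2 REAL, m3 REAL)")
        con.executemany("INSERT INTO data VALUES (?,?,?,?)",
                        ((i * 0.05, 1.0, 2.0, 3.0) for i in range(350000)))
        con.commit()

        # Time to pull a sub-period scales mainly with the number of rows returned.
        for rows_wanted in (10000, 100000, 350000):
            t0 = time.perf_counter()
            rows = con.execute("SELECT * FROM data WHERE ts < ?",
                               (rows_wanted * 0.05,)).fetchall()
            print(len(rows), "rows in", round(time.perf_counter() - t0, 3), "s")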

  12. 2 hours ago, drjdpowell said:

    Why is whatever generated this JSON providing null values at all?  Especially in place of strings and integers?

    I imagine it is C, C++ or JavaScript. NULL is a specific type of invalid pointer in the C/C++ languages and "not an object" in JavaScript (as opposed to "undefined"). In LabVIEW we don't really have either concept.

    In JSON, it is a valid type, so it depends on how you want to translate it back into a LabVIEW type. Historically I have converted in the following manner: string->empty, numeric->0, boolean->false, etc. Unless, of course, it is in quotes. In that case it is the string "NULL", in whatever case, as it is implicitly typed.
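    A sketch of that kind of mapping (Python here; the field names and defaults are made up for illustration):

        import json

        # Map JSON null to a type-appropriate default, but keep a quoted "null"
        # as an ordinary string.
        defaults = {"name": "", "count": 0, "enabled": False}

        parsed = json.loads('{"name": null, "count": null, "enabled": null, "note": "null"}')
        converted = {k: (defaults.get(k) if v is None else v) for k, v in parsed.items()}
        print(converted)  # {'name': '', 'count': 0, 'enabled': False, 'note': 'null'}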

  13. 6 hours ago, raoul said:

    what advice would you give to someone who wants to learn how to make this kind of vi?

    Get someone else to do it.

    It requires knowledge of the memory organisation of LabVIEW and C (and how different structures are allocated and stored) and most of the time a small error will completely crash LabVIEW.

    7 hours ago, raoul said:

    Is there a book or a website you could recommend for C beginners?

    I would suggest asking on a C forum.

  14. 1 hour ago, raoul said:

    It seems user32.dll would be able to do that, but I can't find the way to do it.

    If you're looking for the Aero-style blur, then that is achieved using "SetWindowCompositionAttribute". I haven't used it, I'm just aware of it, but you'll probably find examples on GitHub.

  15. 1 hour ago, drjdpowell said:

    Why the "Preallocated"; are you trying to have a firm limit of 12 clones due to the long-term running?  With "Shared Clones" this stuff has been implemented by numerous frameworks, but I have not seen a fixed-size pool of clones done.  I would implement such a thing with my existing Messenger-Library "actors", but with an additional "active clone count" to keep things to 12.

    If they have internal state memory then they have to be pre-allocated.

    18 hours ago, rharmon@sandia.gov said:

    What method of communications to and from clones would you suggest?

    TCP/IP is the most flexible; it works between executables and across networks. Events and queues if they are in the same application instance.

  16. 2 hours ago, hooovahh said:

    Okay I'm done guessing without actual testing in the environment.  In Windows -1 on the Read From Text File, and Read From Binary File both read the whole file.  I feel like there is a bug or two found in this thread.

    If you read a text file then it will work. However, as Jordan states, it is not an actual file. It is a text output stream from the VFS.
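    You can see why a size-based read comes back empty on such files (a Python sketch on a desktop Linux box; /proc/cpuinfo stands in for whatever generated file you are reading). The VFS typically reports a size of 0, even though reading the stream returns text:

        import os

        path = "/proc/cpuinfo"
        print(os.path.getsize(path))   # usually 0: the size is not known up front
        with open(path) as f:
            print(len(f.read()))       # non-zero: the stream still has content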

  17. 13 hours ago, hooovahh said:

    Oh yeah, just wire a -1 and you get the whole file.  I always forget that.

    That doesn't work. I would say it is a bug, but it's never worked. ¯\_(ツ)_/¯

    1 hour ago, Neil Pate said:

    Tried that in my original test, still did not work.

    This is what I have on my Linux box (not a cRIO, though), so if you don't get anything, that is definitely a bug.

    [screenshot: output on the Linux box]

     

     
