Posts posted by ensegre

  1. I don't know exactly what your collection looks like, or how much the NI Web Server as it is already covers, but for Linux and serving HTTP I have been positively impressed by this: https://github.com/illuminated-g/lv-http-server

    In fact I did some preliminary evaluation of it some time ago, and I was planning to build on it for a project which has since been delayed. If you look into it, I'd be interested in hearing your opinion as well.

  2. In my experience, when it comes to lab equipment, each device comes with its own serial communication command set, if not an altogether proprietary handling DLL. That regularly creates work for a lab automation integrator like me. Industry-standard bus protocols are implemented only in devices which scale to the factory floor; robotic components, generic remote I/O and plant controllers are the exceptions that come to my mind, as they cater to the generic PLC ecosystem.

  3. When I tinkered with programmatic generation of EIO nodes (not that this makes me an expert on them, nor that I remember much of my trial and error from back then), I got the impression that the small set of EIO scripting VIs is anything but complete and bug-free, let alone documented. Understanding the state cluster was way too esoteric for me, and I wasn't able to use AddChannel. In the end I achieved what I wanted by using ModifyChannels and SpecifyEIONode. Even with that the result was not yet sane, and, cherry on the cake, the trick to fix things up automagically was to cut everything I had created and paste it back onto the BD. To me this means that the methods needed to complete the operation are not exposed in the undocumented set provided, but luckily some internal sanity cleanup is enforced when dropping clipboard contents.

    Maybe my task was easier, because I knew a priori the type of the terminal I wanted to connect to (i.e. U8, I16 or boolean, determined by the name itself), so I only had to wire a control or an indicator of the right type to the node I created, rather than having to find out that type.

  4. Could someone kindly check and confirm this bug on some other installation? I'm having a hard time convincing my correspondent Technical_Support_Engineer_NI_is_now_part_of_Emerson that the bug is reproducible (it is for me on two different Ubuntu 20.04 machines).

    Subbug 1: (happens with any LV version I tried)

    cd ~/natinst/LabVIEW\ Data/Shared\ Library/
    rm HeaderParserResult.xml
    touch HeaderParserResult.xml
    labview64 --> select Tools/Import/Shared Library (.so) --> SIGSEGV

    Subbug 2: (happens with LV2023Q3f0 and f1)

    • copy the attached file in ~/natinst/LabVIEW\ Data/Shared\ Library/ (to save you from doing the process from scratch)
    • mkdir /tmp/LVimport
    • labview64 --> select Tools/Import/Shared Library (.so) --> Update VIs --> Next --> Next --> ... --> SIGSEGV

    HeaderParserResult.xml

  5. For the record (self note?)

    Narrowing it down with some more use of ddd: it seems there is a bug in 2023Q3 at the stage of generating the library. There is a segmentation fault in

    HEADER_PMSave () from /usr/local/natinst/LabVIEW-2023-64/resource/headerparser.so

    apparently when calling

    xercesc_3_2::IconvGNULCPTranscoder::transcode(char const*, char16_t*, unsigned long, xercesc_3_2::MemoryManager*) () from /usr/local/natinst/LabVIEW-2023-64/resource/libnixerces.so.3

    which produces an empty file /home/xxxx/natinst/LabVIEW\ Data/Shared\ Library/HeaderParserResult.xml. Once that empty file is created, any other version of LabVIEW will segfault when attempting to start the Import wizard. Removing that file allows earlier versions of LV to complete the import process.
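
    For the moment, a minimal sketch of a workaround I could put in front of the wizard, assuming the zero-byte XML really is the trigger:

    # delete HeaderParserResult.xml only if it exists and is empty (zero bytes),
    # before starting Tools > Import > Shared Library (.so)
    f="$HOME/natinst/LabVIEW Data/Shared Library/HeaderParserResult.xml"
    if [ -f "$f" ] && [ ! -s "$f" ]; then
        rm "$f"
    fi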

    Now, if I understood where we stand with our SSP in the Emerson/SAS transition or whatever, I could report it as a bug and perhaps get a CAR...

     

    Btw, when the empty XML is present, the crash in 2023Q3 occurs at the 12th call of

    xercesc_3_2::XMLString::transcode(char16_t const*, xercesc_3_2::MemoryManager*) () from /usr/local/natinst/LabVIEW-2023-64/resource/libnixerces.so.3

    within

    HEADER_ProjectManagementInitialize () from /usr/local/natinst/LabVIEW-2023-64/resource/headerparser.so
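
    For reference, this is roughly how that call count can be confirmed from plain gdb (ddd just drives gdb underneath); a sketch, assuming the breakpoint just set gets number 1:

    (gdb) set breakpoint pending on
    (gdb) break 'xercesc_3_2::XMLString::transcode(char16_t const*, xercesc_3_2::MemoryManager*)'
    (gdb) ignore 1 11    # let the first 11 hits pass, stop on the 12th
    (gdb) run
    (gdb) bt             # the backtrace then ends up inside HEADER_ProjectManagementInitialize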

  6. 13 minutes ago, ShaunR said:

    Can't be both ;) (and that's 10ms)

    Maybe I wasn't clear enough: replacing the Compound Arithmetic +++ with Multiply x3 in my BD I did get the same timing (in contrast with Mads), whereas using Compound Arithmetic x3 I got 10 ms more. To elaborate further, I now put several variants of the x3 in a Diagram Disable structure and, surprise, the times become ~150 ms for all variants except ~144 ms for Multiply x3. But back on demo2.vi, I also now get ~150 ms instead of ~120. Call it compiler optimizations, cache, or I don't know what.

    12 minutes ago, ShaunR said:

    However. You have a timing issue in the way you benchmark in your last post. The middle gettickcount needs to be in it's own frame before the for loop.

    Formally you're right, but in this case I observed no difference. I guess the Get Tick Count gets executed as soon as possible on entering the frame, and on my system that is early enough, even if it's not guaranteed to be the first operation.

    demo2+.vi

  7. 6 minutes ago, ShaunR said:

    What's interesting about ensegre's solution is the unintuitive use of the compound arithmatic in this way. There must be a compiler optimization that it takes advantage of.

    In my case I don't see appreciable differences between Multiply x3 and Compound +++. Maybe there is something platform-dependent, if anything.

  8. I'm getting immediate segmentation faults when trying to run the Import Shared Library (.so) wizard, on an Ubuntu 20 machine where I have LV14, 19, 2021 and 2023 (up to date) installed. All versions of LV appear to behave normally in my usual workflow, but now crash immediately after choosing Tools/Import Shared Library and displaying the first window of the wizard. I tried launching LV through ddd, and the only hint I'm able to get is that there is a segmentation fault early in /usr/local/natinst/LabVIEW-XX/resource/headerparser.so.

    I've tried different versions, running LV as root, and clearing the cache, to no avail. Only a couple of times was I able to get through the wizard's dialogs down to the point of starting the compilation (after choosing the .so, the .h, the include dir, the functions to wrap and the argument types), but not again since then. Any idea about what could be wrong and how I could debug further?
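
    In the meantime, a sketch of how I'd try to pull a fuller backtrace with plain gdb; the path to the actual executable behind the labview64 launcher is my guess, adjust to your version and install:

    gdb /usr/local/natinst/LabVIEW-2023-64/labview    # assumed executable path
    (gdb) run
    ...reproduce: Tools > Import > Shared Library (.so)...
    (gdb) bt    # backtrace at the SIGSEGV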

  9. 48 minutes ago, Elbek Keskinoglu said:

    I tried to mean that I am putting the experimental setup at the Front Panel inside of the Cluster. So when there are graphs and charts they will be transformed to the JSON format to be saved in a file.

    This text has been generated by AI, right? Because it reads like that BS...🤣

    We're getting old. In my time, we were just happy laughing about papers generated by "context-free grammar generators".

  10. Coming back to this, after one year.

    It turned out that my installation of the worker does run; it is only the GNOME menu item that fails: its associated command

    /bin/bash -c 'cd /usr/local/natinst/nifpgacompileworker/ && /usr/local/natinst/nifpgacompileworker/cw_wrapper.sh mono /usr/local/natinst/nifpgacompileworker/CompileWorker.exe'
    

    fails because of mono, as reported above. Omitting mono, or even just running /usr/local/natinst/nifpgacompileworker/CompileWorker.exe directly, the worker comes up, and I can connect it to a Windows machine on which LV14 runs as the compilation server. So far so good.

    Screenshot from 2022-12-22 16-59-14.png   image.png

    Now if I launch the FPGA compilation on the Windows machine, the preliminary steps complete and the job is submitted to the Linux worker (I see logs of it both in the compilation window and on the stdout of the Linux machine). However, the compilation errors out at its very beginning. The log says essentially

    /usr/local/natinst/NIFPGA/programs/xilinx14_7/ISE/bin/lin/_cg: error while loading shared libraries: libSM.so.6: cannot open shared object file: No such file or directory

    which looks to me like a bitness problem. To test, I tried to launch

    $ export LD_LIBRARY_PATH=$DIR:/usr/local/natinst/NIFPGA/programs/xilinx14_7/ISE/lib/lin/:$LD_LIBRARY_PATH; /usr/local/natinst/NIFPGA/programs/xilinx14_7/ISE/bin/lin/_cg
    /usr/local/natinst/NIFPGA/programs/xilinx14_7/ISE/bin/lin/_cg: error while loading shared libraries: libSM.so.6: cannot open shared object file: No such file or directory

    whereas I note that I also have lin64 directories, and indeed calling

    $ export LD_LIBRARY_PATH=$DIR:/usr/local/natinst/NIFPGA/programs/xilinx14_7/ISE/lib/lin64/:$LD_LIBRARY_PATH; /usr/local/natinst/NIFPGA/programs/xilinx14_7/ISE/bin/lin64/_cg
    The XILINX environment variable is not set or is empty.

    seems to bring things one step forward.
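
    If the lin64 binaries are the way to go, my hedged guess at the next step would be pointing the standard Xilinx variables at the ISE tree before calling them (an assumption on my side, not something I found in the NI wrappers):

    export XILINX=/usr/local/natinst/NIFPGA/programs/xilinx14_7/ISE
    export LD_LIBRARY_PATH=$XILINX/lib/lin64:$LD_LIBRARY_PATH
    $XILINX/bin/lin64/_cg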

    Any idea how I could move on from here? I thought of renaming the various lin64 directories to lin, but that looks to me like going down a rabbit hole.
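
    The other option, if the lin/ binaries are indeed 32-bit, would be to satisfy their dependencies via multiarch instead of renaming anything; something like this (untested, the package names are my guess for Ubuntu 20.04):

    sudo dpkg --add-architecture i386
    sudo apt update
    sudo apt install libsm6:i386 libice6:i386 libx11-6:i386 libxext6:i386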

    Additional detail: in the meantime I upgraded to the "LabVIEW 2020 FPGA Compilation Tool for ISE 14.7". This is not yet officially supported on Ubuntu, hence alien --install --scripts.

  11. Quote

    $ ldd /usr/local/natinst/LabVIEW-2021-64/resource/libmuparser-x64-lv.so
    /usr/local/natinst/LabVIEW-2021-64/resource/libmuparser-x64-lv.so: /usr/lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /usr/local/natinst/LabVIEW-2021-64/resource/libmuparser-x64-lv.so)
    /usr/local/natinst/LabVIEW-2021-64/resource/libmuparser-x64-lv.so: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /usr/local/natinst/LabVIEW-2021-64/resource/libmuparser-x64-lv.so)
        linux-vdso.so.1 (0x00007ffcd6953000)
        libgomp.so.1 => /usr/lib/x86_64-linux-gnu/libgomp.so.1 (0x00007f2da3b77000)
        libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f2da3995000)
        libm.so.6 => /usr/lib/x86_64-linux-gnu/libm.so.6 (0x00007f2da3846000)
        libgcc_s.so.1 => /usr/lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f2da382b000)
        libc.so.6 => /usr/lib/x86_64-linux-gnu/libc.so.6 (0x00007f2da3639000)
        libdl.so.2 => /usr/lib/x86_64-linux-gnu/libdl.so.2 (0x00007f2da3633000)
        libpthread.so.0 => /usr/lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f2da360e000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f2da3c67000)
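
    For what it's worth, a quick way to compare what the library asks for against what the system libraries provide (same paths as above; just a shell sketch, not part of any NI tooling):

    # symbol versions requested by the NI library
    objdump -T /usr/local/natinst/LabVIEW-2021-64/resource/libmuparser-x64-lv.so | grep -oE 'GLIBC(XX)?_[0-9.]+' | sort -uV
    # newest versions the system libc/libstdc++ actually provide
    strings /usr/lib/x86_64-linux-gnu/libc.so.6 | grep -E '^GLIBC_[0-9.]+$' | sort -V | tail -1
    strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep -E '^GLIBCXX_[0-9.]+$' | sort -V | tail -1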

     
