Everything posted by ensegre

  1. My 2c: I guess the problem with an expiration date on an offline computer is that the executable has no means to verify that the user didn't set the clock backwards to extend their usage indefinitely. If you don't expect them to be pro hackers, what about protection by simple obfuscation? E.g. the cumulative time the program has run, saved periodically in obfuscated form in an essential key file masked as "configuration", with some mechanism to make it more complicated to get through just by copying an older file back in its place. A rough sketch of the idea is below.
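     Something along these lines, sketched in Python rather than LabVIEW; the secret key, file name and time limit are made-up placeholders, and this only raises the bar, it doesn't make rollback impossible:

        import hmac, hashlib, json, os, time

        SECRET = b"baked-into-the-binary"   # assumption: constant hidden in the executable
        STORE = os.path.expanduser("~/.myapp/display.cfg")  # innocuous-looking "config" file
        LIMIT = 30 * 3600                   # e.g. allow 30 hours of cumulative runtime

        def save(state):
            payload = json.dumps(state).encode()
            tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
            blob = bytes(b ^ 0x5A for b in payload)   # trivial obfuscation of the content
            os.makedirs(os.path.dirname(STORE), exist_ok=True)
            with open(STORE, "wb") as f:
                f.write(tag + b"\n" + blob)

        def load():
            tag, blob = open(STORE, "rb").read().split(b"\n", 1)
            payload = bytes(b ^ 0x5A for b in blob)
            good = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
            if not hmac.compare_digest(tag, good):
                raise SystemExit("tampered key file")  # edited or forged store
            return json.loads(payload)

        state = load() if os.path.exists(STORE) else {"t": 0, "n": 0}
        while state["t"] < LIMIT:
            time.sleep(60)                  # account runtime in one-minute ticks
            state["t"] += 60                # cumulative seconds run
            state["n"] += 1                 # monotonic counter; stash a second copy
            save(state)                     # elsewhere to detect an old file copied back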
  2. No problem for me either, neither today nor the past times I consulted it. But altogether, I'm on it quite seldom.
  3. Coming back to report. Scavenging the net, I've found essentially three sets of connectors between Redis and LV:
      1. What can be downloaded from https://forums.ni.com/t5/Example-Code/REDIS-database-LabVIEW-toolkit/tac-p/3508611, taking into account the corrections listed in the thread. This seems to be the most widespread, considering that it was even shown as an option at the CERN LV user group this year (see https://indico.cern.ch/event/1388470/contributions/5911487/attachments/2843544/4971934/lugm_LabVIEW_at_CERN.pdf, slide 22). It dates back to 2014.
      2. Nick Folse's https://github.com/tauterra/Redis-Client-for-LabVIEW, from about three years ago and, according to its author, no longer developed. I found a couple of flaws, easily corrected.
      3. https://github.com/Bas-vE/LV-Redis, which claims to be an evolution of 1., promoted to LVOOP. The most recent of the three.
     The philosophy of the three toolboxes differs somewhat: the first is more of the "one VI for each Redis command" kind, while the others put the accent more on the transaction protocol than on the completeness of the commands implemented. Redis's huge command set has also expanded over the years in question. However, in all three I found something which looks to me like a bit of a no-brainer: a TCP client connection is opened and then closed for each elementary operation. Besides the minor performance impact, I found that this approach prevents Redis's MULTI pipelining (see the sketch below). I have forked 1. into https://github.com/EastEriq/redis-in-labview and 2. into https://github.com/EastEriq/Redis-Client-for-LabVIEW for delving into them. Finally, I have resolved to adopt my fork and augmentation of 1. in my project, but only after modifying it so that TCP connections can be kept open throughout the client session.
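     To illustrate why the connection must stay open (in Python rather than LV, since the point is protocol-level; host and keys are placeholders): MULTI/EXEC state belongs to a single connection, so a toolkit which reconnects per command can never queue a transaction.

        import socket

        def resp(*args):
            # Encode one command in the Redis RESP wire protocol.
            out = b"*%d\r\n" % len(args)
            for a in args:
                a = a.encode()
                out += b"$%d\r\n%s\r\n" % (len(a), a)
            return out

        s = socket.create_connection(("localhost", 6379))
        s.sendall(resp("MULTI"))
        s.sendall(resp("SET", "tag:1", "42"))
        s.sendall(resp("INCR", "counter"))
        s.sendall(resp("EXEC"))          # all queued commands run atomically here
        print(s.recv(4096).decode())     # +OK, +QUEUED, +QUEUED, then the EXEC replies
        s.close()                        # only now; one socket for the whole session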
  4. IIUC the OP, s/he put an AI node (which has a variable execution time) sequenced with a fixed delay inside a while loop, and then complains that the while loop does not repeat at a rate of 1/delay. OTOH s/he says that s/he doesn't really need the AI. I'd answer here that, complexity permitting, since this is a deterministic target, the delay should run concurrently with the code executing in variable time, and should be longer than the execution time of the variable part. Then one iteration of the while loop is guaranteed to take exactly as long as the delay (see the sketch below for the idea). Another option, if memory doesn't fail me, could be an SCTL tied to a secondary time reference, running at a submultiple of the master clock. If the complexity of the code doesn't allow execution within the prescribed timing, the compiler will then complain. Can't say about the actual case, but often complex code can be simplified by factoring out, or by pipelining.
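     The software analogue of the pattern, in Python for want of a textual LabVIEW (period and workload are made up): the fixed delay absorbs the jitter of the variable-time work, so each iteration lasts exactly one period as long as the work fits within it.

        import time, random

        PERIOD = 0.010                               # 10 ms target loop time (placeholder)
        deadline = time.monotonic() + PERIOD
        while True:
            time.sleep(random.uniform(0, 0.005))     # stand-in for the variable-time code
            slack = deadline - time.monotonic()
            if slack < 0:
                print("overrun: work exceeded PERIOD")  # on FPGA the compiler complains instead
            else:
                time.sleep(slack)                    # the "concurrent" delay pads the iteration
            deadline += PERIOD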
  5. AFAIR, from my very limited experience with a single model of FPGA, AI and AO conversions may take a long, variable number of clock cycles (dependent on routing perhaps, of the order of several tens of cycles), and therefore cannot sit in an SCTL. Don't take me literally though; I might be wrong, and that may not be true for all FPGA boards.
  6. Ah, and when you mention two computers: connected how? Is there some network gear along the way which filters packets and blocks connections on unauthorized ports?
  7. Sounds like port 45321 is already in use on your computer by some process (not necessarily LabVIEW). Besides, 45321 is still in the IANA registered range, whereas 58411 is already in the dynamic range. https://stackoverflow.com/questions/133879/how-should-one-go-about-choosing-a-default-tcp-ip-port-for-a-new-service https://www.baeldung.com/cs/default-port-network-service On top of that, you may have lingering connections from a previous run (e.g. sockets left in TIME_WAIT).
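     A quick, platform-independent check in Python of whether a port is already taken (an exclusive bind fails if some process holds it):

        import socket

        for port in (45321, 58411):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            try:
                s.bind(("0.0.0.0", port))
                print(port, "is free")
            except OSError as e:
                print(port, "is taken:", e)   # EADDRINUSE if another process owns it
            finally:
                s.close()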
  8. Tools/Profile/Show buffer allocations...
  9. Well, NSVs are out of the question here, first because it's a Linux distributed system, second because of their own proven merits 😆... The background is this, BTW. We have 17 PCs up and running as of now, expected to grow to 40-ish. The main business logic, involving the production of control process variables, is done by tens of Matlab processes, for a variety of reasons. The whole system is a huge data producer (we're talking of TBs per night), but the data is well handled by other pipelines. What I'm concerned with here is monitoring/supervision/alerting/remediation. Real-time requirements are not strict; latencies of the order of seconds could even be tolerated. Logging is a feature of any SCADA, but it's not the main or only goal here; this is why I'd be happy with a side Tango (or whatever) module dumping to a historical database, but I would not look in the first place into a model of "first dump everything to local SQL databases, then reread, merge and ponder about the data". I'd think that local in-memory PV stores, local first-level remediation clients, and centralized system health monitoring are the way to go. As for the Jenga tower, the mix of data producers is life, but it is not that EPICS or Tango come without a proven reliability pedigree! And of course I'd choose only one ecosystem; I'm at the stage of choosing which. ETA: as for Redis, I ran into this. Any experience?
  10. Reviving this thread. I'm looking for a distributed PV solution for a setup of some tens of Linux PCs, each one writing some tens of tags at a rate of a few per second, where the writing will mostly be done by Matlab bindings, and the supervisory/logging/alerting whatnot by clients written in a variety of languages, not excluding LV. OSS is not strictly mandatory, but is essentially part of the culture. I'd be looking at Redis, EPICS and Tango Controls (with its annexes Sardana and Taurus) in the first place, but I haven't yet delved into them in order to compare their merits. In fact I had a project where I interfaced with Tango some years ago, and I contributed to cleaning up the official set of LV bindings then. As for EPICS, Linux excludes the usual Network Shared Variables stuff (or the EPICS I/O module), but I found for example CALab, which looks spot-on. Matlab bindings seem available for all three. The ability to handle structured data vs. just double or logical PVs may be a discriminant, if one solution is particularly limited in that respect. Has anyone recommendations? Is anyone aware of toolkits I could leverage? (A sketch of the kind of per-tag traffic I have in mind is below.)
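      To fix ideas, the kind of per-tag traffic I mean, sketched with redis-py (host, tag names, and the sensor read are placeholders; the Matlab and LV clients would do the equivalent):

        import json, time
        import redis

        r = redis.Redis(host="monitor-hub", port=6379)   # hypothetical central server

        def read_sensor():
            return 42.0                                  # stand-in for the real PV source

        while True:
            payload = json.dumps({"v": read_sensor(), "ts": time.time()})
            r.set("pv:node17:temperature", payload)      # latest value, cheap to poll
            r.publish("pv:node17:temperature", payload)  # push to live subscribers
            time.sleep(0.5)                              # a few writes per second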
  11. Under Linux, it is in the Configure tab of the compile worker, see https://lavag.org/topic/22267-installing-ni-lvfpga-un-ubuntu-20/?do=findComment&comment=144693 . Under Windows? At first sight I haven't found anything relevant in C:\Program Files (x86)\National Instruments\FPGA\CompileWorker .
  12. https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019MqkSAE
      https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000PARmSAO
      https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000004AfbSAE
      https://forums.ni.com/t5/LabVIEW/Regarding-the-UI-Thread-Execution-System-and-Multi-threading/td-p/3986192
  13. I have worked in the past on ADAM5000TCP controllers + modules using this nice toolbox, so it is certainly possible. Unfortunately that was years ago; I don't have access to the hardware anymore and I don't remember details, so I can't be specific. What you describe, though, sounds simply like not querying the right IP, or not having set the netmask correctly on your NIC and in the ADAM (was there an option to do that somewhere? In ADAMview?). Assuming, obviously, that it is not because of a faulty Ethernet cable (happens, too); a first sanity check independent of LabVIEW is sketched below. I also think that at some point the Modbus tester had a small bug whereby, if a connection was once refused (e.g. because of querying a nonexistent register), you had to stop and restart the tester in order to establish the connection again, but my memory is vague on that. Maybe Porter has corrected that since then.
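      E.g., from the same machine (Python; the IP is a placeholder, 502 is the standard Modbus/TCP port):

        import socket

        try:
            s = socket.create_connection(("192.168.1.110", 502), timeout=2)
            print("TCP connection to the ADAM established")   # IP, netmask and cable are fine
            s.close()
        except OSError as e:
            print("no connection:", e)   # wrong IP or netmask, or a bad cable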
  14. Frankly, it seems more that you don't know what you want than that you don't know how to do it on an FPGA.
  15. I don't know exactly what your collection looks like and where the NI Web Server as it is falls short, but as for Linux and serving HTTP I have been positively impressed by this: https://github.com/illuminated-g/lv-http-server In fact I did some preliminary evaluation of it some time ago, and I was planning to build on it for a project which has since been delayed. If you look into it, I'd be interested in hearing your opinion as well.
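      For a first smoke test against its example server, something as dumb as this is enough (Python; host and port are whatever the example happens to be configured for):

        import urllib.request

        # Placeholder URL: point it at the running lv-http-server example.
        with urllib.request.urlopen("http://localhost:8080/", timeout=5) as resp:
            print(resp.status, dict(resp.headers))   # expect 200 and the served headers
            print(resp.read(200))                    # first bytes of the body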
  16. Your attachment links are broken; could you see if you can correct them?
  17. In my experience, when it is about lab equipment, each device comes along with its own serial communication command set, if not with an altogether proprietary handling DLL. Which, many times, creates a job for a lab automation integrator like the one I act as. Industry-standard bus protocols are implemented only in devices which scale to the factory floor. Robotic components, generic remote I/O and plant controllers are the exceptions which come to my mind, as they cater to the generic PLC ecosystem.
  18. When I toyed with programmatic generation of EIO nodes (not that this makes me an expert on them, nor do I remember too much of my trials and errors of then), I got the impression that the little set of EIO scripting VIs is anything but complete and bug-free, let alone documented. Understanding the state cluster was way too esoteric for me, and I wasn't able to use AddChannel. In the end I achieved what I wanted by using ModifyChannels and SpecifyEIONode. Even with that the result was not yet sane, and, cherry on the cake, the trick to fix things up automagically was to cut all I had created and repaste it on the BD. Meaning, to me, that the methods necessary to complete the operation are not exposed in the undocumented set given, but luckily some internal sanity cleanup is enforced when dropping clipboard contents. Maybe my task was easier because I knew a priori the type of the terminal I wanted to connect to (i.e. U8 or int16 or boolean, determined by the name itself), so I only had to wire a control or an indicator of the right type to the node I created, and not to find out that type.
  19. Oh, thx for the insider information. So there has been a quick follow-up; comforting to know.
  20. Could someone kindly check and confirm this bug on some other installation? I'm having a hard time convincing the correspondent Technical_Support_Engineer_NI_is_now_part_of_Emerson that the bug is reproducible (it is for me on two different Ubuntu 20.04).
      Subbug 1 (happens with any LV version I tried):
          cd ~/natinst/LabVIEW\ Data/Shared\ Library/
          rm HeaderParserResult.xml
          touch HeaderParserResult.xml
          labview64 --> select Tools/Import/Shared Library (.so) --> SIGSEGV
      Subbug 2 (happens with LV2023Q3f0 and f1):
          copy the attached file into ~/natinst/LabVIEW\ Data/Shared\ Library/ (to save you from doing the process from scratch)
          mkdir /tmp/LVimport
          labview64 --> select Tools/Import/Shared Library (.so) --> Update VIs --> Next --> Next --> ... --> SIGSEGV
      HeaderParserResult.xml
  21. For the record (self-note?), narrowing down with some more use of ddd: it seems that there is a bug in 2023Q3 at the stage of the generation of the library. There is a segmentation fault in
          HEADER_PMSave () from /usr/local/natinst/LabVIEW-2023-64/resource/headerparser.so
      apparently when calling
          xercesc_3_2::IconvGNULCPTranscoder::transcode(char const*, char16_t*, unsigned long, xercesc_3_2::MemoryManager*) () from /usr/local/natinst/LabVIEW-2023-64/resource/libnixerces.so.3
      which produces an empty file /home/xxxx/natinst/LabVIEW\ Data/Shared\ Library/HeaderParserResult.xml. Once that empty file is created, any other version of LabVIEW will segfault when attempting to start the Import wizard; removing the file allows earlier versions of LV to complete the import process. Now, if I understood in which status we are under our SSP (Emerson, SAS transition or whatever), I could report it as a bug and perhaps get a CAR... Btw, when the empty xml is present, the crash occurs in 2023Q3 at the 12th call of
          xercesc_3_2::XMLString::transcode(char16_t const*, xercesc_3_2::MemoryManager*) () from /usr/local/natinst/LabVIEW-2023-64/resource/libnixerces.so.3
      within
          HEADER_ProjectManagementInitialize () from /usr/local/natinst/LabVIEW-2023-64/resource/headerparser.so
  22. Maybe I wasn't clear enough: replacing the Compound Arithmetic +++ with Multiply x3 in my BD, I did get the same timing (in contrast with Mads), whereas using Compound Arithmetic x3 I got 10 ms more. And now, to elaborate further, I put several variants of the x3 in a Diagram Disable structure and, surprise, times become ~150 ms for all variants but ~144 ms for Multiply x3. But back on demo2.vi, I also now get ~150 ms instead of ~120. Call it compiler optimizations, cache, or I don't know what. Formally you're right, but in this case I observed no difference; I guess the tick count gets executed as soon as possible when entering the frame, and on my system that's early enough, even if it is not guaranteed to be the first operation.
      demo2+.vi
  23. I don't want to be picky, but with that solution I get the same ~117 ms with compound +++, with twice + x3, and with 3x, whereas ~127 ms with Compound Arithmetic 3x or x3. Platform and optimizations 🤷‍♂️
  24. In my case I don't see appreciable differences between x3 and compound +++. Maybe there is something platform-dependent, if anything.