Everything posted by Rolf Kalbermatter
-
Ahhh, I misunderstood you. I thought you were referring to my own library that I had attached, but you are referring to the muParser library. Can't help you with that one, as I don't know the details of how the actual muParser library implements the logic in the background.
-
Have you read the documentation text file included? There is a function rand() which calls the LabVIEW Random Number node and returns its value. Do not expect crypto-quality randomness from this. The LabVIEW Random Number generator has been investigated repeatedly and found to have reasonable randomness but with a limited repetition period. For most simple requirements that is quite enough, but if you need real crypto-quality randomness, a lot more serious work would be needed, and then you can quickly forget about finding this as a free library.

As to now(), that's a bit tricky. The entire formula parser only really operates on doubles internally and doesn't have any other types. The newly hacked-in bitwise operators were simply added by converting the double to a U64, doing the bitwise operation, and then storing the result back as a double on the value stack. That works for most bitwise operations on up to 32-bit integers but can start to get inaccurate if you chain many bitwise operators in a formula. So what would you expect now() to produce? A double representing the number of seconds since January 1, 1904 GMT, as the LabVIEW epoch is? Or rather since January 1, 1970 UTC, as the Unix epoch is? Or maybe rather the number of days since January 0, 1900, as the Excel epoch is, or would January 1, 1601 UTC be better, as the Windows FILETIME epoch is? You see, lots of possibilities and all of them equally right or wrong, so avoiding that problem by not implementing it is simply the easiest solution. 😁
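To make that hack concrete, here is a minimal sketch of the approach described above (my own illustration in C, not the actual library code): every value lives as a double, so a bitwise operator converts to an unsigned 64-bit integer, operates, and converts back.

    #include <stdint.h>

    /* Exact for integer values up to 2^53; beyond that a double can no
       longer represent every integer and chained results start to drift. */
    static double bit_and(double a, double b)
    {
        return (double)((uint64_t)a & (uint64_t)b);
    }

    static double bit_or(double a, double b)
    {
        return (double)((uint64_t)a | (uint64_t)b);
    }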
-
And really just a nitpick, but your title is rather inaccurate :-). There is no opposite of MoveBlock(). The only opposite you can have is by swapping the source and target pointer here. Other than that I'm not sure what other opposite you could think of here.
-
You may want to try this library. No guarantees about its proper operation. It's a quickly hacked together version of the library that I posted earlier. It's not really tested for the extra bitwise operators, and there is no provision for correct left and right association of these operators, so it might require explicit bracketing (for example, writing (x & 3) | 4 rather than relying on operator precedence) to work as expected, unlike other languages and formula parsers that tend to follow the mathematical and/or C-style conventions. LabVIEW 2018 for now. ExprEval.zip
-
That's either a very old DSNewPtr from NI (before LabVIEW 2009, which introduced support for 64-bit code and accordingly pointer-sized integers) or a non-official, user-created CLN configuration. The size parameter should be a pointer-sized integer, since its definition changed with LabVIEW 2009 from an int32 to a size_t, just as with MoveBlock. It shouldn't really cause trouble though, since that int32 is sign-extended to a 64-bit stack parameter on 64-bit LabVIEW anyhow, and technically a LabVIEW pointer can't really span more than 2^32 bytes without causing other related problems deep down in the memory manager.

The return value of DSNewPtr() seems to be correctly configured as a pointer-sized integer and definitely needs to remain that way. That is not where you potentially have to use the conditional code structure. The value for the pointer size to allocate might however have to be adjusted. As long as the structure only contains the pointer, you can simply allocate 8 bytes and treat it that way, effectively leaving the upper 4 bytes unused in the 32-bit case. Once that structure gets more complex, the pointer offset is not at 0, or there is data in the structure following the pointer, you have to adjust the offsets and sizes according to the bitness so that you do not overwrite the wrong location.

Basically, when you configure a Call Library Node parameter to be a pointer-sized integer, LabVIEW will treat it as a 64-bit integer on the diagram but do the RIGHT thing depending on the bitness it is running on. It will use the entire 64 bits when running as a 64-bit process and the lower 32 bits when running as a 32-bit process, and a returned value will be sign-extended (if you use a signed pointer-sized integer) or zero-padded (if you use an unsigned pointer-sized integer) when running on 32-bit. Nothing special needs to be done when running on 64-bit. This is because LabVIEW is a strictly typed, compile-time environment and the developers wanted to keep the flattened format consistent across platforms. If they had chosen a special pointer datatype that is sized according to the current environment, flattening such structures and variables would cause significant problems, and the flattening of data is not only used when you explicitly add a Flatten or Unflatten node to your diagram but is a fundamental part of many places in LabVIEW, including the entire VI Server interface and some areas of handling the connector pane of VIs.
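As a hypothetical illustration of the offset/size adjustment (my own sketch, not anything from this thread): once anything follows the pointer in the structure, every offset after it shifts with the bitness.

    #include <stdint.h>
    #include <stddef.h>

    /* Example structure as a C API might define it */
    typedef struct {
        void    *buffer;   /* 4 bytes on 32-bit LabVIEW, 8 bytes on 64-bit */
        int32_t  length;   /* offset 4 on 32-bit, offset 8 on 64-bit       */
    } Example;

    /* offsetof(Example, length) and sizeof(Example) are the values that have
       to follow the bitness when the structure is emulated as a byte block
       allocated with DSNewPtr() on the diagram. */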
-
No no! The pointer size depends on the application environment, not the kernel environment. As long as you stay in 32-bit LabVIEW, pointers will be 32-bit no matter what Windows version you are on. But!!!!! Be prepared! Windows is the only LabVIEW platform that still has a 32-bit version at all. (Well ok, LabVIEW RT on ARM is also 32-bit, but that is an entirely different story.) All other platforms (Mac and Linux) have had NO 32-bit version of LabVIEW anymore since around LabVIEW 2017. And the countdown for 32-bit LabVIEW for Windows has certainly started already. Once NI has fully ported every toolkit to 64-bit (or discontinued it), expect the LabVIEW 32-bit version to be discontinued within a year or two at most. So if they manage to finally get the cRIO and myRIO support updated to 64-bit (it has been about time for at least 5 years), and the Linx Toolkit is 64-bit too, it's bye bye 32-bit LabVIEW.

DSNewPtr() is the right approach to use. You are basically managing your own memory according to the requirements of the API that you call, since the LabVIEW management contract isn't compatible with it, and it couldn't be compatible with all the possible ways memory can be handled. .Net Interop has a whole slew of support functions to try to deal with such situations, and even that isn't always sufficient to provide a solution for every possible situation without very involved and convoluted code constructs. It's the main crux of trying to marry different APIs together.
-
This works but is bound to cause trouble. A LabVIEW array is dynamic, as LabVIEW is a fully managed programming environment. No, it is not .NET managed; at the time the LabVIEW developers designed the basics that are valid until today, .NET was not even an idea yet, let alone a fact. But it is managed, and the LabVIEW runtime handles all of that behind the curtains for you. This means that a LabVIEW variable, and especially a handle, which is what arrays and strings are, is only guaranteed to be valid for the duration of the Call Library Node call itself. After that node returns, any passed-in array, string or even scalar variable can at any point be resized, relocated or simply deallocated. So the pointer that you get in this way can very well be invalidated immediately after that Call Library Node returns.

For performance reasons, LabVIEW tries to maintain arrays and strings for as long as possible when it can, but deciding whether it can, and whether it prioritizes this rule above other possible rules to improve performance, is a tricky business and can even change between LabVIEW versions. It is pretty safe to assume that an array or string wire that you wire through a Call Library Node, that doesn't branch into other nodes and that is wired to the end of the current diagram structure, is left untouched for the duration of that diagram structure. But even that is not something the LabVIEW management contract guarantees. It's just the most prudent thing to do in almost any case to not sacrifice performance. Once you have a branch in the wire before or after the Call Library Node that retrieves the internal data pointer in the handle, or you do not wire the array data to the diagram structure border, all bets are off as to if and when LabVIEW may decide to modify that handle (and consequently invalidate the data pointer you just got).
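For reference, this is roughly what such a handle looks like on the C side (a minimal sketch following the declarations in LabVIEW's extcode.h; the lifetime comments are the point, not the exact definition):

    #include <stdint.h>

    typedef struct {
        int32_t cnt;       /* number of bytes that follow   */
        uint8_t str[1];    /* actually cnt bytes of data    */
    } LStr, *LStrPtr, **LStrHandle;

    /* Returning (*handle)->str to the diagram hands out the inner pointer.
       The LabVIEW memory manager remains free to resize, relocate or
       deallocate the block behind the handle any time after the Call
       Library Node returns, which silently turns that pointer stale. */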
-
Glad you found the solution. It definitely is THE right approach if you want to avoid going into C code yourself. Just watch out about bitness: this does not work without a conditionally compiled structure if you want to make it 32/64-bit compatible.
-
You can use a bit of that trick too 🙂
-
It's probably my limited command of the English language, but for me this sounds about as intelligible as a dissertation about the n-th dimensional entanglement of virtual particles between different universes.
-
10Gb Ethernet
Rolf Kalbermatter replied to infinitenothing's topic in Remote Control, Monitoring and the Internet
100% CPU load on the server would indicate some form of "greedy" loop. If you create a loop in LabVIEW that has no means of throttling its speed, it will consume 100% of the CPU core it is assigned to, even if there is nothing in the loop; it then effectively does nothing, very fast. More precisely, that loop will consume whatever is left over of that core after other VI clumps have had their chance to snoop some time off of that core.
-
10Gb Ethernet
Rolf Kalbermatter replied to infinitenothing's topic in Remote Control, Monitoring and the Internet
Definitely echo Hooovahh's remark. LabVIEW TCP nodes may limit the effectively reachable throughput, since they do their own intermediate buffering, which adds some delay to the read and write operations, but they use select() calls to asynchronously control the socket, which should do a highly efficient yield on the CPU when there is nothing to do yet for a socket. And the buffer copies themselves should not be able to max out your CPU: 2 Gbps comes down to 250 MB/s, which, even if you account for double buffering, once in LabVIEW and once in the socket, should not be causing a 100% CPU load. Or did you somehow force your TCP server and client VIs into the UI thread? That could have pretty adverse effects, but it would also be noticeable in that your LabVIEW GUI starts to get very sluggish.
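For readers unfamiliar with select(): a rough, generic BSD-socket sketch of why that kind of waiting is cheap (my own illustration, not LabVIEW's actual implementation).

    #include <sys/select.h>
    #include <sys/time.h>

    /* Wait until the socket has data to read or the timeout expires.
       While waiting, the thread is blocked in the kernel and consumes
       no CPU, unlike a busy poll that would spin at 100%. */
    static int wait_readable(int sock, long timeout_ms)
    {
        fd_set readSet;
        struct timeval tv;

        FD_ZERO(&readSet);
        FD_SET(sock, &readSet);
        tv.tv_sec = timeout_ms / 1000;
        tv.tv_usec = (timeout_ms % 1000) * 1000;
        /* > 0: readable, 0: timeout, < 0: error */
        return select(sock + 1, &readSet, NULL, NULL, &tv);
    }
-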
I haven't tried it, but in your minimal C wrapper you should be able to install a SIGTERM handler in this way, and in there you could call a second export in the shared library to inform your LabVIEW program that it needs to shut down now!

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "SharedLib.h"

    /* Forward the signal to LabVIEW through the second export */
    void handler(int signum)
    {
        SharedLibSignal(signum == SIGTERM);
    }

    int main(void)
    {
        struct sigaction action;
        memset(&action, 0, sizeof(action));
        action.sa_handler = handler;
        if (sigaction(SIGTERM, &action, NULL) == -1)
        {
            perror("sigaction");
            exit(EXIT_FAILURE);
        }
        return SharedLibEntryPoint();
    }
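As for what that second export could do on the LabVIEW side, one hypothetical sketch (my own; it uses PostLVUserEvent from LabVIEW's extcode API, and the export names and registration scheme are assumptions, not anything from the original post):

    #include "extcode.h"

    static LVUserEventRef gShutdownEvent = 0;

    /* Called once by the LabVIEW program at startup to hand over a
       user event refnum it has registered for in an Event structure. */
    MgErr SharedLibRegisterEvent(LVUserEventRef *ref)
    {
        gShutdownEvent = *ref;
        return mgNoErr;
    }

    /* Called from the signal handler in the wrapper. Note that posting
       an event from a signal handler is not strictly async-signal-safe;
       a more careful version would only set a flag here and post the
       event from a worker thread. */
    void SharedLibSignal(int isTerm)
    {
        LVBoolean term = isTerm ? LVTRUE : LVFALSE;
        if (gShutdownEvent)
            PostLVUserEvent(gShutdownEvent, &term);
    }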
-
write choosing bit in holding register
Rolf Kalbermatter replied to oplc m's topic in LabVIEW General
What code do you use? What device? Your address seems to indicate the Modbus decimal addressing scheme. Most LabVIEW Modbus libraries I know of, however, use zero-based addressing with explicit register type selection. This means you need to remove the first digit from your address (the number 4) and decrement the remaining address by one to get a zero-based address. The leading 4 marks a Holding Register, which is read with Modbus Function Code 3 (Read Holding Registers) and written with Function Code 6 or 16 (Write Single/Multiple Registers). Addresses starting with 3 are Input Registers, which can only be read (Function Code 4); there is no Write Input Register operation, as it would not make any sense to write to an input. So when using the NI Modbus library, for instance, in order to access your Modbus address 40001 you would use the Read or Write Modbus function, select the Holding Register group and pass an address of 0. And when using the newer LabVIEW Modbus library you would likewise select the Read or Write Holding Registers function and pass an address of 0. Input registers (3xxxx addresses) can only ever be read, never written.
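As a hypothetical illustration of that address translation (my own sketch, not code from any particular Modbus library):

    #include <stdint.h>

    typedef enum { COIL, DISCRETE_INPUT, INPUT_REGISTER, HOLDING_REGISTER } RegType;

    /* 40001 -> HOLDING_REGISTER, zero-based address 0 */
    static int translate(uint32_t decimalAddr, RegType *type, uint16_t *zeroBased)
    {
        uint32_t prefix = decimalAddr / 10000;   /* leading digit       */
        uint32_t offset = decimalAddr % 10000;   /* one-based remainder */

        if (!offset)
            return -1;
        switch (prefix)
        {
            case 0: *type = COIL;             break;
            case 1: *type = DISCRETE_INPUT;   break;
            case 3: *type = INPUT_REGISTER;   break;
            case 4: *type = HOLDING_REGISTER; break;
            default: return -1;
        }
        *zeroBased = (uint16_t)(offset - 1);
        return 0;
    }
-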
I'm not going to fight anyone. I simply don't use LGPL software. 😁
-
I would think a link to the original project's website that has the downloads available could also suffice. Of course, that leaves you in a bit of a bind if the original developer's site goes down or is otherwise made unavailable.
-
Only if you make absolutely no changes to the library and use some form of dynamic linking. If you make any change to the LGPL portion, you are obligated to distribute that change to any user of your software who asks for it. And if you don't use dynamic linking, your entire project becomes part of the "work" that the LGPLed library represents. There exists no broadly accepted technology that lets you replace statically linked libraries in an end product with some other libraries. LabWindows/CVI has/had a technique that lets you actually load .lib files as if they were shared libraries, but that was a highly CVI-specific feature that no other compiler I'm aware of really supports.
-
Personally I think the differences between MIT, BSD, Apache and Commons-like licenses are fairly small. And unless your project ends up being a huge success that storms the world (a fairly small chance in the LabVIEW world 😁), you won't notice a real difference between them. The ones that clearly stand apart from these are the GPL and LGPL licenses, which, while open source too, try to force any user of them (to a lesser degree with the LGPL) to open-source their entire code too.
-
Actually they might, but the Mac does form an extra obstacle. For Linux they were pretty much on track to finally provide a real DAQmx driver, since they had to develop it for their cRIO platform anyhow. The only problem is the packaging, as there are not only at least three different package formats out there (rpm, deb and opkg, with the last one used for the NI embedded platforms) but also many other significant differences between distributions that make installing a hardware support package a complete pain in the ass. And that is not even mentioning the kernel folks' fight against allowing non-open-sourced kernel drivers to run in their kernel.

That is in the nature of the beast. These platforms have fairly differing ideas about layout composition in the underlying graphics subsystem, and to not make matters too easy there always remains the issue of fonts and their licensing, which makes transferring a layout pixel-accurately across systems pretty much impossible. Unfortunately the LabVIEW folks chose to implement a pixel-based UI system rather than an arbitrary graphics coordinate system, but that is understandable. The only platform back in those days that had some form of more abstract coordinate system was QuickDraw on the Mac (and X Window also has more abstract coordinates, as it was from the beginning designed to be a remote API where the client did not know, nor care, about the actual graphics hardware used on the server, and sometimes the server has no graphics hardware of its own). Windows GDI was purely pixel-oriented, and that cost Microsoft a lot of hacks later on to support high-resolution displays in Windows. GDI is still in essence pixel-based to the current day, and that is the API LabVIEW uses to this day to draw to the screen.
-
Actually, supporting the M1/M2 chips would not be such a big deal for NI as far as LabVIEW goes. The LLVM compiler backend they use has already supported those chips for quite some time. And the MacOSX version of LabVIEW itself shouldn't really give too many problems to compile with Xcode for targeting the M1 hardware, since they already did that for quite a few versions, and the 64-bit version of LabVIEW for Mac did away with quite a few of the older Carbon compatibility interfaces. What will be trickier is support for DAQmx, NI-488.2 and other hardware interface drivers. Not impossible, but quite a bit of work to get right, and the most intense part is probably all the testing needed.
-
Any hardware driver is almost certain to not work. Those hardware drivers depend on kernel drivers that must run in the Windows kernel. And it is almost certainly not possible to fully emulate the x86 hardware in ring 0, which is the CPU mode in which the kernel executes. Emulating that part, with all the ring context switches that must occur whenever code execution transitions between kernel and user space, is something that no CPU emulator gets fully right to this day. The same issue exists when you try to run x64 LabVIEW on an M1 Apple. LabVIEW itself works with minor tweaks to some MacOSX configuration settings for the LabVIEW application, but don't try to get any hardware driver installed unless you want to brick your MacOSX installation. Rosetta2, which is the Apple equivalent for emulating an x64 CPU on the M1, does a remarkable job for user space code, but Apple explicitly states that it can NOT emulate an x64 CPU in kernel space and actively tries to prevent the system from installing one anyhow. I suppose Apple might have been able to create a Rosetta version that even works for kernel mode code, but I have a strong suspicion that they wanted this to work before the end of this decade, so they purposefully limited the scope to only emulate user space code. 😁
-
It definitely wasn't free last time I checked. This page would agree with that, and the product page would agree too. You need the Deployment license for every computer on which you want to run an executable that uses the DSC module. There are a few functions of the DSC module that do not necessarily require a license; maybe the user manager component is part of that. Yes, it is Windows only, 32-bit only and pretty much deprecated.
-
Anyone know of online User Group options?
Rolf Kalbermatter replied to FyreTitan's topic in Certification and Training
There might be no difference technically. But there is one in terms of acknowledgment that you did attend. Typically there is some verification with the user group organizer when someone claims to have attended one. If you watch a recording, it would be hard for NI to verify that you actually did so and didn't just claim to have done so.
-
Lost .Net Path when create exe
Rolf Kalbermatter replied to Bobillier's topic in Calling External Code
What System DLL? If it is a strongly named assembly and resides in the Global Assembly Cache, something is wrong. If it doesn't reside there, it is NOT a system assembly for sure! If adding it to the directory where the exe file itself resides does not help, it depends on other DLLs/assemblies, and you need to find out which ones and add them to your exe directory too.
-
Very possible, since the Mac is technically Unix too, BSD Unix at that, but still Unix. Intel tries to make their compiler behave as the platform expects. Microsoft tends to try to make it as they feel is right, although I would expect their Visual Studio Code platform to at least have a configurable switch somewhere in one of its many configuration dialogs to determine if it should behave like GCC on non-Windows platforms in this respect. It's not like it would be much of a problem to add "yet another configuration switch" to the zillion already existing ones.