Everything posted by Rolf Kalbermatter

  1. gcc should only be required if you intend to recompile the shared library yourself. However, glibc compatibility between different compiler versions is always a pain. Usually, compiling with the oldest version you expect to be used is best.
  2. That means that your system is missing some dependencies. To solve this we would need a list of the possible dependencies, and their versions, that this library may have. Aside from obvious dependencies that should be apparent to anyone who has compiled this library, you would also want to know the system and gcc version on which it was compiled. Depending on that, there might be various other dependencies that your Ubuntu system may or may not come preinstalled with in the correct version.
  3. It's guessing, but I could imagine that there is actually a possible situation where the TestStand editor doesn't know about the friendship of objects but the user may want to select a community-scoped class anyhow. If that class is then executed in a context that has the friendship relationship, it still succeeds; otherwise it gives a failure. Bad UX, maybe. A feature that makes things possible that should be possible, quite likely. Fixing that may require teaching TestStand about LabVIEW implementation-specific details and presenting its test adapters in a way that forces dependencies into the TestStand paradigm that it doesn't really care about otherwise. Likely weeks or months of extra work and a brittle interface that can fall flat on its nose anytime LabVIEW makes subtle changes to any of these implementation-specific private features. Much safer to leave this inconsistency in there, save time, sweat and money, and call it a day.
  4. The 32-bit version may be difficult to test. 2016 was the last version that had a 32-bit version of LabVIEW both on Mac and Linux. After that it was 64-bit only. Not sure how many people still have a 2015 or 2016 installation of those.
  5. Or you can execute this VI to know if your process is elevated or not. Is Elevated.vi
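     In case the attachment isn't available: the Windows check such a VI presumably wraps boils down to something like the following C++ sketch (the function name is mine, error handling is trimmed):

        #include <windows.h>
        #include <cstdio>

        // Returns true if the current process runs with an elevated token.
        static bool IsProcessElevated()
        {
            HANDLE token = NULL;
            TOKEN_ELEVATION elevation = {};
            DWORD returned = sizeof(elevation);
            bool elevated = false;

            if (OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &token))
            {
                if (GetTokenInformation(token, TokenElevation, &elevation,
                                        sizeof(elevation), &returned))
                    elevated = (elevation.TokenIsElevated != 0);
                CloseHandle(token);
            }
            return elevated;
        }

        int main()
        {
            std::printf("Elevated: %s\n", IsProcessElevated() ? "yes" : "no");
            return 0;
        }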
  6. I'm not sure what you want to do. All rows from a 2D array sounds like the whole 2D array to me. If you mean a particular column that contains a value for every row, just check out the Index Array function. If you connect a 2D array to its input, it will expand to have one index input per dimension. Wire the index of the particular row or column you want to extract and leave the other index unconnected.
  7. No, you can't launch VIs as administrator. You need to launch LabVIEW (or VIPM) as such in order to have a VI executed with administrator rights. Windows does not allow changing the privileges of a process after it has been launched; if you find a way to do that, you have found a zero-day exploit that will be closed as soon as Microsoft learns about it. What you could do is add an executable to the installation package that is configured, through a manifest file, to require administrator rights, and then launch that. Launching it will cause a privilege escalation dialog and require the user to enter the login credentials of an admin if they aren't already admin. The dialog will appear even if the user is already an admin, but in that case they won't have to enter a password again, only confirm that they do want that executable launched.
  8. While your previous approach might pose problems depending on what you intend to do with the data, as the number of read samples can be very variable, your current approach honestly sounds like a corner case. What do you mean by samples per channel being 1000? Is that at the Create Task? That would be the hint for DAQmx about how much buffer to allocate and should actually be higher than the number of samples you want to read per iteration. My experience is that one read per 10 ms is not safe under Windows, but one read per 100 to 200 ms can sustain operation for a long time if you make the internal buffer big enough (I usually make it at least 5 times the intended number of read samples per interval).
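     For the launching part, the native call behind it could look roughly like this C++ sketch ("Helper.exe" and the function name are placeholders for your manifest-marked executable):

        #include <windows.h>
        #include <shellapi.h>

        // Launch a helper executable whose embedded manifest requests
        // requireAdministrator; Windows then shows the UAC prompt by itself.
        bool LaunchElevatedHelper()
        {
            SHELLEXECUTEINFOW sei = { sizeof(sei) };
            sei.fMask  = SEE_MASK_NOCLOSEPROCESS;
            sei.lpVerb = L"open";          // the manifest alone triggers the prompt
            sei.lpFile = L"Helper.exe";    // placeholder: your manifest-marked helper
            sei.nShow  = SW_SHOWNORMAL;

            if (!ShellExecuteExW(&sei))    // fails e.g. when the user declines the prompt
                return false;

            if (sei.hProcess)
            {
                WaitForSingleObject(sei.hProcess, INFINITE);  // optionally wait for it
                CloseHandle(sei.hProcess);
            }
            return true;
        }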
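     As a rough illustration of that sizing rule, here is a sketch using the DAQmx C API (the device name, rate and counts are made-up example values, and error checking is omitted for brevity):

        #include <NIDAQmx.h>

        // Example numbers only: 1 kS/s per channel, one read every 200 ms
        // (200 samples), internal buffer 5x the per-read count.
        int AcquireSketch(void)
        {
            TaskHandle task = 0;
            const int32 samplesPerRead = 200;
            float64 data[200];
            int32 read = 0;

            DAQmxCreateTask("", &task);
            DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                                     -10.0, 10.0, DAQmx_Val_Volts, NULL);
            // For a continuous task the last parameter only sizes the internal
            // buffer; make it comfortably larger than one read's worth of samples.
            DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                                  DAQmx_Val_ContSamps, 5 * samplesPerRead);
            DAQmxStartTask(task);

            for (int i = 0; i < 50; i++)   // roughly 10 seconds of acquisition
            {
                DAQmxReadAnalogF64(task, samplesPerRead, 10.0, DAQmx_Val_GroupByChannel,
                                   data, 200, &read, NULL);
                // process "read" samples here
            }

            DAQmxStopTask(task);
            DAQmxClearTask(task);
            return 0;
        }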
  9. It is not! All the language interfaces they have on that page are simply wrappers around the DLL. Some more complete than others. The C# one seems to import all the functions (well at least a lot), the LabVIEW wrapper is extremely minimalistic.
  10. That loop looks nice, but I prefer to use Initialize Array. 😀 But I'm pretty sure the generated code is in both cases pretty similar in performance. 😁
  11. No! The DLL also operates on native memory, just as LabVIEW itself does. There is no Endianness disparity between the two. Only when you start to work with flattened data (with the LabVIEW Typecast, but not a C typecast) do you have to worry about LabVIEW's Big Endian preferred format. The issue is in the LabVIEW Typecast specifically (and in the old flatten functions that did not let you choose the Endianness). LabVIEW started on Big Endian platforms and hence the flatten format is Big Endian. That is needed so LabVIEW can read and write flattened binary data independent of the platform it works on. All flattened data is Endian-denormalized on import, which means it is changed to whatever Endianness the current platform has so that LabVIEW can work on the data in memory without having to worry about the original byte order, and it is normalized again when exporting the data to a flattened format. But all the numbers that you have on your diagram are always in the native format for that platform!
      Your assumption that LabVIEW somehow always operates in Big Endian format would be a performance nightmare, as it would need to convert every numeric value every time it wants to do some arithmetic or similar on it. That would really hurt! Instead it defines an external flattened format for data (which happens to be Big Endian) and only does the swapping whenever that data crosses the boundary of the currently running system, i.e. when streaming data over some byte channel, be it file IO, network, or a memory byte stream. And yes, when writing a VI to disk (or streaming it over the network to download it to a real-time system, for instance), all numeric data in it is in fact normalized to Big Endian, but when loading it into memory everything is reversed to whatever Endianness is appropriate for the current platform.
      And even if you use Typecast, it only swaps elements if the element size on the input side doesn't match the element size on the output, for instance Byte Array (or String, which unfortunately is still just syntactic sugar for a Byte Array) to something else. Try a Typecast from a (u)int32 to a single precision float: LabVIEW won't swap bytes, since the element size on both sides is the same! That even applies to arrays of (u)int32 to arrays of single precision (or between (u)int64 and double precision floats). Yes, it may seem unintuitive when swapping happens and when not, but it is actually very sane and logical.
      Indeed, and no, there is no problem with Endianness here at all. The only thing you need to make sure is that the array of clusters is pre-allocated to the size needed to copy the elements into, and that you keep three different sizes apart here:
      1) the size of the uint64 array, let's call it n
      2) the size of the cluster array, which must be at least (n + e - 1) / e, with e being the number of u64 elements in the cluster
      3) the number of bytes to copy, which is n * 8
  12. You can forget about that comment about Endianness. MoveBlock is not Endianness aware and operates directly on native memory. Only if you incorporate the LabVIEW Typecast do you have to consider the LabVIEW Big Endian preference. For Flatten and Unflatten you can nowadays choose which Endianness LabVIEW should use, and the same applies to the Binary File IO. TCP used to have an unreleased FlexTCP interface that worked like the Binary File IO, but they never released that, most likely figuring that using Flatten and Unflatten together with TCP Read and Write actually does the same. PS: A little nitpick here: the size parameter of MoveBlock is defined as size_t. This is a 32-bit unsigned integer in 32-bit LabVIEW and a 64-bit unsigned integer in 64-bit LabVIEW.
  13. That's hardly efficient, as you actually copy the memory buffer at least twice (but most likely three times): likely once in the .Net function you call, then with memcpy() in your C++/CLI wrapper, and then again with your GetValueByPointer.xnode. Basically you created a complicated solution to supposedly make something performant, but made it anything but performant. If your C++/CLI DLL instead provides a function where the caller can pass in a pre-allocated array as an actual array (of bytes, integers, doubles, apples or whatever) and request to have the data copied into it, you are already done, without pointer voodoo on the LabVIEW diagram and with at least one memory copy less.
  14. So far it's all guessing. You haven't shown us an example of what you want to do, nor the corresponding C# code that would do the same. It depends a lot on how this mysterious array data pointer by reference is actually defined in the .Net method. Is it a full .Net object, or an IntPtr?
  15. It's definitely a hack. But if it works it works; it may just be a really nasty surprise for anyone having to maintain that code after you move on. It would figure very high on my list of obscure coding. The solution of Shaun is definitely a lot cleaner, without abusing an IMAQ image to achieve your goal. But!!! Is this pointer passed inside a structure (cluster)? If it is passed directly as a function parameter, there really is no reason to try to outsmart LabVIEW. Simply allocate an array of the correct size and configure the parameter as an Array of the correct data type, passed as Array Data Pointer, and you are done. If you want to keep this array in memory to avoid having LabVIEW allocate and deallocate it repeatedly, just keep it in a shift register (or feedback node) and loop it through the Call Library Node. The LabVIEW optimizer will then always attempt to reuse that buffer whenever possible (and if you don't branch that wire anywhere out of the VI or to functions that want to modify it, that is ALWAYS).
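      A small C++ sketch of that size bookkeeping, using LabVIEW's MoveBlock() from extcode.h (the function and parameter names are mine; the destination is assumed to be pre-allocated by the caller):

        #include <extcode.h>   // LabVIEW's external code header, declares MoveBlock()
        #include <cstdint>

        // Sketch of the size bookkeeping only. src holds n uint64 values, dst points
        // to the first element of a pre-allocated array of clusters that each contain
        // e uint64 values. The caller (your diagram) must have allocated dst already.
        void CopyU64sIntoClusters(const uint64_t *src, size_t n, void *dst, size_t e)
        {
            size_t clustersNeeded = (n + e - 1) / e;   // ceil(n / e) clusters
            (void)clustersNeeded;                      // sized/allocated on the diagram

            MoveBlock((ConstUPtr)src, (UPtr)dst, n * sizeof(uint64_t));  // n * 8 bytes
        }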
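      For reference, the DLL side of such an "Array Data Pointer" parameter could look like this hypothetical export (names and types are purely illustrative):

        #include <cstdint>

        // LabVIEW pre-allocates the array and passes a pointer to its first
        // element; the DLL only fills it in place, never reallocates it.
        extern "C" __declspec(dllexport)
        int32_t FillBuffer(double *data, int32_t numElements)
        {
            for (int32_t i = 0; i < numElements; i++)
                data[i] = 0.0;     // replace with the real data production
            return numElements;    // e.g. number of valid elements written
        }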
  16. The thread context switch itself to the UI thread and back again shouldn't and won't take that long; it's more in the tenths of microseconds. But that UI thread may be busy doing your front panel drawing or just about anything else that is UI related or needs to run in the only available single-threaded protected context in LabVIEW, and then the context switch to the UI thread has to arbitrate for it. And that means the LabVIEW code simply sits there and waits until the UI thread finally becomes available again and can be acquired by this code clump.
  17. UI element property nodes ALWAYS execute in the UI thread. This applies to VI Server nodes operating on front panel and control refnums (and almost certainly on diagram refnums too, but that would be pure scripting, so if you do anything time critical there you are definitely operating in very strange territory). CLFN is the Call Library Function Node. These calls can be configured to run either reentrant or in the UI thread. If set to run in the UI thread and the function is lengthy (for instance waiting for some event in the external code), it will consume the UI thread and block it for everybody else, including your nice happy property nodes! Now don't run off and change all CLFNs to run reentrant! If the underlying DLL is not programmed in a way that is multithreading safe (and quite a few are not), you can end up getting all sorts of weird results, from totally wrong computations to outright crashes! So your VI may have worked for many years by chance. But as we all had to learn with those too-good-to-be-true investment returns, results from the past are no guarantee for the future! 🙂
  18. No we can't, and neither can NI for VIs. But these nodes are built in, and the C code behind those nodes can do exactly that and regularly does. Same for custom pop-up menus for nodes: you can't do that for VIs. Pretty much all light yellow nodes (maybe with an exception here or there where a VI fakes being a node) are built in. There is absolutely no front panel or diagram for these, not even hidden. They are implemented directly as LabVIEW-internal objects with a huge C++ dispatch table for pop-up menus, loading, execution, undoing, drawing, etc. The code behind them is directly C/C++. It used to be all pure C code in LabVIEW prior to about 8.0, with a hand-crafted assembly dispatcher, but most of it probably eventually got fully objectified. Managing hundreds of objects with dispatch tables that each contain up to over 100 methods through an Excel spreadsheet, with command-line tools to generate the interface template code, is not something anyone wants to do of their own free will. But in the beginnings of LabVIEW it was the only way to get it working. Experiments with C++ code at that point made the executable explode to many times the size produced by the standard C compiler and made it run considerably slower. The fact that it is all compiled as C++ nowadays is likely still responsible for some of the over 50 MB the current LabVIEW executable weighs, but who cares about disk space nowadays as long as the execution speed is not worse than before.
  19. I have to admit that I haven't used them yet either! And you could be right about that. They definitely need a VI for each method, as there is no such thing as a LabVIEW front-panel-only VI (at least officially, for non-NI developers 🙂 ). I would expect them, however, to all be at least set to "Must Override" by default, if disabling that is even an option.
  20. Just to be clearer here. In Java you actually have three types of classes (interfaces, abstract classes and normal classes). Interfaces are like LabVIEW interfaces: they define the method interface but have no other code components, and a derived class MUST implement an override for every method defined in an interface. Normal classes are the opposite: all their methods are fully implemented, though they might sometimes be empty do-nothings if the developer expects them to be overridden. Abstract classes are a bit of both: they implement some methods but also contain method declarations that a derived class MUST override. If you have a LabVIEW class that has some of its methods designated with the "Must override" checkbox, you have in fact the same as what Java abstract classes are, but not quite. In Java, abstract classes can't be instantiated, just as interfaces can't be instantiated, because some (or all) of their methods are simply not present. LabVIEW "abstract" classes are fully instantiable, since there is an actual method implementation for every method, even if it is usually empty for "Must override" methods.
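      Since LabVIEW classes can't be shown as text here, a rough C++ analog of those three flavors (class names are purely illustrative):

        struct Printer {                  // "interface": declarations only
            virtual void print() = 0;     // every deriving class MUST implement this
            virtual ~Printer() = default;
        };

        struct Device {                   // "abstract class": some code, some holes
            void powerOn() { /* shared, fully implemented behavior */ }
            virtual void reset() = 0;     // abstract method, MUST be overridden
            virtual ~Device() = default;  // Device itself cannot be instantiated
        };

        struct Logger {                   // LabVIEW-style "abstract" class:
            virtual void log() {}         // real (empty) body, "Must override" in spirit,
            virtual ~Logger() = default;  // so Logger IS instantiable
        };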
  21. That's usually a dependency error. Shared libraries are often not self-contained but reference other shared libraries from other packages, and to make matters worse sometimes minimum or even specific versions of them. Usually a package should declare such dependencies and, unless you use special command-line options to tell the package manager to suppress dependency handling, it should attempt to install them automatically. But errors do happen even for package creators, and they might have forgotten to include a dependency. Another option might be that you used the root account when installing it, making the shared library effectively accessible only for root. On Linux it is not enough to verify that a file is there, you also need to check its access rights. A LabVIEW executable runs under the local lvuser account on the cRIO. If the file's access rights don't include both the read and execute flags for at least the local user group, your LabVIEW application can't load and execute the shared library, no matter that the file is there.
  22. Well, my guess is that it is normally a lot safer to compile everything than to trust that the customer did a masscompile before the build. That automatic compile "should" only take time, not somehow stumble over things that for whatever strange reasons don't cause the masscompile to fail. That's at least the theory. The fact that it doesn't work out like that in your case is not a good reason to recommend that everyone do it differently.
  23. I would attack it differently. Send that reboot command to the application itself, let it clean everything up and then have it reboot itself or even the entire machine.
  24. Still, VERY high frequency if it is true that you don't continuously try to write to that file.
  25. This functionality is a post-LabVIEW 8.0 feature. The original config file VIs originate from way before that. They were redesigned to use queues instead of LVGOOP objects, but things that were supposedly working were not all changed. Also, using the "create or replace" open mode on the Open File node has the same effect. Still, something else is going on here too. The Config File VIs do properly close the file, which is equivalent to flushing the file to disk (save for some temporary Windows caching). Unless you save this configuration over and over again, it would be VERY strange if the small corruption window that such caching leaves open always coincided exactly with the moment the system power fails. Something in this application's code is happening that has not yet been reported by the OP.