
Rolf Kalbermatter

Members
  • Posts: 3,776
  • Joined
  • Last visited
  • Days Won: 243

Everything posted by Rolf Kalbermatter

  1. I have to admit that I haven't used them yet either! And you could be right about that. They definitely need a VI for each method, as there is no such thing as a LabVIEW front-panel-only VI (at least for official non-NI developers 🙂 ). I would however expect them all to be at least set to "Must Override" by default, if disabling that is even an option.
  2. Just to be clear here. In Java you actually have three types of classes (interfaces, abstract classes and normal classes). Interfaces are like LabVIEW interfaces: they define the method interface but have no other code components. A derived class MUST implement an override for every method defined in an interface. Normal classes are the opposite; all their methods are fully implemented, but they might sometimes be empty do-nothings if the developer expects them to be overridden. Abstract classes are a bit of both. They implement methods but also have method interface definitions that a derived class MUST override. If you have a LabVIEW class that has some of its methods designated with the "Must Override" checkbox you have in fact much the same as a Java abstract class, but not quite. In Java, abstract classes can't be instantiated, just as interfaces can't be instantiated, because some (or all) of their methods are simply not present. LabVIEW "abstract" classes are fully instantiable, since there is an actual method implementation for every method, even if it is usually empty for "Must Override" methods.
  3. That's usually a dependency error. Shared libraries are often not self-contained but reference other shared libraries from other packages, and to make matters worse sometimes also minimum versions or even specific versions of them. Usually a package should declare such dependencies, and unless you use special command line options to tell the package manager to suppress dependency handling, it should attempt to install them automatically. But errors do happen even for package creators, and they might have forgotten to include a dependency. Another option might be that you used the root account when installing it, making the shared library effectively only accessible for root. On Linux it is not enough to verify that a file is there, you also need to check its access rights. A LabVIEW executable runs under the local lvuser account on the cRIO. If the file access rights don't include both the read and execute flags for at least the local user group, your LabVIEW application can't load and execute the shared library, no matter that the file is there.
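     A quick way to see exactly why a library refuses to load is to ask the dynamic loader directly from a small test program on the cRIO (run it as lvuser so permission problems show up too). This is only a sketch; the library path is an example, not your actual file:

         /* build: gcc check.c -ldl -o check */
         #include <dlfcn.h>
         #include <stdio.h>

         int main(void)
         {
             /* RTLD_NOW forces all symbols and dependent libraries to resolve immediately */
             void *lib = dlopen("/usr/local/lib/libmydriver.so", RTLD_NOW);
             if (!lib) {
                 /* dlerror() reports the reason: a missing dependency, access rights, ... */
                 fprintf(stderr, "dlopen failed: %s\n", dlerror());
                 return 1;
             }
             puts("library and all its dependencies resolved fine");
             dlclose(lib);
             return 0;
         }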
  4. Well, my guess is that it is normally a lot safer to compile everything than to trust that the customer did a masscompile before the build. That automatic compile "should" only take time, not somehow stumble over things that for whatever strange reason don't cause the masscompile to fail. That's at least the theory. That it doesn't work out like that in your case is not a good reason to recommend that everyone do it the other way.
  5. I would attack it differently. Send that reboot command to the application itself, let it clean everything up and then have it reboot itself or even the entire machine.
  6. Still, a VERY high frequency if it is true that you don't continuously try to write to that file.
  7. This functionality is a post-LabVIEW 8.0 feature. The original config file VIs originate from way before that. They were redesigned to use queues instead of LVGOOP objects, but not everything that was supposedly working was changed. Also, using the "create or replace" open mode on the Open File node has the same effect. Still, something else is going on here too. The Config file VIs do properly close the file, which is equivalent to flushing the file to disk (save for some temporary Windows caching). Unless you save this configuration over and over again, it would be VERY strange if the small corruption window that such caching could offer would always happen at exactly the moment the system power fails. Something is happening in this application's code that has not yet been reported by the OP.
  8. IMAQ Vision images are by reference, not by value like other LabVIEW data types. Their identifying attribute is the name used when creating the IMAQ Vision image. Making a variant of the image doesn't change anything about the fact that this image is still a reference to the same memory area that the IMAQ refnum refers to. So if you want to create a buffer of 15 s at 60 images per second, you would have to create 900 buffers. That's a lot of memory if you don't use old low-resolution cameras.
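     As a rough worked example (the camera resolution here is only an assumption for illustration): 15 s × 60 frames/s = 900 buffers, and for a 1920 × 1200 8-bit monochrome camera that is 900 × 1920 × 1200 × 1 byte ≈ 2 GB of image memory, before any processing copies.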
  9. Your request is VERY open-ended. LabVIEW is used by many semiconductor manufacturers for testing their chips, especially in the development departments, often in combination with TestStand. But how to access those chip pins varies a lot. Some projects I worked on used rather complex PXI setups with all kinds of programmable power supplies, relay multiplexers and digital and analog IO. Others mainly used programmable SMUs (Source Measurement Units) for characterization of semiconductor parts. In some cases it is as "simple" as using a JTAG test probe to access the on-board JTAG interface. If you are talking about a tester to identify chips, I would go with COTS solutions rather than trying to invent my own. It's simply too complex a topic to try to make your own.
  10. Are you sure the Elemental IO nodes are not actually Hybrid XNodes under the hood? There used to be a toolkit for creating Elemental IO Nodes that you could get after signing an NDA, swearing on your mother's health never to talk about it to anyone and performing a secret voodoo dance. It was required for anyone designing their own C modules to be put into a cRIO chassis who wanted to provide an API to access that module. With the current state of green NI, however, it may be pretty much impossible to get that anymore.
  11. You can either have those purple VIs use an explicit array-of-objects input and output in the upper corners; however, then they are not class methods but simply VIs. Or you can create a new class that has the array of objects as one of its private data elements. Then, instead of just appending the objects to the shift register array, you call a method of that class to add the object to its internal object array. That way your purple methods can be part of that new object collection class.
  12. My Portuguese is absolutely non-existent, but the documentation seems pretty clear. You need to send a binary stream to the device with a specific header and length. Try playing with the display style of the string. First you need a string of four bytes; if you enable Hex Display you can enter "0102 0401" here. Then follows the actual string you want to display, filled up to 110 characters by appending 00 bytes (in hex code). Then follows the epilog with the speed indication and whatever else, which you again want to fill in as a Hex Display string. Concatenate everything into one long string and send it off. The most difficult part is calculating the checksum; everything else is simply putting together the right bytes, either as ASCII characters or hex code. This shows two ways to build such a string to send: "your text string" is the string you want to display, and the CONTROL BLOCK part needs to be further constructed by you to control speed etc. of the display message. Notice the glyphs at the left side of the strings and numeric values indicating the display style: n for normal string display, x for hexadecimal display.
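     For reference, here is roughly what that byte assembly looks like in C. The 0x01 0x02 0x04 0x01 header and the 110-byte zero-padded text field follow the description above; the CONTROL BLOCK contents and the checksum algorithm (a plain byte sum here) are assumptions you need to replace with whatever the device documentation actually specifies:

         #include <stdint.h>
         #include <string.h>

         size_t build_message(const char *text, const uint8_t *ctrl, size_t ctrlLen,
                              uint8_t *out, size_t outSize)
         {
             static const uint8_t header[4] = { 0x01, 0x02, 0x04, 0x01 };
             size_t textLen = strlen(text);
             size_t pos = 0;
             uint8_t sum = 0;

             if (outSize < sizeof(header) + 110 + ctrlLen + 1)
                 return 0;                                  /* caller buffer too small */

             memcpy(out + pos, header, sizeof(header));     /* fixed 4-byte header */
             pos += sizeof(header);

             memset(out + pos, 0x00, 110);                  /* text field, padded with 00 bytes */
             memcpy(out + pos, text, textLen > 110 ? 110 : textLen);
             pos += 110;

             memcpy(out + pos, ctrl, ctrlLen);              /* CONTROL BLOCK: speed etc. (assumed layout) */
             pos += ctrlLen;

             for (size_t i = 0; i < pos; i++)               /* checksum: plain byte sum, an assumption */
                 sum = (uint8_t)(sum + out[i]);
             out[pos++] = sum;

             return pos;                                    /* number of bytes to send to the display */
         }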
  13. It sounds weird to me. Somehow you seem to open a file refnum in that configuration VI without closing it. How that could happen with the NI Config VIs is unclear to me. When you call a VI through the Call Asynchronous node, it is in fact started as its own top-level VI. This means that LabVIEW's resource cleanup rules apply to it separately. Once a top-level VI goes idle (stops executing), LabVIEW will garbage collect any refnums that were opened during its execution in any of its subVIs. So those refnums you see being open will go away as soon as that asynchronous call stops executing. But how the standard INI file VIs could cause this is pretty unclear to me. The NI Config Open VI opens the file, parses its contents, stores it in the object wire and then closes the file. The NI Config Close VI only opens the file for writing when it determines that the configuration contents were modified AND the boolean to save changes is set to true, and then closes it again. If you set that boolean to false it will never open the file at all. This makes me believe one of two things: 1) you do something else with the file in your configuration VI and don't properly close the refnum; 2) you or someone else has modified the NI Config Open VI in your LabVIEW installation so that it no longer properly closes the config file after reading its contents. The only other possibility I can think of is a highly corrupted LabVIEW installation or some weird Windows configuration that keeps files open in the background despite the application having requested to close them. But the fact that making the VI asynchronous "fixes" it would contradict the weird Windows configuration theory. It's simply LabVIEW again requesting to close that file handle as soon as the LabVIEW file refnum is garbage collected, which is the same as what happens when your code explicitly closes the file refnum.
  14. My own class hierarchies seldom go beyond 3. I try to keep it flat, and with interfaces, which don't quite count as classes, it is even easier. I usually used to have a base class that was pretty much nothing else than an interface; before LabVIEW knew interfaces this was the best I could do. I also regularly put some common code in there, so strictly speaking it is an abstract class more than an interface, but I digress. A lot of my LabVIEW classes are in fact drivers, to allow different devices or hardware interfaces to be treated the same. For these you seldom end up with more than 3 or 4 hierarchy levels, including the base class (interface). And then I use composition a lot, which doesn't really count as inheritance, so I don't count it here. If you would add that in, things would get a lot deeper, as there are typically several levels of composition. In a project I have been helping on the sidelines, and which I'm going to be working on a lot more in the future, there are actually deeper levels sometimes. Some of them easily approach 9 levels nowadays. The system uses a PPL per class and loads them all dynamically. It works for Windows and NI Linux. Aside from the pretty well-known issues of being very strict about never opening more than one project (each class/PPL needs its own project to work like this) and keeping the paths very strictly consistent (with symbolic links to map the current architecture tree of dependent packed libraries Windows/Linux into a common path), this works quite well. It's a pain in the ass if you try to take shortcuts and quickly do something here and there, but if you can keep to the process flow it actually works. As long as you do this only for Windows, things are fairly simple. PPL loading in LabVIEW for Windows is fairly forgiving in many aspects. On NI Linux, however, things go awry fast and can get very ugly if you are not careful. The entire PPL handling on non-Windows platforms seems to have been debugged only to the point where it started to work, not to where it really works reliably if you don't follow the unwritten rules strictly. For instance, we noticed that if you rebuild a higher level PPL/class under Windows, LabVIEW will happily keep loading child classes that depend on it. Don't try to do that on NI Linux! LabVIEW will simply refuse to load the child class, as it seems to remember that it depended on a different version of the parent than what is there now, and simply bails out during load. So whenever you rebuild a higher level parent class, you need to rebuild the entire dependency chain too. The MGI Solution Explorer is an indispensable tool to keep this manageable.
  15. I know such users too. But I have also come across another kind of user, or rather not come across them often. We write an application, install it and test it, and the customer seems happy. Years later they call us because they want a new feature or extension, and you go there to discuss the feature, take a look at the application, start it up, and almost the first thing that happens is that you see an error message when running a quick test. And the resident operator then comes and tells you: "Ahh yes, that dialog, you have to first start this other terminal program or whatever and do this and that in there, and then you can start this application and all works well." "You mean you have been doing it like this all this time?" "Yes sure, it works, so why bother any further about it?" "Ahh right, ok!" And when you then look at the code you can quickly see where things must go wrong, and sometimes even wonder how it ever could have worked fine, but these people experimented and came up with a workaround that I would never even have dreamed would work. If they had called, it would have been a question of a one or two hour fix, but they never called.
  16. It depends on the license. If you purchase a normal single-seat license, these things are usually separate products that also come with their own serial numbers. If you have a Software Developer Suite license, such as what Alliance Members can purchase (but there are other Developer Suite licenses too), then one single license number can cover pretty much all of NI's software, or at least a substantial subset of it. So you really need to know what sort of license the serial number belongs to.
  17. Makes sense. In this case it's doubly unneeded. Since it is a 2D array, the two dimension sizes already add up to 8 bytes, so there would be no padding even for 64-bit integers. And since the array uses 32-bit integer values here, there is never any padding anyhow.
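     For illustration, this is roughly what such a 2D I32 array handle looks like from C code called through a Call Library Function Node; the type names are made up for this sketch and not taken from extcode.h:

         #include <stdint.h>

         typedef struct {
             int32_t dimSizes[2];    /* rows, columns: two I32s = 8 bytes, so the   */
                                     /* data that follows is already 8-byte aligned */
             int32_t elt[1];         /* row-major data, no padding between rows     */
         } Int32Array2D, **Int32Array2DHdl;

         /* element (r, c) of an array with dimSizes[1] columns */
         static int32_t GetElement(Int32Array2DHdl arr, int32_t r, int32_t c)
         {
             int32_t cols = (*arr)->dimSizes[1];
             return (*arr)->elt[r * cols + c];
         }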
  18. Yes, arrays in LabVIEW are one single block of memory where, for multi-dimensional arrays, the multiple dimensions are all concatenated together. There is no row padding; the rows simply follow each other in memory. The data area is prepended with I32 values indicating the size of each dimension. And yes, arrays can have 0 * x * y * z elements, which is in fact an empty array, but it still maintains the original lengths for each dimension and therefore also allocates a memory block to store those dimension sizes. Only for empty one-dimensional arrays (and strings) does LabVIEW internally allow a NULL pointer handle to be equivalent to an array with a dimension size of 0. If you pass such handles to C code through the Call Library Node you have to be prepared for that if the handle is passed by reference (e.g. LStrHandle *string). Here the string variable can be a valid handle with a length of 0 or greater, or it can be a NULL pointer. If your C code doesn't account for that and tries to reference the string variable with LStrBuf(**string) for instance (but you should anyhow use LStrBufH(*string) instead, which is prepared to not crash on a NULL handle), bad things will happen. For handles passed by value (e.g. LStrHandle string) this doesn't apply, since while handles are relocatable in principle, there would be no way for the function to create a new handle and pass it back to LabVIEW if LabVIEW passed a NULL handle in. In this case LabVIEW will always allocate a handle and set its length to 0 if an empty array is to be passed to the function.
I do believe that your explanation about the value to subtract is likely misleading, however. The pointer reported by the MemInfo function is likely the actual data pointer to the first element of the array. There is one int32 for each dimension located before that, before you get back to the actual pointer value contained in the handle. And that value is what DSRecoverHandle() needs. The way it works is that the memory block referred to by a handle actually contains extra bytes in front of the start address of the handle pointer. This area stores information such as the actual handle that refers to this handle pointer, the total allocated storage in bytes for that handle (minus this extra management information), and some area for flags that was used when LabVIEW still had two distinct handle types (AZ and DS). AZ handles could be dynamically relocated by the memory manager whenever it felt like it, unless a flag indicated that the handle was locked. To set and clear this flag there were the AZLock() and AZUnlock() functions. Trying to access an AZ handle without locking it could bomb your computer, the Macintosh equivalent of blue screens back in those days: you got a dialog with a number of bombs that indicated the type of 68k exception that had been triggered, and yes, after acknowledging that dialog the application in question was gone. DS handles are never relocated by the memory manager itself; the application needs to do an explicit DSSetHandleSize() or DSDisposeHandle() for a particular handle to change.
However, you should not try to rely on this information. Where LabVIEW stores the handle value and handle size (and whether it even does so) is platform, compiler and version dependent. And since it is private code deep down in the memory manager, that is fine; the entire remainder of LabVIEW does not care and is not aware of this.
The only people who can do anything useful with that information are LabVIEW developers who might actually need to debug memory manager internals. For everyone else, including LabVIEW users, it is utterly useless. So how much you would need to subtract from that pointer would almost certainly depend on the number of dimensions of your array and not on the bitness you operate in. It's 4 bytes per dimension, BUT! There is a little gotcha: on platforms other than Windows 32-bit, the first data element in the array is naturally aligned. So if your array is an array of 64-bit integers or double precision floats, the actual difference to the real start of the handle needs to be a multiple of 8 bytes on non-Windows 32-bit (and Pharlap) platforms, since that is the size of the array data element.
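     As a small illustration of the by-reference case described above, a CLFN function that returns a string must be prepared for the incoming handle to be NULL. A minimal sketch, relying on NumericArrayResize handling both a NULL and an existing handle:

         #include "extcode.h"

         /* Fills the string handle passed by reference (LStrHandle *).
            *strH may be NULL when LabVIEW hands in an empty string/array. */
         MgErr FillString(LStrHandle *strH)
         {
             const char msg[] = "hello";
             int32 len = (int32)(sizeof(msg) - 1);

             /* NumericArrayResize allocates a new handle if *strH is NULL,
                otherwise it resizes the existing one */
             MgErr err = NumericArrayResize(uB, 1, (UHandle *)strH, len);
             if (err)
                 return err;

             MoveBlock(msg, LStrBuf(**strH), len);   /* copy the bytes        */
             LStrLen(**strH) = len;                  /* set the string length */
             return noErr;
         }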
  19. The first post does. 😀 I remember CICS and MVS. The mainframes at the company where I did my vocational education as a communication electronics technician were running on this, and the entire inventory, order and production automation was running on it. The terminals were mostly green phosphor displays, 80 * 25 characters. I did some CICS work there, but no COBOL programming. I did however do some Tektronix VAX VMS Pascal programming on the VAX systems they also used in the engineering departments to run simulation, embedded programming and CAD on.
  20. You are likely confusing EtherCAT master and EtherCAT slave devices. You can only use an NI-9144 or NI-9145 as EtherCAT slave. A device needs very specific hardware support in order to be able to provide EtherCAT slave functionality. Part of it is technical and part of it is legal, as you need to pay license costs to the EtherCAT consortium for every EtherCAT slave device. Your NI-9057 can be used as an EtherCAT master using the Industrial Communications for EtherCAT driver software, but is otherwise simply a normal cRIO device. The EtherCAT master functionality has to be specifically programmed by you using the Industrial Communications for EtherCAT functionality. You may want to check out this Knowledge Base article. The order of steps is quite important: without first configuring one of the Ethernet ports as EtherCAT, the auto-detection function won't be able to show you the EtherCAT option.
  21. The ini key enables the UI options to actually set these things. The configuration for those things is generally some flag or other thing that is stored with the VI. So yes it will stick. Except of course if you do a save for previous. If the earlier version you save to did not know that setting, it is most likely simply lost during the save for previous.
  22. Any chance that it is operating in bipolar mode? Then the MSB would be the sign bit!
  23. No, byte swapping happens in both cases. The code with and without Byte Array To String is functionally equivalent. This is an oversight in the optimization of the Typecast node, where it takes a shortcut in the case of a string input but doesn't apply that shortcut for a byte array too, which in essence is still the same as a LabVIEW string (but shouldn't have been anymore for many, many years already). The Byte Array To String is in terms of runtime performance pretty much a NOP, since the two are technically exactly the same in memory, but it enables a special shortcut in the Typecast function that handles string inputs differently than other datatypes.
  24. The comparison is however not entirely fair. MoveBlock does simply a memory move; Typecast does a byte and word swap for every value in the array, so it is doing considerably more work. That is also why Shaun had to add the extra block in the initialization, using another MoveBlock to generate the byte array used in the MoveBlock call. If it used the same initialized buffer, the resulting values would look very weird (basically all ±Inf). But you can't simulate the Typecast by adding Swap Bytes and Swap Words on the double array. Those Swap primitives only work on integer values; for single and double precision values they are simply a NOP. I would consider it almost a bug that Typecast does swapping for single and double precision values but Swap Bytes and Swap Words do not. It doesn't seem entirely logical.
  25. Typecast does a few things to make sure the input buffer is properly sized for the desired output type. For instance, in your byte array to double array situation, if your input is not a multiple of 8 bytes, it can't just reuse the input buffer in place (it might never do that, but I'm not sure; I would expect that it does if that array isn't used anywhere else by a function that wants to modify/stomp it). But if it does, it has to resize the buffer and also adjust the array size in any case; and if it doesn't reuse the buffer, it is anyhow a dog slow operation 😃. An extra complication with Typecast is that it always does big endian normalization. This means that on every still-shipping LabVIEW platform it will byte swap every element in the array appropriately. This may be desired, but if it isn't, fixing it by adding Swap Bytes and Swap Words functions on the resulting array actually has several problems: 1) it costs extra performance for swapping the bytes in Typecast and then again for swapping them back; a simple memcpy() would be much more performant for sure, even if it requires a memory allocation for the target buffer. 2) If LabVIEW ever gets a big endian platform again (we can dream, can't we), your code will potentially do the wrong thing depending on who created the original byte array in the first place.
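     To make the difference discussed in the last three posts concrete, this C sketch shows both behaviours: a Typecast-like conversion that byte swaps each 8-byte element (big endian normalization) and a MoveBlock-like plain copy. The helper name bswap64 is made up for this sketch:

         #include <stdint.h>
         #include <string.h>

         static uint64_t bswap64(uint64_t v)
         {
             v = (v >> 32) | (v << 32);                                           /* swap 32-bit halves       */
             v = ((v & 0xFFFF0000FFFF0000ULL) >> 16) | ((v & 0x0000FFFF0000FFFFULL) << 16); /* 16-bit words  */
             v = ((v & 0xFF00FF00FF00FF00ULL) >> 8)  | ((v & 0x00FF00FF00FF00FFULL) << 8);  /* bytes         */
             return v;
         }

         /* Typecast-like: interpret the bytes as big endian doubles */
         void bytes_to_doubles_swapped(const uint8_t *src, double *dst, size_t count)
         {
             for (size_t i = 0; i < count; i++) {
                 uint64_t raw;
                 memcpy(&raw, src + i * 8, 8);
                 raw = bswap64(raw);              /* big endian -> little endian host */
                 memcpy(&dst[i], &raw, 8);
             }
         }

         /* MoveBlock-like: plain copy, bytes are taken in host (little endian) order */
         void bytes_to_doubles_copy(const uint8_t *src, double *dst, size_t count)
         {
             memcpy(dst, src, count * 8);
         }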