Everything posted by Rolf Kalbermatter

  1. I'm aware of the limitation with this function. It not only wants to run in the UI thread but also doesn't allow you to specify the context it should run under, so it will likely always use the main application context (which could actually be fine, as each project has its own context). The missing context parameter is logical, since this function existed before LabVIEW 8.0, which introduced (at least publicly) application contexts. I think the mentioned C interface has to do with the function GetCInterfaceFunctionTable(), exported since around LabVIEW 8.0, but that is a big black hole. From what I could see, it returns a pointer to a function pointer table containing all kinds of functions. Some of them used to be exported as external functions from LabVIEW too. But without a header file declaring this structure and the actual functions it exports, it is totally hopeless to think one could use it. Since most of these functions aren't exported from LabVIEW in other ways, they also didn't retain their function names like all those other functions in the official export table. But even with a name it would be very tedious to find out what parameters a function takes and how to call it, especially if it needs to be called in conjunction with other functions in there.
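Just to illustrate how one might even start poking at that black hole, here is a speculative sketch. GetCInterfaceFunctionTable() is the real export name mentioned above, but its return type, the table layout, and everything else below are pure assumptions, not a documented LabVIEW API:

    /* Speculative sketch only: the void** interpretation of the return
       value is an assumption based on the description above. */
    #include <windows.h>
    #include <stdio.h>

    typedef void **(*GetCIfTableFn)(void);  /* assumed signature */

    int main(void)
    {
        /* Only meaningful inside a process that has LabVIEW loaded. */
        HMODULE lv = GetModuleHandleA("LabVIEW.exe");  /* or lvrt.dll */
        if (!lv)
            return 1;
        GetCIfTableFn getTable =
            (GetCIfTableFn)GetProcAddress(lv, "GetCInterfaceFunctionTable");
        if (!getTable)
            return 1;
        void **table = getTable();
        /* Without a header there is no way to know which index is which
           function or what parameters it takes; printing the first few
           pointers is about all one can safely do. */
        for (int i = 0; i < 4; i++)
            printf("table[%d] = %p\n", i, table[i]);
        return 0;
    }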
  2. Sorry, I'm not a curl expert. Could you explain a bit what all these parameters mean and which of them causes problems?
  3. I think CallInstrument() is more promising, although the documentation I found seems to indicate that it is an old function that is superseded by something called the C Interface in LabVIEW. But I haven't found any information about that new interface.

    /* Legacy C function access to call a VI. Newer code should consider
       upgrading to use the C Interface to LabVIEW. */

    /* Flags to influence window behavior when calling a VI synchronously via
       CallInstrument* functions. The following flags offer a refinement to
       how the former 'modal' input to CallInstrument* functions works. For
       compatibility, a value of TRUE still maps to a standard modal VI.
       Injecting the kCI_AppModalWindow flag will allow the VI to stack above
       any Dlg*-based (C-based LV) windows that may be open as well.
       Use kCI_AppModalWindow with caution! Dlg*-based dialogs run at root
       loop, and VIs that run as app modal windows might be subject to
       deadlocking the UI. */
    const int32 kCI_DefaultWindow  = 0L;    ///< in CallInstrument*, display VI's window using VI's default window styles
    const int32 kCI_ModalWindow    = 1L<<0; ///< in CallInstrument*, display VI's window as modal
    const int32 kCI_AppModalWindow = 1L<<1; ///< in CallInstrument*, display VI's window as 'application modal'

    /* Legacy C function access to call a VI. Newer code should consider
       upgrading to use the C Interface to LabVIEW. */
    /* @param viPath      fully qualified path to the VI
       @param windowFlags flags that influence how the VI's window will be shown
       @param nInputs     number of input parameters to send to the VI
       @param nOutputs    number of output parameters to read from the VI
       @return error code describing whether the VI call succeeded

       The actual parameters follow nOutputs and are specified by a
       combination of parameter name (PStr), type (int16*) and data pointer.

       @example
         CallInstrument(vi, kCI_ModalWindow, 2, 1,
             "\07Param 1", int16_Type_descriptor_in1, &p1,
             "\07Param 2", int16_Type_descriptor_in2, &p2,
             "\06Result",  int16_Type_descriptor_res, &res);

       @note This function does not allow the caller to specify a LabVIEW
             context in which to load and run the VI. Use a newer API (such
             as the C Interface to LV) to do so.
       @note Valid values for windowFlags are:
             kCI_DefaultWindow   (0L)
             kCI_ModalWindow     (1L<<0)
             kCI_AppModalWindow  (1L<<1) */
    TH_SINGLE_UI EXTERNC MgErr _FUNCC CallInstrument(Path viPath, int32 windowFlags, int32 nInputs, int32 nOutputs, ...);
  4. They might actually simply be leftovers from the LabVIEW developers from before the Call Library Node gained the Minimum Size control for string and array parameters in LabVIEW 8.2. The old configuration dialog before that did not have any option for this, so they might just have hacked it into the right-click menu for testing purposes, as that did not require a configuration dialog redesign; that redesign might be the main reason this feature wasn't released in 8.0 (or maybe even earlier). There are many ini file settings that are basically just enablers for some obscure method to control a feature while it is still under development; once that feature is released, the ini key either does nothing or enables an option somewhere in the UI that is pretty much pointless by then.
  5. A bit of a wild guess, but there is a function

    MgErr FNewRefNum(Path path, File fd, LVRefNum *refNumPtr);

exported by the LabVIEW kernel which takes a File and a Path (which could be an empty path, as the File IO functions don't really use it themselves) and returns a file refnum that you can then use with the standard File IO functions. Now File is a LabVIEW private datatype, but under Windows it is really simply a HANDLE, and under Linux and MacOSX 64-bit it is a FILE*. So if you can manage to map your stdio fd somehow to a FILE* using some libc functions,

    FILE *file = fdopen(fd, "w");

you might just be lucky enough to turn your stdio file descriptor into a LabVIEW refnum that the normal LabVIEW Read File and Write File nodes can use. Also note that libc actually exports stdin, stdout and stderr as predefined FILE* handles for the specific standard IO file descriptors, so you may not even have to do the fdopen() call above. After you are done with it you should most likely not just call LabVIEW's Close File on the refnum, as that assumes the file descriptor is a real FILE* and simply calls fclose() on it. Maybe that is ok depending on how you mapped the file descriptor to the FILE*, but otherwise just use FDisposeRefNum(LVRefNum refnum) on the refnum and do whatever you need to do to undo the file descriptor mapping.
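A minimal sketch of that idea on Linux, assuming FNewRefNum and FDisposeRefNum behave as described above. The LabVIEW typedefs and export declarations are reconstructed from this post, not taken from an official header, so treat all of them as assumptions:

    #include <stdio.h>
    #include <stdint.h>

    typedef int32_t  MgErr;      /* assumed LabVIEW error type */
    typedef uint32_t LVRefNum;   /* assumed refnum type */
    typedef void    *Path;       /* opaque; an empty path suffices here */
    typedef void    *File;       /* FILE* on Linux/macOS 64-bit, per the post */

    /* Assumed exports; linking requires the LabVIEW runtime library. */
    extern MgErr FNewRefNum(Path path, File fd, LVRefNum *refNumPtr);
    extern MgErr FDisposeRefNum(LVRefNum refNum);

    MgErr WrapFdAsRefNum(int fd, LVRefNum *refNum, FILE **out)
    {
        FILE *f = fdopen(fd, "w");   /* map the descriptor to a FILE* */
        if (!f)
            return 1;                /* some nonzero MgErr */
        *out = f;
        /* NULL stands in for an empty Path; the File IO functions are
           said not to use it themselves. */
        return FNewRefNum(NULL, (File)f, refNum);
    }

    /* When done: FDisposeRefNum(refNum) rather than LabVIEW's Close File
       (which would fclose() the FILE* behind your back), then clean up
       the FILE* yourself. */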
  6. No, not really. I mean something quite different: given a VI, create a sort of function wrapper around it that works as a C function pointer. For that we would need something like

    MgErr CallVIFunc(VIDSRef viRef, int32 numInParams, VIParams *inParams,
                     int32 numOutParams, VIParams *outParams);

with both parameter arrays being something like an array of

    typedef struct {
        LStrHandle controlName;
        int16 *typedesc;
        void *data;
    } VIParams;

That way one could write a C function wrapper in assembly code that converts its C parameters into LabVIEW parameters and then calls the VI as a function. These are not actual functions that exist, just something I came up with. I'm sure something similar actually exists!
  7. That's a bit ambitious! 😀 I would rather think of something along the lines of the Python ctypes package, allowing arbitrary function calls to DLLs, including callbacks and such. We just need to find a method that does the opposite for this: calling a VI as a C function pointer. 😀
  8. Are you using LabVIEW 7.1????? If you use a newer version this should not fix any problem, as this specific problem was actually fixed in LabVIEW 7.1.1. The problem was that the LabVIEW enumeration of directory entries assumed that the first two returned entries were the . and .. entries. Since LabVIEW 7.1.1 the Linux version doesn't make that assumption (the Windows version still did, at least until recently, and that can cause different problems when accessing a Samba mounted directory).
  9. Well, resources are really a MacOS Classic thing. LabVIEW simply inherited them and implemented its own resource manager so it could use them on non-Macintosh systems too. That explains why the ResTemplate page seems to describe the LabVIEW resources nicely. It doesn't; it only describes the Macintosh resources, many of which are used in LabVIEW too. As to the resource templates, I looked at them way back in LabVIEW 5 or 6, and my conclusion was that most of them described the same types that a Macintosh resource file would describe, but NI mostly left out the LabVIEW specific types. And that's not surprising: nobody was ever looking at them, so why bother? 😀
  10. Trying to get at the data pointer of control objects, while maybe possible, wouldn't be very helpful, since the actual layout, very much like that of the VI dataspace pointer, has changed and will change with LabVIEW versions very frequently. Nothing in LabVIEW is supposed to interface to these data spaces directly other than the actual LabVIEW runtime, and therefore there never has been, nor is there nowadays, any attempt to keep those data structures consistent across versions. If it suits the actual implementation, the structures can simply be reordered, and all the code that interfaces to external entities, including saving and loading those heaps, translates automatically to and from standardized data (that includes changing multibyte data elements to Big Endian format).

The MagicCookieJars used to be simply global variables but got moved into the Application Context data space with LabVIEW 8.0. I'm not aware of any function to access those CookieJar pointers. They did not exist prior to LabVIEW 8, as any code referencing a specific CookieJar was accessing it directly by its global address, and I suppose there isn't any public interface to access any of them, since the only code supposedly accessing them sits inside the LabVIEW runtime. The only external functions accessing such refnums either use well known, undocumented manager APIs to access objects (IMAQ Vision control) or use UserData refnums based on the object manager (which has nothing to do with LabVIEW classes but rather with refnums) that reference their cookie jar indirectly through the object class name.

MCGetCookieInfo() requires a cookie jar and the actual refnum, and returns the associated data space for that refnum. What that data space means can be very different for different refnums. For some it's simply a pointer to a more complex data structure that is allocated and deallocated by whatever code implements the actual refnum related functionality. For others it is the data structure itself. What it means is defined when creating the cookie jar, as the actual function to do so takes a parameter that specifies how many bytes each refnum needs for its own data storage. Interfaces managing their own data structures simply use sizeof(MyDataStructPtr), or more generally sizeof(void*), for this parameter; interfaces that use the MagicCookie store for their entire refnum related data structure rather use sizeof(MyDataStruct) here. See the sketch below for the difference.

These interfaces all assume that the code that creates the CookieJar and uses those refnums is all the same code, and there is no general need to let other code peek into this, so there is no public way to access the CookieJar. In fact, if you write your own library managing your own refnums, you would need to store that cookie jar somewhere in your own code. That is, unless you use object manager refnums, in which case things get even more complicated.
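To visualize the difference between those two storage patterns, here is a speculative sketch. The MagicCookie declarations are reconstructed from the description above, not taken from any official header, so every signature here is an assumption:

    #include <stdint.h>
    #include <string.h>

    typedef int32_t  MgErr;          /* assumed */
    typedef uint32_t MagicCookie;    /* the refnum value, assumed */
    typedef void    *MagicCookieJar; /* opaque jar, assumed */

    /* Assumed: copies the cookie's data slot (whose size was fixed when
       the jar was created) into 'info'. */
    extern MgErr MCGetCookieInfo(MagicCookieJar jar, MagicCookie cookie,
                                 void *info);

    typedef struct { int fd; char name[64]; } MyDataStruct;

    /* Jar A was created with sizeof(MyDataStruct *): the slot holds only
       a pointer and the library allocates/frees the struct itself.
       Jar B was created with sizeof(MyDataStruct): the slot IS the struct. */
    MgErr ReadName(MagicCookieJar jar, MagicCookie cookie,
                   int slotIsWholeStruct, char *out, size_t len)
    {
        MgErr err;
        if (slotIsWholeStruct) {
            MyDataStruct info;               /* slot is the whole struct */
            err = MCGetCookieInfo(jar, cookie, &info);
            if (!err)
                strncpy(out, info.name, len);
        } else {
            MyDataStruct *p = NULL;          /* slot is just a pointer */
            err = MCGetCookieInfo(jar, cookie, &p);
            if (!err && p)
                strncpy(out, p->name, len);
        }
        return err;
    }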
  11. Well, they do have (or at least had) an Evaluation version, but that is a specially compiled version with watermark and/or limited functionality. The license manager included in the executable is only a small part of the work. The Windows version uses the FlexLM license manager, but the important thing is the binding to their license server(s). Just hacking a small license manager into the executable that does some verification is not that big a part. Tying it into the existing license server infrastructure, however, is a major effort, and setting up a different license server infrastructure is most likely even more work. That is where the main effort is located. I have a license manager of my own that I have included in a compiled library (the shared library part, not the LabVIEW interface itself), and while it was some work to develop and make it work on all LabVIEW platforms, that pales in comparison to what would be needed to build an online license server, and adding a real e-commerce interface to it would be even more work.
  12. LabVIEW on non-Windows platforms has no license manager built in. This means that if you could download the full installer just like that, there would be no way for NI to enforce a valid license when using it. So only the patches are downloadable without a valid SSP subscription, since they are only incremental installers that add to an existing full install, usually replacing some files. That's supposedly also the main reason holding back a release of the Community Edition on non-Windows platforms. I made LabVIEW run on Wine way back with LabVIEW 5.0 or so, also providing some patches to the Wine project along the way. It was a rough ride and far from perfect even with the Wine patches applied, but it sort of worked. Current Wine is a lot better, but so are the requirements of current LabVIEW in terms of the Win32 API that it exercises. That the NI Package Manager won't work is not surprising; it is most likely HEAVILY relying on .Net functionality and definitely not developed towards the .Net Core specification but rather the full .Net release. I doubt you can get it to work with .Net prior to at least 4.6.2.
  13. Generally speaking, this is fine for configuration or even data files that the installer puts there for reading at runtime. However, you should not do that for configuration or data files that your application intends to write to at runtime. If you install your application in the default location (<Program Files>\<your application directory>), you do not have write access to this folder by default, since Windows considers it a very bad thing for anyone but installers to write to that location.

When an application tries to write there, Windows will redirect the write to a user specific shadow copy, so when you then go and check the folder in Explorer you may wonder why you only see the old data from before the write. This is because, on reading, File Explorer will create a view of the folder showing the original files where they exist and the shadow copy versions only for those files that didn't exist to begin with. Also, the shadow copy is stored in the user specific profile, so if you log in as a different user your application will suddenly see the old settings.

Your writeable files are supposed to be either in a subdirectory of the current user's or the common Documents folder (if a user is supposed to access those files in other ways, such as data files generated by your application), or in a subdirectory inside the current user or common <AppData> directory (for configuration files that you would rather not have your user tamper with by accident). They are still accessible, but kind of invisible, in the by default hidden <AppData> directory. The difference between the current user and common locations needs to be taken into account depending on whether the data written to the files is meant to be accessible only to the current user or to any user on that computer. A sketch of how a C program would resolve those locations follows below.
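For illustration, here is one way to resolve those folders from C with the SHGetKnownFolderPath API; which KNOWNFOLDERID maps to which purpose follows the reasoning above and is my reading, so adapt as needed:

    /* Resolve per-user and all-users locations for writable application
       files on Windows, instead of writing into <Program Files>.
       Link with shell32; the FOLDERID_* GUIDs come from uuid.lib. */
    #include <windows.h>
    #include <shlobj.h>     /* SHGetKnownFolderPath, KNOWNFOLDERID */
    #include <stdio.h>

    static void PrintFolder(const KNOWNFOLDERID *id, const char *label)
    {
        PWSTR path = NULL;
        if (SUCCEEDED(SHGetKnownFolderPath(id, 0, NULL, &path))) {
            wprintf(L"%hs: %s\n", label, path);
            CoTaskMemFree(path);  /* caller must free the returned string */
        }
    }

    int main(void)
    {
        /* Per-user config files the user shouldn't casually edit: */
        PrintFolder(&FOLDERID_RoamingAppData, "current user AppData");
        /* Machine-wide config shared by all users: */
        PrintFolder(&FOLDERID_ProgramData, "common AppData (ProgramData)");
        /* User-visible data files: */
        PrintFolder(&FOLDERID_Documents, "current user Documents");
        PrintFolder(&FOLDERID_PublicDocuments, "common Documents");
        return 0;
    }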
  14. Actually, I'm using the System Configuration API instead. Aside from nisysconfig.lvlib:Initialize Session.vi, nisysconfig.lvlib:Create Filter.vi and nisysconfig.lvlib:Find Hardware.vi, everything is accessed directly using property nodes from the SysConfig API shared library driver, and there is very little on the LabVIEW level that can get wrongly linked.
  15. I haven't benchmarked the FIFO transfer with respect to element size, but I know for a fact that the current FIFO DMA implementation from NI-RIO does pack data to 64-bit boundaries. This made me change the previous implementation in my project from transferring 12:12 bit FXP signed integer data to 16-bit signed integers, since 4 12-bit samples are internally transferred as 64 bits over DMA anyhow, just as 4 16-bit samples are. (In fact I'm currently always packing two 16-bit integers into a 32-bit unsigned integer for the purpose of the FIFO transfer, not because of performance but because of the implementation in the FPGA, which makes it faster to always grab two 16-bit memory locations at once and push them into the FIFO. Otherwise the memory read loop would take twice as much time (or require a higher loop speed) to be able to keep up with the data acquisition. 64 12-bit ADC samples at 75 kHz add up to quite some data that needs to be pushed into the FIFO.) I might consider pushing this up to 64-bit FIFO elements just to see if it makes a performance difference, but the main problem I have is not the FIFO but rather getting the data pushed onto the TCP/IP network in the RT application. Calling libc:send() directly to push the data into the network socket stack, rather than going through TCP Write, seems to have more effect.
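On the host side, that packing scheme amounts to nothing more than shifts and masks. A small sketch of packing and unpacking two signed 16-bit samples in a 32-bit FIFO element; the low/high ordering is my assumption here, the FPGA code defines the actual layout:

    #include <stdint.h>

    /* Pack two int16 samples into one uint32 FIFO element.
       Which sample goes in the low half is the FPGA's choice;
       low = first sample is assumed here. */
    static inline uint32_t pack2(int16_t first, int16_t second)
    {
        return (uint32_t)(uint16_t)first | ((uint32_t)(uint16_t)second << 16);
    }

    /* Unpack on the receiving side. */
    static inline void unpack2(uint32_t elem, int16_t *first, int16_t *second)
    {
        *first  = (int16_t)(elem & 0xFFFF);
        *second = (int16_t)(elem >> 16);
    }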
  16. Ahhh, I see: blocking when you request more data than is currently available. Well, I would in fact not expect the Acquire Read Region to perform much differently in that respect. I solved this in my last project a little differently though. Rather than calling FIFO Read with 0 samples to read, I used the <remaining samples> from the previous loop iteration to calculate an estimate of the number of samples to read, similar to this formula: <previous remaining samples> + (<current sample rate> * <measured loop interval>). Works flawlessly and saves a call to Read FIFO with 0 samples to read (which I would not expect to take any measurable execution time, but still). I need to do this since the sampling rate is in fact externally determined through a quadrature encoder, so it can dynamically change over a pretty large range. See the sketch below. But unless you can do all the data intensive work inside the IPE, as in the example you show, the Acquire FIFO Read Region offers no advantage in terms of execution speed over a normal FIFO Read.
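In C-like pseudocode, the estimation amounts to this; Fifo_Read and its backlog return value are hypothetical stand-ins for whatever the RIO API reports as <remaining samples>, the arithmetic is the point:

    #include <stdint.h>

    /* Hypothetical: reads numSamples into buf and returns the backlog
       of samples still left in the FIFO after the read. */
    extern uint32_t Fifo_Read(uint32_t numSamples, int16_t *buf);

    void AcquisitionLoop(int16_t *buffer)
    {
        uint32_t remaining    = 0;       /* <previous remaining samples> */
        double   sampleRate   = 75000.0; /* changes with the encoder speed */
        double   loopInterval = 0.050;   /* <measured loop interval>, seconds */

        for (;;) {
            /* <previous remaining> + <sample rate> * <loop interval>: */
            uint32_t request = remaining
                             + (uint32_t)(sampleRate * loopInterval);
            remaining = Fifo_Read(request, buffer);
            /* ... process 'request' samples; update sampleRate and
               loopInterval from the encoder and a timestamp ... */
        }
    }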
  17. Actually there is another option to avoid the Typecast copy. Typecast is in fact not just a type reinterpretation like in C; it also does byte swapping on all Little Endian platforms, which are currently all but the old VxWorks based cRIO platforms (those use a PowerPC CPU, which by default operates in Big Endian; the CPU can support both Endian modes but is typically always used in Big Endian mode). If all you need is a byte array, then use the String to Byte Array node instead. This is more like a C typecast, as the data type in at least Classic LabVIEW doesn't change at all (somewhat sloppily stated: a string is simply a byte array with a different wire color 😀). If you need a typecast sort of thing because your numeric array is something other than a byte array, but you don't want the endianizing, then with a bit of low level byte shuffling (preferably in C, but with enough persistence it could even be done on the LabVIEW diagram, although not 100% safely) you could write a small function that swaps out the two handles with an additional correction of the numElms value in the array, and do this as a virtually zero cost operation.

I'm not sure the Acquire Write Region would save you as much as you hope. The DVR returned still requires your LabVIEW data array to be copied into the DMA buffer, and there is also some overhead from protecting the DVR access against the DMA routine, which will attempt to read the data. Getting rid of the inherent copy in the Typecast function is probably more performant.

Why would the Read FIFO method block with high CPU usage? I'm not sure what you refer to here. Sure, it needs to allocate an array of the requested size and then copy the data from the DMA buffer into this array, and that of course takes CPU, but as long as you don't request more data than there currently is in the DMA buffer it does not "block"; it simply has to do some considerable work. Depending on what you then do with the data, you do not save anything by using the Acquire Region variant. This variant is only useful if you can do all of the operations on the data inside the IPE in which you access the actual data. If you only use the IPE to read the data and then pass it outside of the IPE as a normal LabVIEW array, there is absolutely nothing to be gained by using the Acquire Read Region variant. In the case of Read FIFO, the array is generated (and copied into) in the Read FIFO node; in the Acquire Read Region version it is generated (and copied into) as soon as the wire crosses the IPE border. It's pretty much the same effort, and there is really nothing LabVIEW could do to avoid that. The DVR data is only accessible without creating a full data copy inside the IPE.

I recently did a project where I used an Acquire Read Region but found that it had no real advantage over the normal FIFO Read, since all I did with the data was in fact pass it on to TCP Write. As soon as the data needs to be sent through TCP Write, the data buffer has to be allocated as a real LabVIEW handle anyhow, and then it doesn't really matter if that happens inside the FIFO Read or inside the IPE accessing the DVR from the FIFO Region. My loop timing was heavily dominated by the TCP Write anyhow. As long as I only read the data from the FIFO, my loop could run consistently at 10.7 MB/s with a steady 50 ms interval and very little jitter. As soon as I added the TCP Write, the loop timing jumped to 150 ms and steadily increased until the FIFO was overflowing.
My tests showed that I could go up to 8 MB/s with a loop interval of around 150 ms +- 50 ms jitter without the loop starting to run away. This was also caused by the fact that the ethernet port was really only operating at 100 Mb/s, due to the switch I was connected to not supporting 1 Gb/s. The maximum theoretical throughput at 100 Mb/s is only 12.5 MB/s, and the realistic throughput is usually around 60% of that. But even with a 1 Gb/s switch, the overhead of TCP Write dominated the loop by far, making other differences, including the use of an optimized Typecast without any Endian normalization compared to the normal LabVIEW Typecast with its Endian normalization, disappear into unmeasurable noise.

And this is nitpicking really, and likely only costs a few ns of extra execution time, but the calculation of the number of scans inside the loop, to resize the array to a number of scans by number of channels, should all be done in integer space anyhow, using Quotient & Remainder. There is not much use in double precision values for something that is inherently an integer number. There is even the potential for a wrong number of scans in the 2D array, since the To I32 conversion does standard rounding and so could end up one higher than the number of full scans in the read data. A small example of the difference follows below.
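To make that last point concrete, a tiny C illustration of why rounding instead of integer division can yield one scan too many (the numbers are made up):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        int samples = 1003, channels = 4;

        /* Double precision + round-to-nearest, like a To I32 conversion: */
        int scansRounded = (int)lround((double)samples / channels);  /* 251 */

        /* Integer Quotient & Remainder: */
        int scans = samples / channels;   /* 250 full scans */
        int rest  = samples % channels;   /* 3 samples left over */

        printf("rounded: %d, quotient: %d, remainder: %d\n",
               scansRounded, scans, rest);
        return 0;
    }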
  18. The application builder internally uses basically a similar method to retrieve the icons as an image data cluster to then save into the exe file resources. So whatever corruption happens in the built executable is likely the root cause of both the yellow graph image and the yellow icon. And it's likely also dependent on something like the graphics card or its driver or something similar, as otherwise it would have been found and fixed long ago (if it happened on all machines).
  19. You can sign up and do it too 😀
  20. I remember some similar issues in the past on plain Windows. Not exactly the same, but it seemed to have to do with the fact that LabVIEW somehow didn't get the Window_Enter and Window_Leave events from Windows anymore, or at least not properly. And it wasn't just LabVIEW itself; some other applications started to behave strangely too. Restarting Windows usually fixed it. And at some point it went away just as it came, most likely with a Windows update or something. So I think it is probably something about your VM environment's cursor handling that gets Windows to behave in a way that doesn't sit well with LabVIEW.
  21. Because your logic is not robust. First, you seem to use a termination character, but your binary data can contain termination character bytes too, and then your loop aborts prematurely. Your loop currently aborts when error == TRUE, or when the error code indicates that the bytes received don't match the bytes requested. Most likely your VISA timeout is a bit critical too? Also note that crossrulz reads 2 bytes for the <CR><LF> termination characters, while your calculation only accounts for one. So if what you write is true, there would still be a <LF> character in the input queue that will show up at the next VISA Read you perform. See the sketch below for a more robust read loop. As to PNG containing a checksum: yes it does, multiple even! Every chunk in a PNG file carries a CRC32 checksum, and the image data chunks are compressed with the zlib deflate algorithm, which adds its own Adler-32 checksum!
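As a sketch with the VISA C API, this is how one would read a known-length binary block robustly: disable the termination character and loop until all expected bytes have arrived. The session setup and how the expected length is obtained are left out; the loop is the point:

    #include <visa.h>

    /* Read exactly 'total' bytes of binary data from an open VISA session,
       instead of relying on a termination character that may also occur
       inside the binary payload. */
    ViStatus ReadBinaryBlock(ViSession instr, unsigned char *buf, ViUInt32 total)
    {
        /* Termination characters and binary data don't mix: */
        ViStatus st = viSetAttribute(instr, VI_ATTR_TERMCHAR_EN, VI_FALSE);
        if (st < VI_SUCCESS)
            return st;

        ViUInt32 got = 0;
        while (got < total) {
            ViUInt32 count = 0;
            st = viRead(instr, buf + got, total - got, &count);
            got += count;
            if (st < VI_SUCCESS)   /* real error (e.g. timeout): bail out */
                return st;
            /* st may be a warning such as VI_SUCCESS_MAX_CNT when less than
               the requested amount arrived; keep looping until done. */
        }
        return VI_SUCCESS;
    }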
  22. I don't think that would work well for Github and similar applications. FINALE seems to work (I haven't tried it yet) by running a LabVIEW VI (in LabVIEW, obviously) to generate HTML documentation from a VI or VI hierarchy. This is then massaged into a format that their special WebApp can handle to view the different elements. Github certainly isn't going to install LabVIEW on their servers in order to run this tool. They might (very unlikely) use the Python code created by Mefistoles and published here to scan project repositories for LabVIEW files. But why would they bother with that? It's not like LabVIEW is ever going to be the next big hype in programming. What happened with Python is pretty unique and definitely not going to happen for LabVIEW. There never was a chance for that, and NI's stance of not seeking international standardization didn't help, but I doubt they could have garnered enough widespread support for it even if they had seriously wanted to. Not even if they had gotten HP to join the bandwagon, which would have been as likely as the devil going to church 😀.
  23. It used to be present in the first 50 on the TIOBE index, but as far as I remember the highest position was somewhere in the mid-30s. The post you quoted states that it was at 37 in 2016. Of course, reading back my comment I can see why you might have been confused: there was an "n" missing. 😀 Github LabVIEW projects are few and far between. Also, I'm not sure if Github actively monitors for LabVIEW projects, and based on what criteria (file endings, mention of LabVIEW in the description, something else?).
  24. Interesting theory. Except that I don't think LabVIEW ever got below 35 in that list! 😀
  25. I wasn't aware that LabVIEW still uses SmartHeap. I knew it did from LabVIEW 2.5 up to around 6.0 or so, but thought you guys had dropped it when the memory managers of the underlying OS got smart enough themselves to not make a complete mess of things (that applied specifically to the Win3.1 memory management and, to a lesser extent, to the MacOS Classic one).