
Rolf Kalbermatter

Members
  • Content Count

    3,037
  • Joined

  • Last visited

  • Days Won

    151

Posts posted by Rolf Kalbermatter

  1. I have a fair amount of experience with SVN and a smaller amount with GIT. At our company we still use SVN, not because it is super perfect but because it simply works. I have not managed to get myself into a real mess with SVN. The worst that can happen in my experience is that some operation doesn't terminate properly and you have to do a manual cleanup to be able to continue.

    Enter GIT and that changes dramatically. It's very easy to do just about anything in GIT, including mistakes. And I cannot count the number of times I have spent several hours unraveling some mess I caused by selecting the wrong GIT action for some reason. The fact that the naming of the different GIT actions is sometimes rather ambiguous and the parameters can be mind-bogglingly complex doesn't help either.

    The few times I did something wrong in TortoiseSVN I simply went into the command line, entered a few simple enough commands through svn.exe, and all was well. In TortoiseGIT it is very easy to do things wrong, and the GIT command line... well, it feels like you need a one-year study just to understand the basics of it. 😀

  2. 1 hour ago, Beuf said:

    Still valid in LV2020 on brand new HP workstation, tip of viSci works for me. Just placed a caption '.' over the control where it isn't visible and flickering is gone!

    That's because if you overlay anything in LabVIEW, LabVIEW is forced to fully redraw the entire control every single time rather than trying to use the optimized region update APIs in Windows. And the graphics driver for your video card seems to have some trouble with proper region drawing.

    Of course a full redraw will usually be considerably slower, so it isn't a perfect solution at all.

  3. 12 minutes ago, ShaunR said:

    TLS is quite burdensome for constrained devices, specially if you have to put a webserver on the device to upload rather than using OTA libraries. 

    How would you do HTTPS without TLS?

    And it depends on the use of LabVIEW. For a general in-the-field IoT application I wholeheartedly agree. Trying to build such a system in LabVIEW is reinventing the wheel with a high-end CAD tool while you could take ready-made wheels off the shelf.

    If it is, however, part of the final step during inline testing of a product, with the whole test system controlled by LabVIEW, it may be useful, although calling a Python script would still most likely be much easier and only a few milliseconds slower than a fully integrated LabVIEW solution.

    But then the specification sounds a little bit bogus. Rather than requiring that the firmware be written securely to the device, it should simply state the protocol that the device can support. Security in a (hopefully) closed in-house network really shouldn't be a concern; otherwise you have a lot more to worry about than whether the firmware is written securely to the device.

  4. On 10/14/2020 at 3:23 PM, FixedWire said:

    Client: "And we need you to securely write the firmware to the DUT using X.509"

    I'm trying to get a better understanding on how to use X.509 certificates and deciding if the new TLS functionality is the best approach. The examples for using TLS are a bit sparse...

    If someone could share their battle scars or provide suggestions it'd be appreciated.

    btw, thank you Neil Pate for posting your code!

    So is your link to the DUT over the public internet or an internal network? If the former, the client may want to reconsider whether that is the way to do firmware updates; if the latter, someone seems to be utterly paranoid.

    I don't think it is too useful to use the TLS functionality for this. It works on the TCP/IP level, and are you seriously doing firmware updates over TCP/IP? Or are you rather using HTTPS instead, which could have been done with the HTTP(S) client functionality available since about LabVIEW 2014 already?

    If you need more specific control you might have to go with something like the Encryption Compendium instead. https://lvs-tools.co.uk/software/encryption-compendium-labview-library/

  5. 4 hours ago, drjdpowell said:

    That doesn't make much sense.  Those DLLs aren't created by me; they are the standard PostgreSQL DLLs that I downloaded.  They can't have any dependencies on any LabVIEW stuff.

    It could make sense if the PostgreSQL DLLs were compiled with Microsoft Visual Studio 2010 or 2012 or similar (not sure which Visual Studio version was used for compilation of LabVIEW 2015) and set to use the dynamically linked MS C runtime library. Such a runtime is old enough not to be standard on a recent Windows 10 installation, yet not new enough to avoid being tightly coupled to a specific Microsoft Visual C runtime version. Since about Visual Studio 2015, the Visual C runtime has stayed at version 14.x and no longer requires a new runtime with each new compiler version. It's still possible that an application built with a newer Visual Studio won't work with an older runtime, but the opposite usually works without a glitch.

  6. 2 hours ago, dadreamer said:

    There are InitLVClient / InitLVClient2, UninitLVClient / UninitLVClientNoDelay, WaitLVClientReady, WaitLVShuttingDown and a whole bunch of unnamed functions (safety precautions?). It'll take a serious effort to study how those work.

    Not really safety precautions. Most C(++) compilers will by default strip all symbols from linked non-debug code, unless those symbols are needed for certain purposes like function export tables. While these functions also return a pointer to some sort of export table, they are not official export tables, just as virtual tables in C++ aren't really export tables. The names are unneeded as far as the compiler is concerned, so they all get stripped. This has certain anti-reverse-engineering benefits, as someone distributing a release version isn't usually interested in letting their users reverse engineer the software (just check the license agreement you entered into when installing LabVIEW 😀), but the main reason is really that these symbols simply blow up the executable image size for no useful reason, and stripping them is an easy thing for the linker to do. The functions with retained symbol names in there are usually functions that are also exported in the official export table.

    GetCIntVIServerFuncs() existed before LabVIEW 8.0 and was/is mostly related to functions needed by various VI Server interfaces. The first version of VI Server was a very small set of exported functions that got called by a CIN. This was then changed into the fully diagram-accessible VI Server interface as we know it now, around LabVIEW 5. Sometimes the LabVIEW compiler needs to create callbacks into the LabVIEW kernel. Initially this was done through direct calls of exported functions, but that was changed for several reasons to call a special internal export interface. Hiding things was likely more a byproduct than the main reason for this; making the interface more uniform among the different platforms was probably more important.

    The C Interface supposedly took this idea further, and here hiding LabVIEW internals might have been part of the decision. But because this privately exported function table is highly inflexible and can only be appended to in subsequent LabVIEW versions, never modified without creating serious trouble with version compatibility, I think it's not something they would want to use for everything. The advantage is that you do not really need dynamic-link functionality for this except for the one function that returns the interface, so there is one simple function that uses platform-specific dynamic loading and everything else is simply a function pointer call into a table at a specific offset, as the sketch below illustrates.
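
    To make that pattern concrete, here is a purely illustrative C sketch of such a dispatch-through-a-table approach; none of these names exist in LabVIEW, they are just stand-ins for the idea of resolving one entry point dynamically and calling everything else through fixed offsets.

    #include <stdio.h>
    
    /* Illustrative function table: entries may only ever be appended in later
       versions, never reordered or removed, or callers built against an older
       table layout would call the wrong function. */
    typedef struct
    {
        int (*GetVersion)(void);
        int (*Add)(int a, int b);
    } ExampleFuncTable;
    
    static int ExampleGetVersion(void) { return 1; }
    static int ExampleAdd(int a, int b) { return a + b; }
    
    /* In the real case this would be the single dynamically resolved export;
       here it is just a local function returning a static table. */
    static const ExampleFuncTable *GetExampleFuncTable(void)
    {
        static const ExampleFuncTable table = { ExampleGetVersion, ExampleAdd };
        return &table;
    }
    
    int main(void)
    {
        const ExampleFuncTable *tbl = GetExampleFuncTable();  /* the one "dynamic" lookup */
        /* every further call is just an indirect call through a table offset */
        printf("table version %d, 2 + 3 = %d\n", tbl->GetVersion(), tbl->Add(2, 3));
        return 0;
    }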

    A COM-like interface would be much more flexible in terms of version compatibility, and NI uses that in some parts even across platforms even though real COM isn't supported on non-Windows platforms, but it is also a lot more complicated to create and use, even when you program everything in C++.

  7. 55 minutes ago, dadreamer said:

    Nice catch, Rolf! It works and I am able to pass the input/output parameters now.

    (Attachments: 2020-10-05_13-00-41.jpg, RunVI.vi, SubVI.vi)

    But it appears that CallInstrument wants to be ran in UI Thread only, else LabVIEW goes big crash. It calls the VI synchronously, waiting until it finishes. That makes me think that this function is not the best idea to use as a callback, because when the user will be interacting with some GUI elements or the program will be running some property/invoke nodes, the callback VI will be waiting UI Thread to become idle, thus we could experience the delay between the events from our callback (or even loss of earlier ones?). It would be much better to run the VI in any thread somehow, but CallInstrument doesn't support that. That's why I decided not to adapt the asm samples for that function for now. Maybe I would be lucky enough to find some other options or overcome the threading issues somehow. Or end on PostLVUserEvent until some better ideas come to mind. 🙂

    I'm aware of the limitation of this function. It not only wants to run in the UI thread but also doesn't allow you to specify the context it should run under, so it will likely always use the main application context (which could be fine really, as each project has its own context). And the missing context is logical, since this function existed before LabVIEW 8.0, which introduced (at least publicly) application contexts.

    I think the mentioned C interface has to do with the function GetCInterfaceFunctionTable(), exported since around LabVIEW 8.0, but that is a big black hole. From what I could see, it returns a pointer to a function pointer table containing all kinds of functions. Some of them used to be exported as external functions in LabVIEW too. But without a header file declaring this structure and the actual functions it exposes, it is totally hopeless to think one could use it. As most of these functions aren't really exported from LabVIEW in other ways, they also didn't retain their function names, unlike the functions in the official export table. But even with a name it would be very tedious to find out what parameters a function takes and how to call it, especially if it needs to be called in conjunction with other functions in there.

  8. On 10/2/2020 at 8:28 PM, Ansible said:

    I think I have nailed converting every other curl command to LV native.  This one has me stumped and it should be real easy. 

    curl --location -H "x-file-format: <FILE>" -H "x-campaign-name: TEST_CAMPAIGN" -H "x-session-name: TEST_SESSION" -H "x-description: DESCRIPTION" -H "x-tags: cc13,test" -H "x-notify-email: user@company.com" --form file=@/data/file.file http://URL

    Also this would help on a little script that I am hoping to release at some point. Given the number of curl examples on the internet made a parser that will generate a vi from a curl example. 

     

    Sorry, I'm not a curl expert. Could you explain a bit what all these parameters mean and which of them cause problems?

  9. I think CallInstrument() is more promising, although the documentation I found seems to indicate that it is an old function that is superseded by something called the C Interface to LabVIEW. But I haven't found any information about that new interface.

    /* Legacy C function access to call a VI. Newer code should consider upgrading to use the C Interface to LabVIEW. */
    
    /*  Flags to influence window behavior when calling a VI synchronously via CallInstrument* functions.
    
    	The following flags offer a refinement to how the former 'modal' input to
    	CallInstrument* functions works. For compatibility, a value of TRUE still
    	maps to a standard modal VI. Injecting the kCI_AppModalWindow flag will allow
    	the VI to stack above any Dlg*-based (C-based LV) windows that may be open as well.
    	Use kCI_AppModalWindow with caution! Dlg*-based dialogs run at root loop, and VIs
    	that run as app modal windows might be subject to deadlocking the UI. */
    const int32 kCI_DefaultWindow	= 0L;		///< in CallInstrument*, display VI's window using VI's default window styles
    const int32 kCI_ModalWindow		= 1L<<0;	///< in CallInstrument*, display VI's window as modal
    const int32 kCI_AppModalWindow	= 1L<<1;	///< in CallInstrument*, display VI's window as 'application modal'
    
    /* Legacy C function access to call a VI. Newer code should consider upgrading to use the C Interface to LabVIEW. */
    /*
    	@param viPath fully qualified path to the VI
    	@param windowFlags flags that influence how the VIs window will be shown
    	@param nInputs number of input parameters to send to the VI
    	@param nOutputs number of output parameters to read from the VI
    	@return error code describing whether the VI call succeeded
    
    	The actual parameters follow nOutputs and are specified by a combination of
    	parameter Name (PStr), type (int16*) and data pointer.
    
    	@example
    		CallInstrument(vi, kCI_ModalWindow, 2, 1,
    		               "\07Param 1", int16_Type_descriptor_in1, &p1,
    		               "\07Param 2", int16_Type_descriptor_in2, &p2,
    		               "\06Result", int16_Type_descriptor_res, &res);
    
    
    	@note This function does not allow the caller to specify a LabVIEW context in which to load
    	      and run the VI. Use a newer API (such as the C Interface to LV) to do so.
    
    	@note Valid values for windowFlags are:
    	        kCI_DefaultWindow  (0L)
    	        kCI_ModalWindow    (1L<<0)
    	        kCI_AppModalWindow (1L<<1)
    */
    TH_SINGLE_UI EXTERNC MgErr _FUNCC CallInstrument(Path viPath, int32 windowFlags, int32 nInputs, int32 nOutputs, ...);
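
    As a purely hypothetical sketch of how that export could be resolved at runtime, the following C fragment assumes the prototype from the header excerpt above and that the code runs inside the LabVIEW development environment process (for the run-time engine the export would live in lvrt.dll instead); none of this is verified against a real LabVIEW installation.

    #include <windows.h>
    #include <stdint.h>
    #include <stdio.h>
    
    typedef int32_t MgErr;
    typedef struct PathRef **Path;   /* opaque stand-in; the real type is in extcode.h */
    
    /* Prototype taken from the header excerpt above (calling convention assumed cdecl). */
    typedef MgErr (*CallInstrumentFn)(Path viPath, int32_t windowFlags,
                                      int32_t nInputs, int32_t nOutputs, ...);
    
    int main(void)
    {
        /* As a standalone test program this will simply report "not found"; inside
           the LabVIEW process (e.g. a DLL called via a Call Library Node) the module
           handle of the running executable is LabVIEW.exe and the lookup can succeed. */
        HMODULE lv = GetModuleHandleA(NULL);
        CallInstrumentFn callVI = (CallInstrumentFn)GetProcAddress(lv, "CallInstrument");
        printf("CallInstrument %s\n", callVI ? "resolved" : "not found");
        /* Building the Path argument and the int16 type descriptors for the
           variadic part requires additional LabVIEW manager calls, omitted here. */
        return 0;
    }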

     

  10. On 10/1/2020 at 7:40 PM, flarn2006 said:

    Hedge, is that you? 😉

    EDIT: If you don't get it, just pretend I'm insane. 


    Have you tried passing it a preallocated string?

    preAllocateStringsUIEnabled=True
    preAllocateUIEnabled=True
    preAllocateEnabled=True

    Right-click string control/constant, Set String Size

    They might actually simply be leftovers from the LabVIEW developers from before the Call Library Node gained the Minimum Size control for string and array parameters in LabVIEW 8.2. The old dialog before that did not have any option for this, so they might just have hacked it into the right-click menu for testing purposes, as that did not require a configuration dialog redesign, which might have been the main reason this feature wasn't released in 8.0 (or maybe even earlier).

    There are many ini file settings that are basically just enablers for some obscure method to control a feature while it is still under development; once that feature is released, the ini key either does nothing or enables an option somewhere in the UI that is pretty much pointless by then.

  11. On 9/26/2020 at 12:43 AM, Ryan Vallieu said:

    This has since changed.

    I am now compiling my LabVIEW code into .SO library and calling that from C as apparently I can only get xinetd to launch one instance of LabVIEW, the system must be set to run-at start-up, etc. etc.

    I can call any number of LabVIEW VIs from .SO through a C call and have them happily chugging away in their own app spaces.

    What I am missing still is how to get the STDIN/STDOUT through to the LabVIEW program.  I suspect if I was better at C this would be easy (easier?). Just trying out a simple demo at first so the LabVIEW code doesn't need to be a full-blown architecture.  I just need to get that damn "pipe" or File Descriptor.

    xinetd - C - LabVIEW Demo.png

    A bit of a wild guess, but there is a function

    MgErr FNewRefNum(Path path, File fd, LVRefNum *refNumPtr)

    exported by the LabVIEW kernel, which takes a Path (which could be an empty path, as the File I/O functions don't really use it themselves) and a File, and returns a file refnum that you can then use with the standard File I/O functions.

    Now, File is a LabVIEW private datatype, but under Windows it is really simply a HANDLE, and under Linux and MacOSX 64-bit it is a FILE*. So if you can manage to map your stdio fd to a FILE* using a libc function like

    FILE *file = fdopen(fd, "w");

    you might just be lucky enough to turn your stdio file descriptor into a LabVIEW refnum that the normal LabVIEW Read File and Write File nodes can use.

    Also note that libc actually exports stdin, stdout and stderr as predefined FILE* handles for the specific standard I/O file descriptors, so you may not even have to do the fdopen() call above.

    After you are done with it you should most likely not just call LabVIEW's Close File on the refnum, as it assumes the contained file descriptor is a real FILE* and simply calls fclose() on it. Maybe that is ok depending on how you mapped the file descriptor to the FILE*, but otherwise just use FDisposeRefNum(LVRefNum refnum) on the refnum and do whatever you need to do to undo the file descriptor mapping.
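
    Put into C, the wild guess above could look roughly like the sketch below; it assumes the quoted FNewRefNum prototype, the standard extcode.h types and path manager functions, and that 'File' really is a FILE* on Linux, none of which is verified here.

    #include <stdio.h>
    #include "extcode.h"   /* LabVIEW manager API: MgErr, Path, File, LVRefNum, ... */
    
    /* Exported by the LabVIEW kernel; prototype as quoted above. */
    extern MgErr FNewRefNum(Path path, File fd, LVRefNum *refNumPtr);
    
    /* Try to wrap stdout into a LabVIEW file refnum that the normal
       Read File / Write File nodes can use. */
    MgErr WrapStdoutAsRefnum(LVRefNum *refNumPtr)
    {
        Path empty = FEmptyPath(NULL);   /* the File I/O functions don't use the path itself */
        MgErr err = FNewRefNum(empty, (File)stdout, refNumPtr);
        FDisposePath(empty);             /* assumption: the refnum copies or ignores the path */
        return err;
    }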

     

  12. 14 hours ago, ShaunR said:

    hmmmm. ;)

    No not really. I mean something quite different.

    Given a VI, create a sort of function wrapper around it that works as a C function pointer. For that we would need something like

    MgErr CallVIFunc(VIDSRef viRef, int32 numInParams,  VIParams *inParams, int32 numOutParams, VIParams *outParams);

    with both parameters something like an array of

    typedef struct
    {
      LStrHandle controlName;
      int16 *typedesc;
      void *data;
    } VIParams;

    That way one could build a C function wrapper in assembly code that converts its C parameters into LabVIEW parameters and then calls the VI as a function.

    These are not actual functions that exist but just something I came up with. I'm sure something similar actually exists!

  13. 11 hours ago, Taylorh140 said:

    No this is definitely cool. It’s cool to know that there is a way to keep the data in the same scope. Now all you need is a (c compiler/assembler) written in labview along with prebuild actions and the world would be your oyster. 

    That's a bit ambitious! 😀

    I would rather think of something in the sense of the Python ctypes package, to allow arbitrary function calls to DLLs including callbacks and such. We just need to find a method that does the opposite for this: calling a VI as a C function pointer. 😀

  14. On 9/21/2020 at 1:11 PM, Suquen said:

    Hi Jim Kring,

    Would you post the solution here? cos, the link seems expired!!!

    Thanks.

    Are you using LabVIEW 7.1????? If you use a newer version this should not fix any problem, as this specific problem was actually fixed in LabVIEW 7.1.1. The problem was that the LabVIEW enumeration of directory entries assumed that the first two returned entries were the . and .. entries. Since LabVIEW 7.1.1 the Linux version doesn't make that assumption (the Windows version still did, at least until recently, and that can cause different problems when accessing a Samba-mounted directory).

  15. On 9/21/2020 at 4:02 AM, flarn2006 said:

    Buried inside LabVIEW's resource files are several resources with the type "TMPL". They contain information that looks like it could be incredibly helpful in figuring out the structure of many of LabVIEW's internal resources. They're in a binary format, but it's quite trivial to parse, so I quickly put together a tool for loading and viewing them.

    Template Viewer.zip 94.15 kB · 7 downloads

    For more information, see this page, which appears to describe the same format: https://www.mathemaesthetics.com/ResTemplates.html (Change the URL from https to http; the forum won't let me add http links for some reason.)

    Well, resources are really a MacOS Classic thing. LabVIEW simply inherited them and implemented its own resource manager so it could use them on non-Macintosh systems too. So that explains why the ResTemplate page seems to nicely describe the LabVIEW resources. It doesn't really; it only describes the Macintosh resources, many of which are used in LabVIEW too.

    As to the resource templates, I looked at them way back in LabVIEW 5 or 6, and my conclusion was that most of them described the same types that a Macintosh resource file would describe too, but NI mostly left out the LabVIEW-specific types. And it's not surprising: nobody was ever looking at them, so why bother? 😀

  16. Trying to get at the data pointer of control objects, while maybe possible, wouldn't be very helpful, since the actual layout, very much like that of the VI dataspace pointer, has changed and will keep changing frequently between LabVIEW versions. Nothing in LabVIEW is supposed to interface with these data spaces directly other than the actual LabVIEW runtime, and therefore there never has been, nor is there nowadays, any attempt to keep those data structures consistent across versions. If it suits the actual implementation, the structures can simply be reordered, and all the code that interfaces with external entities, including saving and loading those heaps, translates automatically to and from a standardized format (which includes changing multibyte data elements to Big Endian format).

    The MagicCookieJars used to be simply global variables but got moved into the Application Context data space with LabVIEW 8.0. I'm not aware of any function to access those CookieJar pointers. Such functions did not exist prior to LabVIEW 8, as any code referencing a specific CookieJar accessed it directly through its global address, and I suppose there isn't any public interface to access any of them, since the only code supposedly accessing them sits inside the LabVIEW runtime. The only external functions accessing such refnums either use well-known, undocumented manager APIs to access objects (the IMAQ Vision control) or use UserData refnums based on the object manager (which has nothing to do with LabVIEW classes but rather with refnums) that reference their cookie jar indirectly through the object class name.

    MCGetCookieInfo() requires a cookie jar and the actual refnum, and returns the associated data space for that refnum. What that data space means can be very different for different refnums. For some it's simply a pointer to a more complex data structure that is allocated and deallocated by whatever code implements the actual refnum-related functionality. For others it is the data structure itself. What it means is defined when creating the cookie jar, as the actual function to do so takes a parameter that specifies how many bytes each refnum needs for its own data storage. Interfaces managing their own data structures simply use sizeof(MyDataStructPtr), or more generally sizeof(void*), for this parameter; interfaces that use the MagicCookie store for their entire refnum-related data structure rather use sizeof(MyDataStruct) here.

    These interfaces all assume that the code that creates the CookieJar and uses those refnums is all the same code, and there is no general need to let other code peek into this, so there is no public way to access the CookieJar. In fact, if you write your own library managing your own refnums, you would need to store that cookie jar somewhere in your own code. That is, unless you use object manager refnums, in which case things get even more complicated.

  17. 56 minutes ago, flarn2006 said:

    Ah right, I forgot about that. Though I swear I remember using a trial version on my Mac at some point back when I used one.

    Well, they do have (or at least had) an Evaluation version, but that is a specially compiled version with a watermark and/or limited functionality.

     

    Quote

    I wonder though, do they really need anything elaborate for a license manager? I doubt it would be difficult to put something together from scratch. I guess it wouldn't necessarily be worth the effort for a free product though; hell, I wasn't expecting them to ever give away LabVIEW for noncommercial hobbyist use at all, as much as I hoped they would.

    The license manager included in the executable is only a small part of the work. The Windows version uses the FlexLM license manager, but the important thing is the binding to their license server(s). Just hacking a small license manager into the executable that does some verification is not that big a part. Tying it into the existing license server infrastructure, however, is a major effort, and setting up a different license server infrastructure is most likely even more work. That is where the main effort is located. I have a license manager of my own that I have included in a compiled library (the shared library part, not the LabVIEW interface itself), and while it was some work to develop and make it work on all LabVIEW platforms, that pales in comparison to what would be needed to build an online license server, and adding a real e-commerce interface to it would be even more work.

  18. 14 hours ago, flarn2006 said:

    It seems to think that LabVIEW 2020 isn't the latest version, saying an SSP subscription is required to download it. My guess is there's a bug where it thinks "2020 Patch" is the latest, even though it requires "2020" in order to install.

    Still though, it doesn't have Community Edition. But it does make me wonder about something: IIRC the only reason they don't have it available for Linux is because of technical issues with the license manager, rather than any desire to force people to use Windows. So if I were to download a different edition for Linux and crack it to work without a license, while I'm aware that would be against the EULA, would I really be violating the spirit of the EULA as long as I don't use it for anything I wouldn't be allowed to use Community Edition for? It would only be as a necessary part of a workaround for something they'd presumably allow if it were technically feasible, so maybe NI wouldn't care. That's how I see it anyway. Don't take that as legal advice though!

    LabVIEW on non-Windows platforms has no license manager built in. This means that if you could download the full installer just like that, there would be no way for NI to enforce that anyone using it has a valid license. So only the patches are downloadable without a valid SSP subscription, since they are only incremental installers that add to an existing full install, usually replacing some files.

    That's supposedly also the main reason holding back a release of the Community editions on non-Windows platforms.

    I made LabVIEW run on Wine way back with LabVIEW 5.0 or so, also providing some patches to the Wine project along the way. It was a rough ride and far from perfect even with the Wine patches applied, but it sort of worked. Current Wine is a lot better, but so are the requirements of current LabVIEW in terms of the Win32 API that it exercises. That the NI Package Manager won't work is not surprising; it is most likely HEAVILY relying on .Net functionality and definitely not developed towards the .Net Core specification but rather the full .Net release. I doubt you can get it to work with .Net prior to at least 4.6.2.

  19. 21 hours ago, LavaBot said:

    Why not use the LabVIEW's native Application Directory VI?  This automatically takes into account if you are in development mode or runtime (exe). 

    Generally speaking this is fine for configuration or even data files that the installer puts there for reading at runtime. However, you should not do that for configuration or data files that your application intends to write to at runtime. If you install your application in the default location (<Program Files>\<your application directory>), you do not have write access to this folder by default, since Windows considers it a very bad thing for anyone but installers to write to that location. When an application tries to write there, Windows will redirect the write to a user-specific shadow copy, so when you then go and check the folder in Explorer you may wonder why you only see the old data from before the write. This is because, on reading in File Explorer, Windows creates a view of the folder with the original files where they exist and shows the shadow-copy versions only for those files that didn't exist to begin with. Also, the shadow copy is stored in the user-specific profile, so if you log in as a different user your application will suddenly see the old settings.

    Your writable files are supposed to be either in a subdirectory of the current user's or the common Documents folder (if a user is supposed to access those files in other ways, such as data files generated by your application), or in a subdirectory inside the current user or common <AppSettings> directory (for configuration files that you would rather not have your user tamper with by accident). They are still accessible there, but kind of hidden, in the by-default invisible <AppSettings> directory.

    The difference between the current user and the common location needs to be taken into account depending on whether the data written to the files is meant to be accessible only to the current user or to any user on that computer.
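
    For reference, the standard Windows locations discussed here can be queried programmatically; the short C sketch below uses the Win32 SHGetFolderPathA call (link against shell32.lib) purely to illustrate which folders are meant, not as something a LabVIEW application would normally need to do itself.

    #include <windows.h>
    #include <shlobj.h>    /* SHGetFolderPathA, CSIDL_* constants */
    #include <stdio.h>
    
    /* Print the per-user and common writable locations discussed above.  Which
       one is appropriate depends on whether the user is meant to see the files
       (Documents) or not (application data / "AppSettings"). */
    static void PrintFolder(int csidl, const char *label)
    {
        char path[MAX_PATH];
        if (SUCCEEDED(SHGetFolderPathA(NULL, csidl, NULL, SHGFP_TYPE_CURRENT, path)))
            printf("%-24s %s\n", label, path);
    }
    
    int main(void)
    {
        PrintFolder(CSIDL_PERSONAL,         "current user documents");
        PrintFolder(CSIDL_COMMON_DOCUMENTS, "common documents");
        PrintFolder(CSIDL_APPDATA,          "per-user settings");
        PrintFolder(CSIDL_COMMON_APPDATA,   "all-users settings");
        return 0;
    }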

  20. On 7/23/2020 at 8:50 PM, Neil Pate said:

    I am about to reimplement my System Status actor from scratch. This time though though I am staying far away from the RT Get CPU Load and am going to try and read it using the linux command line (maybe "top" or similar). Urgh...

    Actually, I'm using the System Configuration API instead. Aside from nisysconfig.lvlib:Initialize Session.vi, nisysconfig.lvlib:Create Filter.vi and nisysconfig.lvlib:Find Hardware.vi, everything is directly accessed using property nodes from the SysConfig API shared library driver, and there is very little on the LabVIEW level that can be wrongly linked to.

  21. 5 hours ago, ned said:

    Thanks for the notes! String to byte array wasn't an option because I needed to use a 32-bit wide FIFO to get sufficiently fast transfers (my testing indicated that DMA transfers were roughly constant in elements/second regardless of the element size, so using a byte array would have cut throughput by 75%). I posted about this at the time https://forums.ni.com/t5/LabVIEW/optimize-transfer-from-TCP-Read-to-DMA-Write-on-sbRIO/td-p/2622479 but 7 years (and 3 job transfers) later I'm no longer in a position to experiment with it. I like the idea of implementing type cast without a copy as a learning experience; I think the C version would be straightforward and pure LabVIEW (with calls to memory manager functions) would be an interesting challenge.

    I haven't benchmarked the FIFO transfer with respect to element size, but I know for a fact that the current FIFO DMA implementation from NI-RIO does pack data to 64-bit boundaries. This made me change the previous implementation in my project from transferring 12:12-bit FXP signed integer data to 16-bit signed integers, since 4 12-bit samples are internally transferred as 64 bits over DMA anyhow, just as 4 16-bit samples are. (In fact I'm currently always packing two 16-bit integers into a 32-bit unsigned integer for the purpose of the FIFO transfer, not because of performance but because of the implementation in the FPGA, which makes it faster to always grab two 16-bit memory locations at once and push them into the FIFO. Otherwise the memory read loop would take twice as much time (or require a higher loop speed) to be able to keep up with the data acquisition. 64 12-bit ADC samples at 75 kHz add up to quite some data that needs to be pushed into the FIFO.)

    I might consider pushing this up to 64-bit FIFO elements just to see if it makes a performance difference, but the main problem I have is not the FIFO but rather getting the data pushed onto the TCP/IP network in the RT application. Calling libc:send() directly to push the data into the network socket stack rather than going through TCP Write seems to have more effect.

  22. 46 minutes ago, Neil Pate said:

    @Rolf Kalbermatter it was a few years ago now, but if I recall correctly it was a known issue that requesting a fixed number of elements from a DMA buffer caused the CPU to poll unnecessarily fast while it was waiting for those elements to arrive. I will see if I can find the KB.

    https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000P9SASA0&l=en-US

    That is specifically for RT but I have definitely seen this on Windows and FPGA also.

    Ahhh, I see, blocking when you request more data than is currently available. Well, I would in fact not expect the Acquire Read Region to perform much differently in that respect. I solved this in my last project a little differently, though. Rather than calling FIFO Read with 0 samples to read, I used the <remaining samples> from the previous loop iteration to calculate an estimate of the number of samples to request, similar to this formula: <previous remaining samples> + (<current sample rate> * <measured loop interval>). Works flawlessly and saves a call to Read FIFO with 0 samples to read (which I do not expect to take any measurable execution time, but still). I need to do this since the sampling rate is in fact externally determined through a quadrature encoder, so it can dynamically change over a pretty large range.
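
    Written out as plain C, that estimation amounts to the small sketch below; the names and the truncation to whole samples are my own choices, not taken from the actual project code.

    #include <stdint.h>
    
    /* Estimate how many samples to request from the DMA FIFO this iteration:
       whatever was left over last time plus what should have arrived since,
       based on the current (externally driven) sample rate. */
    static uint32_t EstimateSamplesToRead(uint32_t prevRemaining,
                                          double sampleRateHz,
                                          double loopIntervalSec)
    {
        /* <previous remaining samples> + (<current sample rate> * <measured loop interval>) */
        double estimate = (double)prevRemaining + sampleRateHz * loopIntervalSec;
        return (uint32_t)estimate;   /* truncate: never request fractional samples */
    }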

    But unless you can do all the data-intensive work inside the IPE, as in the example you show, the Acquire FIFO Read Region offers no advantage in terms of execution speed over a normal FIFO Read.

  23. On 8/7/2020 at 12:52 AM, ned said:

    I really wanted to use that function a few years back, but it wasn't available on the cRIO I was using. In case it's helpful, here's the situation in which I'd hoped to use it:
    We were using a cRIO to drive an inkjet print head. The host system downloaded a bitmap to the cRIO, which the cRIO then sent to the FPGA over a DMA FIFO. I used a huge host-side buffer, large enough to store the entire bitmap; the FPGA read data from that FIFO as needed. I benchmarked this and it required 3 copies of the entire bitmap, which could be several megabytes: one copy when initially downloaded; one copy for the conversion from string (from TCP Read) to numeric array (for the FIFO Write); and one copy in the FIFO buffer. These memory copies were one of the limiting factors in the speed of the overall system (the other was how fast we could load data to the print head). If I had been able to use "Acquire Write Region" I could have saved one copy, because the typecast from string to numeric array could have written directly to the FIFO buffer. If there were some way to do the string to numeric array conversion in-place maybe I could have avoided that copy too.

    Actually there is another option to avoid the Typecast copy. Typecast is in fact not just a type reinterpretation like in C but also does byte swapping on all Little Endian platforms, which are currently all but the old VxWorks-based cRIO platforms, since those use a PowerPC CPU which by default operates in Big Endian mode (this CPU can support both Endian modes but is typically always used in Big Endian mode).

    If all you need is a byte array, then use the String to Byte Array node instead. This is more like a C typecast, as the data type in at least Classic LabVIEW doesn't change at all (somewhat sloppily stated: a string is simply a byte array with a different wire color 😀). If you need a typecast sort of thing because your numeric array is something other than a byte array, but you don't want endianizing, you could, with a bit of low-level byte shuffling (preferably in C, but with enough persistence it could even be done on the LabVIEW diagram, although not 100% safely), write a small function that swaps out the two handles with an additional correction of the numElms value in the array, and do this as a virtually zero-cost operation, as the sketch below tries to illustrate.
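
    A rough C sketch of that handle swap idea, assuming the standard {int32 count; data[]} handle layout from extcode.h; the function name, the U32 target type and the error handling are my own assumptions, and the whole thing is meant as an illustration of the idea rather than production code.

    #include "extcode.h"   /* LabVIEW manager API: LStrHandle, MgErr, DSDisposeHandle, ... */
    
    /* A U32 array handle has the same {int32 count; data[]} layout as a string
       handle, only the count is in elements instead of bytes. */
    typedef struct { int32 cnt; uInt32 elm[1]; } U32Array, **U32ArrayHdl;
    
    /* Reinterpret a string handle as a U32 array handle without copying the data:
       hand the handle over to the array output and fix up the element count. */
    MgErr StringToU32ArrayInplace(LStrHandle *str, U32ArrayHdl *arr)
    {
        int32 byteCount;
        if (!str || !*str || !arr)
            return mgArgErr;
        byteCount = LStrLen(**str);
        if (byteCount % (int32)sizeof(uInt32))
            return mgArgErr;                      /* would truncate the last element */
        if (*arr)
            DSDisposeHandle((UHandle)*arr);       /* drop whatever array the caller passed in */
        *arr = (U32ArrayHdl)*str;
        (**arr)->cnt = byteCount / (int32)sizeof(uInt32);
        *str = NULL;                              /* the caller's string becomes a valid empty string */
        return mgNoErr;
    }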

    I'm not sure the Acquire Write Region would save you as much as you hope for here. The returned DVR still requires your LabVIEW data array to be copied into the DMA buffer, and there is also some overhead from protecting the DVR access against the DMA routine, which will attempt to read the data. Getting rid of the inherent copy in the Typecast function is probably more performant.

    On 7/26/2020 at 12:00 AM, Neil Pate said:

    Can anyone shed some light for me on the best practices for the FIFO Acquire Read Region technique? I have never used this before, I always have just done the usual trick of reading zero elements to get the size of the buffer and then reading if there are enough elements for my liking. To my knowledge this was a good technique and I have used it quite a few times with no actual worries (including in some VST code with a ridiculous data rate).

    This screenshot is taken from here.

    Is this code really more efficient? Does the Read Region block with high CPU usage like the Read FIFO method does? (I don't want that)

    Has anyone used this "new" technique successfully?

    Why would the Read FIFO method block with high CPU usage? I'm not sure what you are referring to here. Sure, it needs to allocate an array of the requested size and then copy the data from the DMA buffer into this array, and that of course takes CPU, but if you don't request more data than there currently is in the DMA buffer it does not "block", it simply has to do some considerable work. Depending on what you then do with the data, you may not save anything by using the Acquire Region variant. That variant is only useful if you can do the entire operation on the data inside the IPE in which you access the actual data. If you only use the IPE to read the data and then pass it outside of the IPE as a normal LabVIEW array, there is absolutely nothing to be gained by using the Acquire Read Region variant. In the case of Read FIFO, the array is generated (and copied into) in the Read FIFO node; in the Acquire Read Region version it is generated (and copied into) as soon as the wire crosses the IPE border. It's pretty much the same effort, and there is really nothing LabVIEW could do to avoid that. The DVR data is only accessible without creating a full data copy inside the IPE.

    I did a project recently where I used the Acquire Read Region but found that it had no real advantage over the normal FIFO Read, since all I did with the data was in fact pass it on to TCP Write. As soon as the data needs to be sent to TCP Write, the data buffer has to be allocated as a real LabVIEW handle anyhow, and then it doesn't really matter whether that happens inside the FIFO Read or inside the IPE accessing the DVR from the FIFO Region.

    My loop timing was heavily dominated by the TCP Write anyhow. As long as I only read the data from the FIFO, my loop could run consistently at 10.7 MB/s with a steady 50 ms interval and very little jitter. As soon as I added the TCP Write, the loop timing jumped to 150 ms and steadily increased until the FIFO was overflowing. My tests showed that I could go up to 8 MB/s with a loop interval of around 150 ms ± 50 ms jitter without the loop starting to run off. This was also caused by the fact that the Ethernet port was really only operating at 100 Mb/s, due to the switch I was connected to not supporting 1 Gb/s. The maximum theoretical throughput at 100 Mb/s is only 12.5 MB/s, and the realistic throughput is usually around 60% of that. But even with a 1 Gb/s switch, the overhead of TCP Write would dominate the loop by far, making other differences, including the use of an optimized Typecast without Endian normalization compared to the normal LabVIEW Typecast which does Endian normalization, disappear into unmeasurable noise.

    And it's nitpicking really, and likely only costs a few ns of extra execution time, but the calculation of the number of scans inside the loop, used to resize the array to a number of scans and number of channels, should all be done in integer space anyhow, using Quotient & Remainder. There is not much use in double precision values for something that inherently should be an integer number. There is even a potential for a wrong number of scans in the 2D array, since the To I32 conversion does standard rounding, so it could end up one more than there are full scans in the read data.
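
    As a trivial illustration of that last point, done in integer space the calculation cannot round up; the helper below mirrors what Quotient & Remainder would do on the diagram (the names are mine).

    #include <stdint.h>
    
    /* Number of complete scans in a block of interleaved samples; truncating
       integer division can never report one scan more than was actually read. */
    static int32_t CompleteScans(int32_t elementsRead, int32_t numChannels)
    {
        return (numChannels > 0) ? elementsRead / numChannels : 0;
    }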

  24. On 8/10/2020 at 1:00 AM, emcware said:

    After living with this for 5 years, I have finally figured out the solution for the yellow plots. The root-cause of the yellow plots appears to be that the standard 256 color palette contained in a cluster coming out of an invoke node (FP.Get Image) become corrupted by the application builder on Mac OS X.   The colors when running source code vs. when build into an application are very different.  My workaround is to bundle the 256 colors directly in the source code.    Of course,  I am using 8-bit color.  I would assume that if a higher bit-depth is needed, then a different set of reference colors would need to be bundled but I have not investigated that.  

    I also suspect the yellow icons are similarly corrupted but I don't know how or if I could solve that.  But at least now I can save my plots using LabVIEW 2020.  I hope this helps someone else that has been frustrated with this.  A screenshot of the work-around is below.  I also have attached a screenshot showing the yellow icon.

    The application builder internally uses basically a similar method to retrieve the icons as an image data cluster and then save them to the exe file resources. So whatever corruption happens in the built executable is likely the root cause for both the yellow graph image and the yellow icon. And it's likely dependent on something like the graphics card or its driver too, or something similar, as otherwise it would have been found and fixed long ago (if it happened on all machines).
