Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. I've seen similar things occasionally, also in at least LV2013, but nothing that I could pinpoint, and so far I think only in applications I received from others that most likely also originated in an earlier version.
  2. That means that there is still something wrong. Either one or more Call Library Nodes are still configured wrongly, or there is a bug in the flash DLL somewhere. The most likely culprit is a badly configured Call Library Node. Have you made sure that any function which returns information in a string or array buffer is called with a properly allocated buffer? If your buffer is even one byte smaller than what the function is going to write into it, it will inevitably overwrite some memory and destroy something. This often results in a 1097 error if the overwriting is serious enough, but it can also go unnoticed until you try to close LabVIEW, and when trying to clean everything up, it stumbles over the corrupted pointers. Or it can crash somewhere between the point where the overwriting happens and the closing of LabVIEW. And if the overwriting doesn't affect pointers, it may hit data that your program uses for calculations elsewhere.
  3. From what John describes in the first post, I would be surprised if his application gets even remotely close to 1GB of memory consumption. In my experience you only get above that when Vision starts getting involved. That or highly inefficient programming with large data arrays that get graphed, analysed and what else in a stacked sequence programming style.
  4. Well!!!! If you add a call to FlashErrorText() after each failed function call, you will find that it first reports an error after FlashSetupAndConnect() and then after FlashErase(), which is logical since the SetupAndConnect call had already failed. So what does this tell us? The flashaccess.dll attempts to find the file cpu.ini in the directory of the current executable. Unless there is a way to tell the DLL in the ocd file to look for this elsewhere, you may be required to put this file in the directory where your LabVIEW.exe file resides (and if you build an executable, also into the directory where your executable will be). Basically it is a bit stupid of the DLL to look for this only in the executable directory and not at least also in the DLL directory itself, but alas, such is life.
  5. See above in my edited post. And from the example project you included it seems that cdecl is indeed the right calling convention.
  6. Well, I got rid of the dynamic path in the diagram and simply pointed the Call Library Node to the DLL on my system, and then LabVIEW ends up with a broken arrow for the VI, claiming it couldn't load the DLL. Well, disregard this remark. Typical PEBCAK problem. I should have noticed that the VI got opened in LabVIEW 64-bit rather than 32-bit. I have edited the VI in a way that I think should work. It seems that LabVIEW feels the functions need to be called as cdecl. Not sure why, since the assembly code seems to hint otherwise, but whatever. I now get a return value of 1 for the Disconnect call, which sounds not too bad. Obviously, contrary to what you believe, the FlashSetupAndConnect() call has to fail on a system with no hardware to connect to! Just adapt the path generation to your conf file in a way that works for your installation. usbWiggler.vi
  7. What memory do you have in your machine? For the FPGA compiler it REALLY makes a difference if you can throw more memory at it.
  8. There is absolutely no need for all the pointer stuff you are doing. The Call Library Node is very capable of translating LabVIEW strings into C string pointers for the duration of the Call Library Node call. Your own managed pointers would only be necessary if the lifetime of the pointer needs to last beyond the call of the function itself. So get rid of all the pointer acrobatics and just use the code in the true case. The DLL doesn't load on my system, since it was compiled with Borland C and probably requires the Borland C runtime library to be installed on the computer, which I have no plans to do on my system. However, a quick look at the assembly indicates that it might be compiled to use the stdcall convention for all its functions. The header files or the MS Visual C example mentioned in the documentation would certainly help to verify which calling convention is supposed to be the right one. Also, the return value of those functions is defined to be int, which under all modern Windows versions is a 32-bit integer. Your function thinking it's an int16 certainly might miss some interesting bits that way. Look at this declaration in the documentation: an int is still a 32-bit integer, and not a 64-bit integer as you have decided to make it here for the parameters (while still using an int16 for the return value)!!!! Last but not least: if you have several files to attach, some of which can't be posted because their extension is rejected, it is quite a good idea to pack everything into a ZIP file and post that, rather than renaming files to make them appear as something they are not and having to explain how to rename them all back in order to get the right files.
  9. Depends on what instruments those were. The key here is that they are USB, and lacking any specific USB Raw setup in your diagram, they must be Virtual COMM devices, which means VISA does in fact very little itself other than talking to the Windows COMM API, which then calls into either the standard Windows Virtual COMM USB driver or a specific Agilent/Keysight virtual device driver. Which one it is I have no idea. While VISA may be part of the problem, I have seen all kinds of weird and unpleasant things happening with Virtual COMM USB drivers from various manufacturers. I have seen very little problem with parallel or any other type of VISA communication with devices other than COMM USB devices, and since VISA really just treats them as any other serial port, the problem very likely has to be sought in the USB COMM device driver — either the Windows standard driver or, most likely, a vendor-specific device driver for the instrument you are using. Basically, your instrument is pretty much the same as any of those RS-232 to USB converter dongles, and there it makes a big difference whether one uses a noname product with an unknown internal controller or one based on, for instance, the FTDI solution. While none of the standard drivers that come with the SDK for such chips is really meant to be distributed by OEMs to their clients, most (especially no-name manufacturers) do so anyhow, as you really can't hire a programmer to improve a driver when you earn basically nothing on the sale of the product, and from the ones I've seen only the FTDI driver works reliably enough not to crash under anything but very ideal conditions. Another indication for this is the fact that LabVIEW simply disappears. No crash that can be produced in user space alone is really able to terminate a process in such a way under modern Windows systems. This can only happen if a kernel driver violates some critical system integrity while being called by the process directly or indirectly.
And the only kernel component aside from normal Windows kernel handling in this setup would be the USB Virtual COMM port driver or some other part of the USB driver stack. This really only leaves two options for the cause of this crash: a buggy chipset driver for your system itself, or a buggy USB virtual COMM driver for your instruments. Both of them are completely out of the control of VISA, and even more so of LabVIEW. And while USB can potentially allow faster communication speeds than GPIB, it is even less parallel than GPIB. In USB each bit has to go through the same line, while GPIB has 8 parallel data lines. Both USB and GPIB do allow communicating with several devices quasi-parallel. And since the USB port is really just used as a virtual COMM port in these cases, the bit speed (baud rate) is typically limited to values way below what you could reach with GPIB.
  10. Access Violation is a generic error generated by the CPU itself when executing code accesses a memory address that the virtual memory manager does not recognize as being assigned to the current process. Often it is a NULL pointer that is referenced, but any badly initialized pointer can be the culprit. It simply means that something got corrupted in the application memory, but there is no way to determine how it happened from the access violation exception information alone.
  11. Yes, I installed all my LabVIEW versions onto the C: SSD. But then, I have a 500GB SSD Mini-PCI card in my notebook besides the 500GB hybrid HD. Sorry guys, but I couldn't let this pass!
  12. Actually, NTFS has supported symlink-like features since its early incarnations, but there were only a few very obscure Windows APIs that allowed creating them properly. Recent NTFS versions improved on that a bit, and Microsoft also added support for them into the shell. For all practical purposes, support for symlinks in anything earlier than Windows 7 and Server 2008 most likely isn't of any interest anyhow, since those are all unsupported OSes by now. The remark about supported OSes for a certain functionality is often misleading in MSDN anyhow. For one, the documents tend to get outdated (notice the absence of Windows 7, which is more or less simply Vista in a usable form), while on the other hand Microsoft tends to update the documentation regularly, changing the support information to mention only the latest versions, even though that API or functionality really existed much earlier. Most Windows APIs on MSDN claim to be supported since Windows 7 by now, even if they already existed in Windows 95.
  13. How do you read the characters from the Excel file? What Excel file is it? Basically, xls files use binary OLE streams for data, which store strings as OLECHAR, which is basically UTF-16. xlsx files use XML with UTF-8 encoding. But your problem is most likely that you use the ActiveX interface to Excel. Here LabVIEW's own smartness likely plays some tricks on you, since the strings provided by the Excel ActiveX interface are automagically translated into whatever your current default MBCS codepage is, as configured for your Windows account. While LabVIEW can support Unicode in its string controls with the unsupported ini file setting, it's very much possible that this support does not extend to the ActiveX interface in LabVIEW, and ActiveX, being designed as an idiot-proof interface, doesn't allow you to change that behavior.
  14. While that is generally true, it is IMHO a pain in the ass with no real advantage other than not requiring you to write a little C code and run it through a C compiler. Of course, for someone who has no C knowledge, this option is all that is available to them, short of hiring a C programmer, but it is a bad choice for a lot of reasons. First, you need to know quite a bit about C programming anyhow to be able to make this work reliably. Second, the DLL will for all practical purposes not only run in a separate application instance but many times in a separate process. When you upgrade your LabVIEW code to a new LabVIEW version, the DLL either needs to be recompiled too every time, or it will run in a separate LabVIEW runtime process that has to be the same version as the one the DLL was created in. So unless you upgrade your DLL too, you will have to remember to install both the runtime version for your DLL and the one for your application. Consider an app using more than one such callback functionality, and you easily end up having to install several LabVIEW runtime versions after some progressive development of your app. And moving platforms (e.g., Windows 32-bit to 64-bit) will most likely leave every other user of your callback solution stumped, since the LabVIEW-created DLL is somewhat unintuitive for most casual LabVIEW users (and illogical for more advanced programmers).
  15. While NI goes to pretty extreme lengths to mutate code that has changed between versions when upgrading to a newer version, the reverse is not as thorough. It possibly can be considered a bug, but Save for Previous has never guaranteed, and won't ever guarantee, that code will back-save without a broken arrow in the older version.
  16. Basically, using the MultiByteToWideChar() and WideCharToMultiByte() Windows APIs you can do every possible conversion to and from any MBCS encoding known to Windows. These functions accept as one of their parameters the codepage that the MBCS text is in. By default, one passes the CP_ACP constant there, which tells Windows to use the current user codepage, but if you know that your text is in a different codepage, you have to pass the according constant for that parameter to MultiByteToWideChar(), and you end up with a UTF-16 encoded string in the output.
  17. Why would someone post a working ini file verbatim in a PowerPoint presentation? The person creating this presentation remembered that PPFs are a restricted feature and simply changed a few characters.
  18. Never having tried to look at the Project Provider Framework at all, I can't really say for sure, but I would assume that this check is done on every load in order to verify that a PPF is valid, and so it is part of the provided PPF base framework. With flarn having admitted to having broken password protection before, it seems not so hard to guess how it all went. And yes, PPFs have the potential to wreck a LabVIEW installation completely and, even worse, modify code on the fly in a way that is very hard to detect. So this "signing" business is most likely much less about NI not wanting developers to be able to add plugins, and rather about safeguarding those customers who have VERY stringent requirements about approved software running on their systems. They are out there, and they have rules that even forbid installing OpenG VIs, since those are not from an officially approved source.
  19. The link gives an error and the main site is found suspicious by McAfee.
  20. The easiest approach is most likely to use the command line tool of whatever Git client you install. I do the same for Subversion, calling svn.exe with svn status --show-updates --verbose. Parsing the returned string is some work, but easily doable in a generic manner that will work well. I'm absolutely sure that Git works the same, and this will give you a very flexible and easy-to-build interface without any need to go .NET, etc. Most of the tools you would otherwise integrate take the command line approach in the end too.
  21. VI Server is meant to work between LabVIEW versions and platforms transparently. There shouldn't really be anything that could break. Well, there used to be properties, such as those for platform window handles, that were 32-bit only until LV 2009. They are now deprecated but still accessible, and if you happen to use them, you could run into difficulties when moving to 64-bit platforms and trying to access them, remotely or locally.
  22. Some of these objects have existed in LabVIEW for a long time and never done much more than crashing it! I assume they are cruft left over from some experiments that either were abandoned at some point or the guy who sneaked them in had suddenly left and nobody ever noticed it. The whole LabVIEW code base is huge and no single person on this world has a complete overview of everything that is inside it.
  23. No, private nodes are yellow LabVIEW nodes that are not available in the standard palette (but can be generated with your "Generate VI Object" method). The idea is that your external shared library somehow creates an object reference (usually a pointer to whatever your shared library finds suitable to manage your object), and then this object reference needs to be assigned to the user tag reference. This can be done either on the LabVIEW diagram with such a node after the call to the shared library function that creates the object reference, or inside the shared library itself by calling undocumented LabVIEW manager functions. Consequently, there are matching LabVIEW diagram nodes or manager API calls to deregister an object reference from a user tag. But again, unless you intend to start writing shared libraries (C/C++ programming) to allow access to some sort of device or other functionality, this really isn't interesting to you at all.
  24. It's a generic user tag refnum. The functionality behind it relies on information found in the resource/objmgr directory inside LabVIEW. Basically, the rc files in there can define an object hierarchy, and for each object type you can define methods, properties and events that map to specific exported functions from a shared library. Once the generic tag refnum has been selected to represent a specific object type from one of the object hierarchies, it is not generic anymore, and you cannot select other object types from other object hierarchies anymore. Also, flags in the object type inside the rc file allow specifying whether the user is allowed to even select any other object type within the object hierarchy. It's all pretty involved and complicated (a single error in the rc file usually makes it completely useless, and you can go and start guessing where the error is). To interface a shared library to a LabVIEW user tag refnum, you also need either to use some private diagram nodes to register the session handle returned from your shared library with a user tag refnum (and another one to deallocate it), or to use internal LabVIEW manager functions to do that. But unless you write drivers for some kind of interface in external shared libraries, the user tag refnum has no practical meaning for you at all. And it requires your shared library to be explicitly written to work with LabVIEW; it's not a means to interface LabVIEW to standard shared libraries at all.
  25. If it seems limited to your PC, then the most likely suspect would seem to be the network card and the according driver in that PC. It wouldn't be the first time that network card drivers do buggy things. Maybe it has to do with jumbo frame handling. Try to see if you can disable that in the driver configuration. As far as I know, cRIO doesn't support jumbo frames at all, so there shouldn't be any jumbo frames transmitted, but it could be that enabled jumbo frame handling in the driver tries to be too smart and reassembles multiple TCP packets into such frames to pass to the Windows socket interface.