Everything posted by Rolf Kalbermatter
-
Is it? Then there would indeed be a discrepancy between when the front panel update is executed and when the debug mechanism considers the data to finally go through the wire. Strictly speaking that could be considered a bug, though one on the fringes of "who cares". I guess after working 25+ years in LabVIEW, such minor issues have long ago ceased to even bother me. My motto with such things is usually "get over it and live with it; anything else is bound to give you ulcers and high blood pressure for nothing".
-
Reset Low level TCP connection on LV2018
Rolf Kalbermatter replied to Bobillier's topic in LabVIEW General
Ahhh I see, that one had no string input at that point however. But now it's important to know on which platform this executes!! I don't think this VI is a good method to use when implementing a protocol driver, given LabVIEW's multi-platform nature. The appended EOL will depend on the platform this code runs on, while the device you are talking to most likely does not care whether it is contacted by a program running on Windows, Mac or Linux but simply expects one specific EOL. Any other EOL is bound to cause difficulties! -
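The point above about writing the EOL explicitly, instead of relying on the platform's native line ending, can be sketched in C (the language such protocol wrappers usually end up in). The command name and the choice of "\r\n" are hypothetical; use whatever the device manual specifies:

```c
#include <stdio.h>

/* Build a device command with an explicit EOL. Because "\r\n" is written
   literally, the result is byte-identical on Windows, macOS and Linux,
   unlike code that appends the platform's native line ending. */
static int build_command(char *buf, size_t len, const char *cmd)
{
    return snprintf(buf, len, "%s\r\n", cmd);
}
```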
internal warning 0x occurred in MemoryManager.cpp
Rolf Kalbermatter replied to X___'s topic in LabVIEW General
You either have a considerably corrupted LabVIEW installation, or you are executing some external code (through a Call Library Node, or possibly even as an ActiveX or .Net component) that is not behaving properly and corrupts memory! At least for the Call Library Node, an incorrect configuration of the node itself could also cause this. Good luck hunting for this. The best way, if you have no suspicion about possible corrupting candidates, is to "divide and conquer". Start by excluding part of your application from executing, then run it for some time and see if those errors disappear. Once you have found a code part that seems to be the culprit, go and disable smaller parts of that code and test again. It ain't easy, but you have to start somewhere. Just because your program (or the test VIs that come with such an external code library) doesn't crash right away when executing does not provide any guarantee that the code is fully correct and not corrupting memory somewhere. Not every corruption leads immediately to a crash. Some may just cause slight (or not so slight) artefacts in measurement data; some may corrupt memory that is only accessed much later (sometimes as late as when you try to close your VIs or shut down LabVIEW, and LabVIEW dutifully wants to clean up all resources and then stumbles over corrupted pointers). Only when serious things like stack corruption happen do you usually get an immediate crash. -
You are trying to force your mental model onto LabVIEW data flow. But data flow does not mandate or promise any specific order of execution that is not strictly defined by data flow itself. A LabVIEW diagram typically processes all input terminals (controls) and all constants on the top-level diagram (outside any structure) first and then goes on to the rest of the diagram. The last thing it does is process all indicators on the top-level diagram. There is no violation of any rule in doing so. Updating front panel controls before the entire VI is finished is only necessary if the corresponding terminal is inside a structure. Clumping the update of all indicators on the top-level diagram into one single action at the end of the VI execution does not delay the time at which the VI finishes but can save some performance. It also relates to the old rule that it is better to put pass-through input and output terminals on the top-level diagram and not bury them somewhere inside structures, aside from other considerations such as readability and the problem that output indicators of subVIs might not be updated at all, retaining some data from a previous execution and passing that out.
-
Reset Low level TCP connection on LV2018
Rolf Kalbermatter replied to Bobillier's topic in LabVIEW General
What is that icon with the carriage return/line feed doing? Any locale-specific code in there? If your other side specifically expects a carriage return, a linefeed or both together and just ignores other commands, you could get similar behaviour. -
Something is surely off here: you say that the checksum is in the 7th byte and the count in the 8th. But aside from the fact that it is pretty stupid to add the count at the end of the message (it is very hard to parse a binary message if you don't know its length, yet you only know that length once you have read exactly the right amount of data), those 70, 71, 73 and so on bytes definitely have nothing to do with the count of bytes in your message. Besides, what checksum are you really dealing with? A typical CAN frame uses a 15-bit CRC checksum. This is what the SAE_J1850 fills in on its own, and you can't even modify it when using that chip. It would seem that what you are dealing with is a very device-specific encoding. There could be a CRC in there of course, but that is totally independent of the normal CAN CRC. As far as the pure CAN protocol is concerned, your 8 bytes of the message are the pure data load, and there is some extra CAN framing around those 8 bytes that you usually never see at the application level. As such, adding an 8-bit CRC to the data message is likely a misjudgement by the device manufacturer. It adds basically nothing to the 15-bit CRC checking that the CAN protocol already performs itself one protocol layer lower.
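For illustration, the CAN CRC-15 mentioned above (generator polynomial 0x4599, initial value 0, MSB first, per the Bosch CAN specification) can be sketched in a few lines of C. Note this is the checksum the bus hardware already computes one layer below your 8 data bytes, not the device-specific 8-bit checksum discussed in this thread:

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise CAN CRC-15: polynomial x^15+x^14+x^10+x^8+x^7+x^4+x^3+1
   (0x4599), initial register value 0, data processed MSB first. */
static uint16_t can_crc15(const uint8_t *data, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        for (int b = 7; b >= 0; b--) {
            unsigned inbit = (data[i] >> b) & 1u;   /* next message bit */
            unsigned bit14 = (crc >> 14) & 1u;      /* register MSB     */
            crc = (uint16_t)((crc << 1) & 0x7FFF);  /* keep 15 bits     */
            if (bit14 ^ inbit)
                crc ^= 0x4599;
        }
    }
    return crc;
}
```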
-
Is that other application a normal Windows app or a LabVIEW executable? In most normal Windows apps each control (button, numeric etc) is really another window that is a child window of the main window. LabVIEW applications manage their controls entirely themselves and don't use a window for each of them. They only use a window for the main window of each front panel and for ActiveX/.Net controls.
-
1D Array to String not compiling correctly
Rolf Kalbermatter replied to G-CODE's topic in OpenG General Discussions
LabVIEW for Mac OS Classic actually used a "\r" character for this, since the old MacOS for some reason used yet another EOL indicator. This is strictly speaking not equivalent to the originally intended implementation, since this would remove multiple empty lines at the end while the original one only removed one EOL. However, the Array to Spreadsheet String function should not produce empty lines (except maybe if you enter a string array of n * 0 elements), but that does not apply to this 1D array use case. -
Importing shared Library .so in Linux.
Rolf Kalbermatter replied to Yaw Mensah's topic in Code Repository (Certified)
LabVIEW doesn't handle name mangling at all. It would be an exercise in futility to try, since GCC does it differently than MSVC, just to name two. There is no standard for how names should be mangled, and each compiler uses its own scheme. There is some code in LabVIEW that tries to work around common decorations of function names, such as a prepended underscore. If the Call Library Node can't find a function in the library, it prepends an underscore to the name and tries again. Stdcall-compiled functions also have, by default, a decoration at the end of the function name: an at sign (@) followed by a number that basically indicates the number of bytes that have to be pushed on the stack for this function call. LabVIEW tries to match undecorated names to names with the stdcall decoration if possible, but that can go wrong if you have multiple functions with the same name but differently sized parameters. Such overloads are not possible to compile in standard C; C++ could theoretically do it, but I have never seen it in the wild until now. One other speciality LabVIEW implements under Windows applies when a function name is only a decimal number and nothing else. In that case it interprets the name as an ordinal and calls the function through its ordinal number. This is an obscure Windows feature that is seldom used nowadays, but Windows does have some undocumented functions that are only exported by ordinal. Every exported function has an ordinal number, but not every exported function needs an exported name. This was supposedly mostly to save some executable file size, since the names in the export table can amount to shockingly many kilobytes of data. What the OP describes seems to be one of these algorithms, trying to match a non-existing function name to one which is present in the library, somehow going haywire. -
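The fallback described above can be sketched as a name-candidate generator. The function below is hypothetical, not LabVIEW's actual internal routine; it only mimics the documented behaviour (plain name, prepended underscore, full stdcall `_name@N` decoration with N being the parameter stack size in bytes):

```c
#include <stdio.h>

/* Generate the export-name candidates a loader could try in order when a
   plain lookup fails: the name as typed, with a prepended underscore, and
   with the stdcall decoration _name@N. Returns the candidate count. */
static int export_candidates(const char *name, int stack_bytes,
                             char out[3][64])
{
    snprintf(out[0], sizeof out[0], "%s", name);               /* cdecl   */
    snprintf(out[1], sizeof out[1], "_%s", name);              /* _cdecl  */
    snprintf(out[2], sizeof out[2], "_%s@%d", name, stack_bytes); /* stdcall */
    return 3;
}
```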
Communication with hardware using Serial connection (RS232)
Rolf Kalbermatter replied to Shaun07's topic in LabVIEW General
There are about 500+ ways to tackle this. You could use a Scan from String with the format string AOM=%d. Or with the format string %d and 4 wired to the offset input. Or, a little more flexible and error-proof: first search for where the = sign is and use the position past that sign as the offset input to the Scan from String above. -
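The same approaches map directly onto C's `sscanf`, which shares its format-string syntax with Scan from String. The reply format AOM=<number> is the one from this thread; the helper name is hypothetical:

```c
#include <stdio.h>
#include <string.h>

/* The more error-proof variant: locate the '=' first, then scan the number
   after it, so leading junk or a different command prefix doesn't matter.
   Returns 1 on success, 0 on parse failure. */
static int parse_value(const char *reply, int *value)
{
    const char *eq = strchr(reply, '=');
    if (eq == NULL)
        return 0;                        /* no '=' found */
    return sscanf(eq + 1, "%d", value) == 1;
}
```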
[CR] MPSSE.dll LABview driver
Rolf Kalbermatter replied to Benoit's topic in Code Repository (Uncertified)
It does for me. -
There are various packages around that do different things to different degrees. Most of them are actually APIs that already existed in the early days of Windows. Things that come to mind would be querying the current computer name or user name, or actually authenticating with the standard Windows credentials. Others deal with disk functionality, although the current LabVIEW File nodes offer many of those functionalities since they were reworked in LabVIEW 8.0. Then there are the ubiquitous window APIs that allow controlling LabVIEW and other application windows. As far as LabVIEW is concerned, most of those things have long since also been possible through VI Server, but the possibility to integrate an external app as a child window can be interesting sometimes (although in many cases a questionable design choice). There are of course many newer Windows APIs too, but a lot of them are either too specialized or too complex to be of much interest to most people. For instance, there have been many attempts at accessing the Bluetooth Low Energy APIs in Windows, but that API is complex, badly documented and riddled with compatibility issues, which would make a literal translation with one LabVIEW VI per API function a rather useless idea for almost every LabVIEW user. But once you start to go down the path of creating a more LabVIEW-user-friendly interface, it is usually easier to develop a wrapper library in C that takes care of all the low-level hassles and provides a more concise and clean API to the LabVIEW environment, so that one does not have to do very involved C compiler acrobatics on the LabVIEW diagram to satisfy the sometimes rather complicated memory manager requirements of those APIs. The problem with the original idea of the metaformat for importing Windows APIs is, however, that the difficulty of using such an API is very often not limited to making it available in LabVIEW through a Call Library Node.
That is indeed specialized work and can require a deep understanding of C compiler details and memory manager techniques, but the really difficult part is understanding how to use those APIs properly, and that use also often influences how you should actually import the API, as dadreamer pointed out with the example of string or array pointers that can be passed either a NULL value or an actual pointer. It would be almost impossible to provide this functionality in the current Call Library Node implementation, since LabVIEW does not distinguish between an empty string (or array) and a NULL pointer (in fact, to the outside, LabVIEW has nothing like a NULL pointer, since it does not expose pointers at all). Internally, one of its many optimizations is that all LabVIEW functions happily accept a NULL handle to mean the same as a valid handle with the length value set to 0. How would you tell LabVIEW that an empty string should pass a NULL value in one case and a pointer to an empty string in another? Some APIs will crash when passed a NULL pointer; others will do weird things when receiving an empty C string pointer. And that might not even be something you want in the Call Library Node configuration, since there exist APIs that give a NULL pointer a very different meaning (such as using a specific default value) than an empty C string (omitting the parameter, for instance). This has to be programmed explicitly anyway, using a case structure for both cases, and only a human can make this decision, based on the prose text in the function documentation. A metaformat can say whether a pointer is allowed to be NULL or not, but being allowed to be NULL doesn't automatically mean that an empty string should be passed as a NULL value. All of that is not really possible to describe in a metaformat in a way that is parsable by an automated tool.
It can be described in the prose text of the programmer manual for that function, but often isn't, or only in a way that says nothing more than what the names of the parameters already suggest anyway. And as mentioned, Windows APIs make up only a small fraction of the functions that I have imported through the Call Library Node in my over 25 years of LabVIEW work.
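A minimal sketch of the NULL-versus-empty-string distinction discussed above. The API below is hypothetical, but the pattern (NULL selects a default, an empty string means "explicitly none") is common in C libraries and is exactly what a flat LabVIEW string control cannot express on its own:

```c
#include <stddef.h>

/* Hypothetical C API where NULL and "" are both legal but mean different
   things. A LabVIEW string has no way to request one or the other, so a
   wrapper (or a case structure around two differently configured Call
   Library Nodes) must make the choice explicitly. */
static const char *resolve_name(const char *name)
{
    if (name == NULL)
        return "default";   /* NULL: caller wants the library default */
    if (name[0] == '\0')
        return "(none)";    /* empty string: caller explicitly wants no name */
    return name;            /* otherwise: use the caller's name */
}
```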
-
NI abandons future LabVIEW NXG development
Rolf Kalbermatter replied to Michael Aivaliotis's topic in Announcements
That was very lame of Indy! 😆 -
NI abandons future LabVIEW NXG development
Rolf Kalbermatter replied to Michael Aivaliotis's topic in Announcements
While I prefer non-violence, I definitely feel more for a sword than a gun. It has some style. 😎 -
The potential is there, and if done well (which always remains the question) it might indeed be a possibility to integrate into the import library wizard. But it would be a lot of work in an area that very few people really use. The Win32 API is powerful but not the complete answer to God, the universe and everything. Also, there are areas in Windows you cannot really access through that API anymore. In addition, I wonder what the motivation at Microsoft is for this, considering their push to move more and more towards UWP, which is an expensive abbreviation for a .Net-only Windows kernel API without an underlying Win32 API layer as in the normal desktop version. And at the same time, purely for users' safety (of course 😄), a limitation to only install apps from the online Windows Store. 😀 But the Win32 API is only a very small portion of the external code you will typically want to interface to. It does not cover your typical device instrument driver that comes in DLL form, often with its latest update 10 years ago and with the only support in the form of obscure online posts full of guesswork and wrong assumptions. It also won't help with shared libraries from open source projects, unless some brave contributor dives into the topic and creates that meta description for them. But creating such a meta description for an API is a lot of very specialized, almost impossible-to-automate work. Microsoft may really do it, though I'm still sceptical whether it will really be finished, but most others will probably throw their hands in the air in despair the first moment they lay eyes on the documentation of this meta description format. 😎 In the best case it will be several years before there is any significant adoption of this by parties other than Microsoft, and only if Microsoft actively lobbies for it with other major computer software providers.
In reality, someone at Microsoft will likely invent some hand brakes in the form of patents or special signing authorities that make adoption by other parties extra difficult. And it won't help a single iota for the Linux and Mac platforms, which are also interesting targets for the Call Library Node.
-
Git can't be this terrible, what am I doing wrong?
Rolf Kalbermatter replied to drjdpowell's topic in Source Code Control
My experience comes from the other side. We started with SVN some 15 years ago or so, and while it has its limits it has worked quite well for us. There are additional limitations because of the binary character of LabVIEW files, but GIT doesn't really add anything to the game there. I have been exposed to GIT in recent years because of customers using it, and also from dabbling in some open source projects that use GIT, and I have to say the experience is overwhelming at first. SVN certainly has its limits, but its working model is pretty simple and straightforward to understand. And it has always worked reliably for me, which I would consider the most important aspect of any source code control system. One of the reasons Microsoft SourceSafe failed miserably was the fact that you could actually lose code quite easily, and the Microsoft developers seemed to have completely lost all motivation to work on those hard-to-tackle problems. With GIT I have painted myself into a corner more than once, and while there are almost always some ways to do a Baron Munchausen trick and drag yourself out of that corner by your own hair, it has sometimes been a pretty involved exercise. 😀 -
NI abandons future LabVIEW NXG development
Rolf Kalbermatter replied to Michael Aivaliotis's topic in Announcements
I would agree with Shaun. Looking at the link you provided, the podcasts on that page are typical marketing speak. Many nice words about a lot of hot air. Now, I'm not going to say it is unimportant these days. Much of our overhyped and overheated economy is about expensive-sounding words and lots and lots of people wanting to believe them. Insofar it certainly has influence and importance, no matter how artificial it may be. The problem I see with that is that there is little substantial content in the form of real assets backing the monetary value of the whole. It's mostly about abstract ideas, a strong belief system much like a religion, and very little real substance. Does anyone remember the Golden Calf from the Bible? I sometimes wonder whether we would already have seen something like the 1999 and 2008 crashes, if we hadn't all been consumed with the pandemic instead. The current financial hausse, in light of all the people complaining about how badly the economy was hit by the pandemic, is nothing but perverse in that respect. -
What would the named pipe achieve? How do you intend to get the image into the named pipe in the first place? If you need to write C code to access the image from the SDK and put it into a named pipe in order to receive it in LabVIEW through the Pipe VIs, you could just as well write it to disk instead and read it through the LabVIEW File I/O functions. Functionally it is not different at all, while the File I/O is proven to work a million times over and the pipe interface has a lot more potential problems. If your SDK driver has a way to simply put everything automatically into a pipe, then it may be worth exploring, but otherwise stay far away from this. Any effort in writing C code anywhere is better spent creating a wrapper shared library that massages the driver API into something more suitable to be called through the LabVIEW Call Library Node.
-
Most likely a cerebral shortcut with his more involved SQLite library, which he has worked on so relentlessly over the years. 😀 There is a good chance that the Windows version of the PostgreSQL library is compiled in a way that supports multithreaded calls while the Linux version may not be. With the CLN set to UI thread, LabVIEW will always switch to the UI thread before invoking the library call, with the result that all such CLNs are always called in the same thread. With "Call in any thread" enabled, LabVIEW will call the shared library function in whatever thread the VI itself is executing in. A VI is assigned a so-called execution system, and each execution system is assigned a number of threads at LabVIEW startup (typically 8, but this depends on the number of cores your CPU has and can also be reconfigured in your LabVIEW ini file). Each VI typically executes in the thread of its caller from inside that thread collection for the execution system, unless LabVIEW decides that parallel VIs can be executed in parallel, in which case one of the VIs will be called in one of the other threads. LabVIEW tries to avoid unnecessary thread switches, since context switching between threads incurs some overhead. In fact, there are at least two potential problems with calling a library with multithreading: 1) If the library uses Thread Local Storage (TLS) to store things between calls, then calls occurring in different threads will access different information, and that may cause problems. This should not be the problem here, as each connection has its own "handle", and everything that matters across calls should be stored in that handle and never in TLS. 2) If the library is not protected against simultaneous access to the contents inside the handle (or, worse yet, global variables in the library), things can go wrong as soon as your LabVIEW program has two different code paths that may run in parallel.
It should not be a problem if your LabVIEW program guarantees sequential execution of functions through data flow dependency, such as a consistently wired-through handle wire and/or error cluster. The fact that the VIs seem to hang immediately on the first call of any function could indicate that it is a Thread Local Storage problem. The loading of the shared library most likely occurs in the UI thread, to make sure that it runs in the so-called root loop of LabVIEW, and if the library then initializes TLS for some of its stuff, things would go wrong in the "Call in any thread" case as soon as the first function is executed, as it tries to access uninitialized TLS values there.
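The TLS failure mode described above can be reduced to a few lines of C. The library functions are hypothetical stand-ins; `__thread` is the GCC/Clang spelling of thread-local storage (MSVC uses `__declspec(thread)`):

```c
#include <pthread.h>

/* State the "library" keeps in thread-local storage: each thread gets its
   own copy, so initialization done in one thread is invisible in another. */
static __thread int tls_initialized = 0;

static void lib_init(void)  { tls_initialized = 1; }   /* runs in "UI thread" */
static int  lib_ready(void) { return tls_initialized; }

/* A call arriving on a different execution-system thread sees the
   uninitialized TLS copy, mirroring the "Call in any thread" failure. */
static void *call_from_other_thread(void *result)
{
    *(int *)result = lib_ready();
    return NULL;
}
```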
-
Data buffer handling, and especially callback functions, really requires you to write an intermediate shared library in C(++) to translate between LabVIEW and your driver API. Memory management in LabVIEW has some specific requirements that you need to satisfy, and it usually goes against what such drivers expect to use. They are written with C programming techniques in mind, which is to say the caller has to adhere to whatever the library developer chose to use. LabVIEW, on the other hand, is a managed environment with a very specific memory management scheme that supports its dataflow paradigm. There is a clear impedance mismatch between the two, which is usually best solved in a wrapper library in C that translates between them. While it is possible to avoid the wrapper library here, it usually means that you have to both implement the proper C memory management as required by the driver and additionally deal with things that are normally taken care of automatically by the C compiler, if you try to implement the translation on your LabVIEW diagram directly. It's possible, but pretty painful; it gets very nasty rather quickly and results in hard-to-maintain diagram code, especially if you ever intend to support more than one target architecture. As far as callback functions go, anything but a wrapper library written in C(++) that handles the callbacks and optionally translates them into something more LabVIEW-friendly, like user events, is simply reinventing insanity over and over again.
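One common shape of such a wrapper is sketched below: the driver's callback (which may fire on an arbitrary driver thread) only copies the data into a small mutex-protected queue, and LabVIEW retrieves items through a plain, CLN-friendly poll function. In a real wrapper you would more likely post a LabVIEW user event from the callback via the extcode.h function PostLVUserEvent; all names here are hypothetical and the queue stands in for that mechanism:

```c
#include <pthread.h>

#define QUEUE_DEPTH 16

/* A tiny thread-safe ring buffer shared between the driver callback
   and the LabVIEW-facing poll function. */
static struct {
    pthread_mutex_t lock;
    int             items[QUEUE_DEPTH];
    int             head, count;
} g_q = { PTHREAD_MUTEX_INITIALIZER, {0}, 0, 0 };

/* Registered with the (hypothetical) driver as its event callback.
   Does nothing but copy the value out; drops events when full. */
static void on_driver_event(int value)
{
    pthread_mutex_lock(&g_q.lock);
    if (g_q.count < QUEUE_DEPTH) {
        g_q.items[(g_q.head + g_q.count) % QUEUE_DEPTH] = value;
        g_q.count++;
    }
    pthread_mutex_unlock(&g_q.lock);
}

/* Called from LabVIEW through a Call Library Node.
   Returns 1 and fills *value if an event was dequeued, 0 otherwise. */
static int poll_driver_event(int *value)
{
    int got = 0;
    pthread_mutex_lock(&g_q.lock);
    if (g_q.count > 0) {
        *value = g_q.items[g_q.head];
        g_q.head = (g_q.head + 1) % QUEUE_DEPTH;
        g_q.count--;
        got = 1;
    }
    pthread_mutex_unlock(&g_q.lock);
    return got;
}
```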
-
The internal kernel APIs used by the OpenG PortIO kernel driver were discontinued by Microsoft in 64-bit Windows. By the time I looked into this, Microsoft had also started to require signing for kernel drivers to be installable. I have no interest in buying a public certificate for something like $100 a year just to be able to sign an open source driver. The extraction of the kernel driver from the DLL is in theory a neat idea, but in practice it will only work if the user executing that function has elevated privileges. Otherwise kernel driver installation is impossible.
-
Well, the reverse engineering was quite some work, as I had to view many different existing CDF files and deduce their functionality. But creating a new one for the SQLite library itself should be straightforward: take the one for lvzip and change the relevant strings in the CDF file. The only other thing is that you also need to generate your own GUID for your package, as that is how MAX identifies each package. But there are many online GUID generators that can do that for you, and the chance that it will conflict with any GUID already in use is pretty much non-existent. Depending on whether you also want to support SQLite on Pharlap and VxWorks, you would have to either leave those sections in the CDF file or remove them. You can also handle dependencies, and depending on what libraries you need, they may actually already exist as packages from NI. Adding such a dependency to your CDF file is basically adding the relevant GUID and version number from that other CDF file to your own. That works quite well, but if NI doesn't already have the package available, things can get more complicated. You either have to create an additional CDF file for each such package, or you could possibly add the additional precompiled modules to the initial CDF file. A CDF file can reference multiple files per target, so it's possible, but it can get dirty quickly.
-
The LabVIEW deployment of VIs to a target does not copy shared libraries and other binary objects over to the target system. (Well, it did for Pharlap targets, but that was not always desirable. Originally Pharlap's Win32 implementation was fairly close to the Windows NT4 and Windows 2000 API, and many DLLs compiled for Windows NT or 2000 would work without change on Pharlap too. But newer Windows versions diverged more and more from the limited Win32 support provided by Pharlap ETS, and unless you went to really great lengths, including NOT using newer Visual Studio versions to compile your DLL, you could forget that a DLL would work on Pharlap. In addition, LabVIEW on Windows has no notion of Unix shared libraries, so they simply removed the automatic deployment of such binary files to a target, even if the file was present in the source tree. The possible implications and special requirements of installing VxWorks OUTs and Unix SOs on a target system were simply too involved and prone to problems.) So to get such files properly put on the target system you need a separate installer. NI uses for its software libraries the so-called CDF (Component Description Format) (not sure if they developed that themselves or snatched it from somewhere 😀). For the LV ZIP Toolkit I "reverse engineered" the NI CDF file structure to create my own CDF file for my lvzlib shared library. When you place that and the corresponding shared library (or libraries) into a folder inside <ProgramFilesx86>\National Instruments\Shared\RT Images, NI MAX will see this CDF file and offer to install the package to the remote target, provided the supported targets in the CDF file are compatible with the target for which you selected the Install menu. This works, but it is made extra complicated by the fact that a normal user has no write access to the Shared directory tree. So if you add these files to your VI package, you must require VIPM to be started as administrator.
An extra complication is that VIPM does not offer the "Shared/RT Images" path as a target option, so you would need to do all kinds of trickery with relative paths to get things installed there. I instead used InnoSetup to create a setup program containing those files, configured to require administrator elevation on startup. I then start this setup program from the PostInstall step in the package. But NI has slated CDF to be discontinued sometime in the near future, to be replaced by NIPM packages.
-
How does VIPM set the VI Server >> TCP/IP checkbox?
Rolf Kalbermatter replied to prettypwnie's topic in LabVIEW General
The fact that VIPM requires a restart of LabVIEW is probably a good indication of what it does: 1) enumerate all installed LabVIEW versions, 2) read the labview.ini in each location and extract the VI Server settings, 3) if the user changes the settings for a LabVIEW version, update the LabVIEW.ini file and require a restart. On restart, LabVIEW reads the LabVIEW.ini file and everything is set the way VIPM wanted it. Simple, and no hidden voodoo magic at all.
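For reference, the VI Server section in labview.ini looks roughly like the sketch below. The exact tokens and values are from memory and may differ between LabVIEW versions, so verify them against your own ini file before editing:

```ini
; VI Server settings in labview.ini (token names approximate -- check your own file)
server.tcp.enabled=True
server.tcp.port=3363
server.tcp.access="+*"
```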