Everything posted by Rolf Kalbermatter
-
Automation Testing Tools for LabVIEW
Rolf Kalbermatter replied to eberaud's topic in LabVIEW General
LabVIEW controls are not Windows widgets but are fully created and maintained by LabVIEW itself. As such you cannot locate LabVIEW controls with tools like AutoIT that assume a user interface is built from controls implemented as Windows child windows. I'm not familiar with UIA, but the claim that it can identify and control LabVIEW controls does sound a bit strong to me. From what I know, LabVIEW controls are fully owner drawn and implemented with a LabVIEW-internal object-oriented system, originally written in standard C but since then almost certainly ported to C++. As far as any external system is concerned, a LabVIEW front panel simply contains lines, text and maybe some alpha shading, but no controls whatsoever. The only entity that can expose this object hierarchy to external applications is the VI Server interface, but that is highly LabVIEW specific. So I would suppose one could develop an AutoIT plugin that goes over VI Server to control the UI of a LabVIEW VI. -
Open source alternatives to TestStand?
Rolf Kalbermatter replied to pawhan11's topic in LabVIEW General
Back in those days I had my own Test Executive framework, modeled around how the NI Test Executive was designed. It worked pretty well but was quite a burden to maintain. That was in LabVIEW 5.1 and 6.x days, when VI Server had just been invented and was very limited, so it was quite painful to implement a system like a Test Executive that should be easily scriptable. We later moved to a system based on Lua for LabVIEW for some of these applications, where Lua was the scripting environment that the test engineer would use to customize the entire product tests. It worked fairly well but had its own challenges: many things were either integrated in the LabVIEW application and only modifiable by us but not the client, or we had to do the interfacing in Lua too, which would usually make the code so complex that the client couldn't change it either. It's a very tricky thing to get right. In fact I think there is not a single solution that fits all. TestStand went in a certain direction that works fairly well for semiconductor testing and characterization but is less ideal for some other test setups. And it is a system of its own that is fairly complex and needs to be understood well in addition to whatever extra environment you use to implement the actual test connectors (LabVIEW, LabWindows/CVI, Visual Studio VB, C/C++, or Python). It is also quite database driven, so you need to have some idea about that too or you quickly end up in a mess with it. I never quite felt at home with TestStand, despite having used it for several customer projects, mainly in the semiconductor industry. On the other hand, writing your own Test Executive is A LOT of work, both for developing it and even more so for maintaining it. I would not really want to do that anymore unless there was a very unique opportunity for a specific project, but chances for that are pretty much non-existent. And inside Averna we have our own product, Averna Launch, which is built around NI TestStand. So chances to get to develop something else besides that are pretty much moot. 😀 Averna Launch however takes the database control to an even higher level, so you really have to think about that in order to get the maximum benefit out of the product. -
Well, I have a 4.3 alpha on GitHub, but I haven't been able to work on that for some time.
-
We also have iDeal here in the Netherlands! 😀
-
The "unchanged" input is what other solutions call a mask. In reality it simply does a read, combines the read value with the new value through boolean AND and OR operations using the mask, and then writes the result back.
-
The underlying API does not support that, and that is quite logical. I2C and SPI are not really designed as bit protocols but as byte protocols; they typically address multiples of 8 or 16 bits per data transfer. The way to do what you want involves reading back the full 8-bit port, modifying the bit in question and sending everything back.
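In C-like terms the read-modify-write amounts to something like the sketch below; port_read and port_write are placeholders for whatever byte-wide read and write calls the actual port expander driver offers:

#include <stdint.h>

/* Hypothetical byte-wide accessors; replace them with the real read and
   write calls of the I2C/SPI port expander driver in use. */
uint8_t port_read(void);
void    port_write(uint8_t value);

/* Change only the bits selected by 'mask' and leave the others untouched. */
void port_update_bits(uint8_t mask, uint8_t value)
{
    uint8_t current = port_read();          /* read back the full 8-bit port */
    uint8_t updated = (uint8_t)((current & (uint8_t)~mask) | (value & mask));
    port_write(updated);                    /* write everything back */
}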
-
Well, you installed the 32-bit version of the DLL (syswow64 is, despite its name, NOT for 64-bit DLLs but for the 32-bit versions of DLLs on a 64-bit Windows system). The same possibly applies to the MPSSE.DLL. They all need to be 64-bit in order to be loadable in 64-bit LabVIEW! Best is probably to download the actual MPSSE driver directly from the FTDI site here and use the 64-bit version of the DLL inside, under: libMPSSE.zip\libMPSSE__0.6\lib\windows\visualstudio\x64\Release\libMPSSE.dll
-
The MPSSE.DLL is only a high-level DLL driver that depends on the low-level D2XX driver from FTDI. You need to have that installed too, and it needs to be present in a location where the Windows DLL loader can find it. If you download and install the standard driver from the FTDI site, this should be taken care of for you. https://ftdichip.com/drivers/d2xx-drivers/
-
New VIs should simply inherit the default setting you made in your Tools->Options->Environment->General unless you create them from inside a project, in which case they inherit the setting as made in the project properties. The bug mentioned by Yair may be related to the fact that the restoring of auto-saved VIs is not happening in the project context but in the global LabVIEW context and therefore uses the global settings from the Tools menu.
-
The wrapping may be done anyway, even if it is a clean pass-through of all parameters, simply because this was how the VIs always worked and it was actually easier. The lvanlys.dll had to be modified anyway, so just leave the original exports and redirect them wherever necessary to the MKL, with or without any parameter massaging. This makes the tedious work of going into every single LabVIEW VI to edit the Call Library Node superfluous. And yes, I have experience with wrapping DLLs from LabVIEW and can assure you that the last thing you want to do when changing something is to make a change that requires you to go into every single VI and make some more or less minor changes. Aside from being a mind-numbing job, it is also VERY VERY easy to make stupid mistakes in such changes by forgetting a certain change in some of the VIs. And then you have to open each and every VI again to make sure that you really did change everything correctly, and to be safe, do that again and again. Tedious, painful and utterly unnecessary. Instead just leave the VIs alone, change the underlying DLL in whatever way you need and you are done. There is still a lot of testing after that, but at least one potential source of errors less.
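As a rough idea of what such a compatibility export can look like (all names below are invented for this sketch, not actual lvanlys.dll exports):

#include <stdint.h>
#include <stddef.h>

/* Replacement backend routine; the name is made up for illustration. */
extern int32_t new_backend_transform(double *data, size_t len);

/* The export keeps the old entry point name, so none of the Call Library
   Nodes in the existing VIs ever need to be touched; the body is a clean
   pass-through to the new backend. */
__declspec(dllexport) int32_t LV_SomeTransform(double *data, int32_t len)
{
    return new_backend_transform(data, (size_t)len);
}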
-
You might be right, but please note that those LabVIEW VIs more or less all call "internal" functions in lvanlys.dll. In reality all they do is massage the parameters from a LabVIEW-friendly format into an Intel MKL C(++) API format and then call the actual MKL library. So the fact that a VI calls LV_something in that DLL means absolutely nothing in terms of whether it is ultimately executed inside lvanlys.dll or simply forwarded to the MKL to do the heavy number-crunching part. It could be implemented fully in lvanlys.dll because the MKL doesn't provide that function, or not in the way the old NI library did, so for compatibility reasons they maintained the old code; but in most cases it is simply a forward to the MKL with minor parameter datatype translations. Even if there is some real implementation of a function in lvanlys.dll, it still will very often ultimately call the MKL for lower-level functions and may therefore still depend on a corrected MKL to fix a bug.
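A minimal sketch of what that parameter massaging amounts to, using a hand-written stand-in for the LabVIEW array handle layout and an invented backend name rather than a real MKL entry point:

#include <stdint.h>
#include <stddef.h>

/* Stand-in for a LabVIEW 1D double array handle (the real definition comes
   from extcode.h): a pointer to a pointer to a block that holds the element
   count followed by the data. */
typedef struct {
    int32_t dimSize;
    double  elt[1];
} DblArrayRec, **DblArrayHdl;

/* Invented MKL-style backend routine, not an actual MKL function. */
extern void mkl_style_scale(double *x, size_t n, double factor);

/* Wrapper export: unwrap the handle into a plain C pointer plus length and
   hand it to the number-crunching backend. */
__declspec(dllexport) int32_t LV_Scale(DblArrayHdl arr, double factor)
{
    if (arr == NULL || *arr == NULL)
        return -1;                   /* empty handle, nothing to scale */
    mkl_style_scale((*arr)->elt, (size_t)(*arr)->dimSize, factor);
    return 0;
}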
-
Ethercat for LabView2020 on Linux
Rolf Kalbermatter replied to Yaw Mensah's topic in LabVIEW General
No! We used it on an IC-3173 NI Linux RT controller. It should supposedly work on all NI Linux RT hardware. -
I'm not exactly privy to the details, but most likely NI doesn't even build the MKL themselves. They simply take the binaries as released by Intel, package them with their LabVIEW wrapper and are done with it. There are a number of consequences of doing it that way:
1) NI can indeed not patch that library themselves anymore but has to wait for Intel to make bug fixes.
2) NI won't pick up a new release every time Intel decides to make some more or less relevant change to that library. Instead they will likely review the list of changes since the last pick they did and decide if it is worth the hassle to rebuild a new MKL + LabVIEW package. This is not a one-hour process of just adding the new DLLs to the old package build; it involves a lot of extra work in terms of making sure everything is correct, and lots and lots of testing too. The moment for such a review is likely usually a few months before a new release of LabVIEW. If Intel happens to make that one single important change one month after this, NI will most likely not pick it up until the next review moment a few months before the next full LabVIEW release, and then you can easily see how it can take 2 years.
-
Making USB-8451 work with PXI running LabVIEW RT / Phar Lap ETS
Rolf Kalbermatter replied to codcoder's topic in Hardware
Almost certainly. This device almost certainly does not use a standard USB device class, and that means you will have to program against NI-VISA USB Raw to emulate the commands that the native Windows driver sends to and receives from the device through direct USB device driver calls. Possible? Yes, but for sure neither easy nor pretty. If it wasn't such a pain to use VISA USB Raw on modern OSes, I would have long ago released a driver for the FTDI chips using pure VISA calls.
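For illustration, a bare-bones VISA USB Raw session from C could look like the sketch below; the VID, PID, serial number in the resource string and the control request values are placeholders that would have to be reverse engineered from the USB traffic of the native Windows driver:

#include <visa.h>

int main(void)
{
    ViSession rm, dev;

    if (viOpenDefaultRM(&rm) < VI_SUCCESS)
        return 1;
    if (viOpen(rm, "USB0::0x1234::0x5678::SN0001::RAW", VI_NULL, VI_NULL, &dev) < VI_SUCCESS) {
        viClose(rm);
        return 1;
    }
    /* Vendor-specific control-out transfer with no data stage (values are examples). */
    viUsbControlOut(dev, 0x40, 0x01, 0x0000, 0x0000, 0, VI_NULL);

    viClose(dev);
    viClose(rm);
    return 0;
}
-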
Ethercat for LabView2020 on Linux
Rolf Kalbermatter replied to Yaw Mensah's topic in LabVIEW General
If you had used a LabVIEW Real-Time controller, you could have used the NI-Industrial Communications for EtherCAT driver. It does have a learning curve for sure, but I have used it successfully in the past. If you use other libraries you will have to make sure they are compiled as 64-bit in order to interface them through the LabVIEW Call Library Node; LabVIEW for Linux has since 2016 only been available as a 64-bit version. Both libraries you mention are GPL, so this can have very significant consequences for using them in a project that you cannot or do not want to make open source itself. -
Actually not exactly. NI set this compile define to make the shared library multithreading safe, trusting the library developers to have done everything correctly to get this to work like it should, but somehow it doesn't. Still, there is something seriously odd. I could understand that things get nasty if you had other code running in the background also accessing this library at the time you do this test, but if this is the only code accessing this shared library, something is definitely odd. There is still only one call to the shared library at the time your PQfinish() executes, so the actual protection from multiple threads accessing this library is really irrelevant.

So how did you happen to configure the PGconn "handle"? Is it a pointer-sized integer variable? You are executing on an IC-7173, which is a Linux x64 target, so these "handles" are 64 bits wide on your target but 32 bits if you execute the code on LabVIEW for Windows 32-bit! I'm just throwing out ideas here, but a crash from just calling one single function of a library in a reentrant CLN really doesn't make too much sense. The only other thing that could be relevant is if this library used thread-local storage, but that would be brain damaged considering that it uses "handles" for its connections and can therefore store everything relevant in there instead.

And a warning anyway: while I doubt that you would find PG libraries that are not compiled as multithreading safe (that only really makes sense on targets that provide no proper threading support such as Windows threading or the Unix pthread system), there obviously is a chance that it could happen. You can choose to implement everything reentrant and, on creation of a new connection, call the function that Shaun showed you. But what then? If that function returns false, all you can do is abort and return an error, as you cannot dynamically reconfigure the CLNs to use UI threading instead (well, you can by using scripting, but I doubt you want to do that on every connection establishment, and scripting is also not available in a built application). So it does make your library potentially unusable if someone uses a binary shared library that is compiled to not be multithreading safe.
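Just to illustrate why the handle size matters: in the libpq C API the connection "handle" is an opaque pointer, which is what the CLN configuration has to mirror as a pointer-sized integer. A minimal sketch (the connection string is only an example):

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    printf("libpq reports thread safe: %d\n", PQisthreadsafe());

    PGconn *conn = PQconnectdb("host=localhost dbname=test");
    if (PQstatus(conn) != CONNECTION_OK)
        fprintf(stderr, "connect failed: %s\n", PQerrorMessage(conn));

    PQfinish(conn);   /* frees the connection, the pointer is invalid afterwards */
    return 0;
}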
-
What do you mean by "threadsafe CLN"? That is rather bogus terminology in this context. What you have is "reentrant", which requires the library to be multithreading safe, and "UI Thread", which will allow the library to do all kinds of non-multithreading-safe things. That trying to call PQisthreadsafe() from any context does not crash is to be expected: this function simply accesses read-only information that was created at compile time and put into the library. There is absolutely nothing in that function that could potentially cause threading issues. That every other function simply crashes, even if you observe proper data flow dependency so that functions can never attempt to access the same information at the same time, would be utterly strange. That would not be just the reentrant setting causing multithreading-unsafe issues but something much more serious and basic. I at least assume that you also tried this in single-stepping/execution highlighting mode? Does it still crash then?
-
Is it? Then there would indeed be a discrepancy between when the front panel update is executed and when the debug mechanism considers the data to have finally gone through the wire. Strictly speaking that could be considered a bug, however one in the fringes of "who cares". I guess after working 25+ years with LabVIEW, such minor issues have long ago ceased to even bother me. My motto with such things is usually "get over it and live with it, anything else is bound to give you ulcers and high blood pressure for nothing".
-
Reset Low level TCP connection on LV2018
Rolf Kalbermatter replied to Bobillier's topic in LabVIEW General
Ahhh I see, that one had no string input at that point, however. But now it's important to know on which platform this executes!! I don't think this VI is a good method to use when implementing a protocol driver, given LabVIEW's multiplatform nature. The appended EOL will depend on the platform this code runs on, while the device you are talking to most likely does not care whether it is contacted by a program running on Windows, Mac or Linux but simply expects a specific EOL. Any other EOL is bound to cause difficulties!
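A trivial sketch of what I mean: append the terminator the instrument actually documents (CR+LF and the *IDN? command are just examples here), independent of the platform the program runs on:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char cmd[64];
    snprintf(cmd, sizeof cmd, "%s\r\n", "*IDN?");
    printf("sending %zu bytes\n", strlen(cmd));   /* write exactly these bytes to the TCP connection */
    return 0;
}
-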
internal warning 0x occurred in MemoryManager.cpp
Rolf Kalbermatter replied to X___'s topic in LabVIEW General
You either have a considerably corrupted LabVIEW installation or are executing some external code (through a Call Library Node or possibly even as an ActiveX or .Net component) that is not behaving properly and corrupts memory! At least for the Call Library Node it could also be an incorrect configuration of the node that causes this. Good luck hunting for this. The best way, if you have no suspicion about possible corrupting candidates, is to "divide and conquer": start excluding parts of your application from executing, then run it for some time and see if those errors disappear. Once you have found a code part that seems to be the culprit, go and disable smaller parts of that code and test again. It ain't easy but you have to start somewhere. Just because your program (or the test VIs that come with such an external code library) doesn't crash right away when executing does not provide any guarantee that the library is fully correct and not corrupting memory somehow. Not every corruption leads immediately to a crash. Some may just cause slight (or not so slight) artifacts in measurement data, some may corrupt memory that is only accessed later (sometimes as late as when you try to close your VIs or shut down LabVIEW and LabVIEW dutifully wants to clean up all resources and then stumbles over corrupted pointers), and only when serious things like stack corruption happen do you usually get an immediate crash. -
You are trying to force your mental model onto LabVIEW data flow. But data flow does not mandate or promise any specific order of execution beyond what is strictly defined by data flow itself. A LabVIEW diagram typically processes all input terminals (controls) and all constants on the top-level diagram (outside any structure) first and then goes on to the rest of the diagram. The last thing it does is process all indicators on the top-level diagram. There is no violation of any rule in doing so. Updating front panel indicators before the entire VI is finished is only necessary if the corresponding terminal is inside a structure. Clumping the update of all indicators on the top-level diagram into one single action at the end of the VI execution does not delay when the VI finishes but can save some performance. It also has to do with the old rule that it is better to place pass-through input and output terminals on the top-level diagram and not bury them somewhere inside structures, aside from other considerations such as readability and the problem that output indicators of subVIs might not be updated at all, retaining some data from a previous execution and passing that out.
-
Reset Low level TCP connection on LV2018
Rolf Kalbermatter replied to Bobillier's topic in LabVIEW General
What is that icon with the carriage return/line feed doing? Any platform-specific code in there? If your other side specifically expects a carriage return, a line feed or both together and just ignores other commands, you could get similar behaviour. -
Something is surely off here: you say that the checksum is in the 7th byte and the count in the 8th. But aside from the fact that it is pretty stupid to put the count of the message at the end (it is very hard to parse a binary message if you don't know the length, yet you only know that length once you have read exactly the right amount of data), those 70, 71, 73 and so on bytes definitely have nothing to do with the count of bytes in your message. Besides, what checksum are you really dealing with? A typical CAN frame uses a 15-bit CRC checksum. This is what the SAE_J1850 fills in on its own and which you can't even modify when using that chip. It would seem that what you are dealing with is a very device-specific encoding. There could be a CRC in there of course, but that is totally independent of the normal CAN CRC. As far as the pure CAN protocol is concerned, your 8 bytes of the message are the pure data payload, and there is some extra CAN framing around those 8 bytes that you usually never see at the application level. As such, adding an 8-bit CRC to the data message is likely a misjudgement by the device manufacturer. It adds basically nothing to the 15-bit CRC checking that the CAN protocol already performs itself one protocol layer lower.
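For reference, a bit-by-bit sketch of the CRC-15 that the CAN data link layer itself computes over the (destuffed) frame bits from start of frame through the data field; this is only meant to illustrate that the bus already protects every frame one layer below your payload, not something you would implement at the application level:

#include <stdint.h>
#include <stddef.h>

/* CRC-15 with polynomial x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1 (0x4599),
   processing the input as an MSB-first bit stream. */
uint16_t can_crc15(const uint8_t *data, size_t num_bits)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < num_bits; i++) {
        uint8_t bit = (data[i / 8] >> (7 - (i % 8))) & 1;   /* next input bit */
        uint8_t msb = (uint8_t)((crc >> 14) & 1);           /* current CRC MSB */
        crc = (uint16_t)((crc << 1) & 0x7FFF);
        if (bit ^ msb)
            crc ^= 0x4599;
    }
    return crc;
}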