Everything posted by Rolf Kalbermatter
-
Load lvlibp from different locations on disk
Rolf Kalbermatter replied to pawhan11's topic in Development Environment (IDE)
Doesn't need to. The LabVIEW project is only one of several places that store the location of the PPL. Each VI using a function from a PPL stores the PPL's entire path as well, and will report a conflict when the VI is loaded inside a project that has a PPL of the same name in a different location. There is no trivial way to fix that other than going through the resolve-conflict dialog and confirming for each conflict where the VI should be loaded from now on.

Old LabVIEW versions (from well before PPLs even existed) did not do such path-restrictive loading: if a VI with the wanted name was already in memory, LabVIEW happily relinked to it, which could easily get you into very nasty cross-linking issues with little or no indication that it had happened. The result was often a completely messed-up application if you accidentally confirmed the save dialog when closing the VI. The solution was to only link to a subVI if it was found at the same location it had when the calling VI was saved.

With PPLs this got more complicated, and NI chose the most restrictive mode for relinking in order to prevent inadvertently cross-linking your VI libraries. The alternative would be that, with two libraries of the same name in different locations, you could end up loading some VIs from one library and some from the other, potentially creating a total mess.
-
How do you debug your RT code running on Linux targets ?
Rolf Kalbermatter replied to Zyl's topic in Real-Time
Unfortunately, 27 kudos is very little! Many of the ideas that got implemented had at least 400, and even that doesn't guarantee that something gets implemented.
-
How do you debug your RT code running on Linux targets ?
Rolf Kalbermatter replied to Zyl's topic in Real-Time
That's of course another possibility, but the NI Syslog Library works well enough for us. It doesn't plug directly into the Linux syslog, but that is not a big problem in our case.

It depends. In a production environment it can be pretty handy to have a live view of all the log messages, especially if you end up having multiple cRIOs all over the place that interact with each other. But it is always a tricky decision between logging as much as possible and then not seeing the needle in the haystack, or limiting logging and possibly missing the most important event that shows where things go wrong. With a live viewer you get a quick overview, but if you log a lot it will usually not be very useful and you need to look at the saved log file afterwards anyway to analyse the whole operation. Generally, once debugging is done and the debug message generation has been disabled, a live viewer is very handy to get an overall overview of the system, where only the most important system messages and errors still get logged.
-
How do you debug your RT code running on Linux targets ?
Rolf Kalbermatter replied to Zyl's topic in Real-Time
Well, as far as the syslog functionality itself is concerned, we simply use the NI Systems Engineering provided library that you can download through VIPM. It is a pure LabVIEW VI library using the UDP functions, and it should work on all systems. As to having a system console on Linux, Linux actually comes with many ways to do that, so I'm not sure why it couldn't be done. The problem under Linux is not that there are none, but rather that there are so many different solutions that NI maybe decided not to pick any specific one, as Unix users can be pretty particular about what they want to use and easily find everything else simply useless.
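For readers curious what such a syslog message looks like on the wire, here is a minimal C sketch (POSIX sockets) that sends an RFC 3164 style "<PRI>TAG: message" datagram to UDP port 514. This is only an illustration of the underlying mechanism, not the NI library itself; the host, facility and severity values in the usage comment are placeholders.

/* Minimal sketch of a syslog-style UDP sender (RFC 3164 style payload).
   Not the NI syslog library, just an illustration of the mechanism. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

static int send_syslog(const char *host, int facility, int severity,
                       const char *tag, const char *msg)
{
    char packet[1024];
    struct sockaddr_in addr;
    int pri = facility * 8 + severity;          /* PRI = facility * 8 + severity */
    int len = snprintf(packet, sizeof(packet), "<%d>%s: %s", pri, tag, msg);
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    if (sock < 0 || len < 0)
        return -1;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(514);                 /* standard syslog UDP port */
    addr.sin_addr.s_addr = inet_addr(host);
    sendto(sock, packet, len, 0, (struct sockaddr *)&addr, sizeof(addr));
    close(sock);
    return 0;
}

/* usage (placeholder address): send_syslog("192.168.1.10", 1, 6, "cRIO-app", "startup complete"); */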
-
How do you debug your RT code running on Linux targets ?
Rolf Kalbermatter replied to Zyl's topic in Real-Time
We don't use VeriStand, but we definitely use syslog in our RT applications quite extensively. In fact we use a small Logger class library that implements either file or syslog logging. I'm not sure what you would consider a pain about getting such a solution working in VeriStand, though. Somewhere during your initialization you configure and enable the syslog (or file log), and then you simply have a Logger VI that you can drop in anywhere you want. Ours is a polymorphic VI, with one instance acting as a replacement for the General Error Handler.vi and the other simply reporting arbitrary messages to the logging engine. After that you can use any of the various syslog viewer applications to get a live update of the messages on your development computer or anywhere else on the local network.
-
View Executable on Web browser
Rolf Kalbermatter replied to Cat's topic in Remote Control, Monitoring and the Internet
That sounds a bit optimistic considering that all major web browsers nowadays disable Flash by default and some have definite plans to remove it altogether. The same goes for the Silverlight plugin, which Microsoft stopped developing years ago; support today is marginal (security fixes only).
-
That is not entirely true, depending on your more or less strict definition of a garbage collector. You are correct that LabVIEW allocates and deallocates memory blocks explicitly, rather than depending on a garbage collector that periodically scans all memory objects and determines what can be deallocated. However, LabVIEW does do some memory retention on the diagram: blocks are not automatically deallocated when they go out of scope, because they can then simply be reused on the next iteration of a loop or the next run of the VI.

There is also some low-level memory management where LabVIEW usually does not return memory to the system heap when it is released inside LabVIEW, but instead holds onto it for future memory requests. This part has been changed several times in the history of LabVIEW. Early versions had a very elaborate memory manager scheme built in, at some point even using a third-party memory manager called Great Circle, both to improve on the rather simplistic memory management of Windows 3.1 (and MacOS Classic) and to allow much more fine-grained debugging of memory usage. More recent versions of LabVIEW have shed much of these layers and rely far more on the memory management capabilities of the underlying host platform. For good reasons! Creating a good, performant and, most importantly, flawless memory manager is an entire art in itself.
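As a toy illustration of the "hold onto released memory for future requests" idea described above (this is in no way LabVIEW's actual memory manager), a fixed-size block allocator with a free list might look like this in C:

/* Toy free-list allocator: released blocks are kept for reuse instead of
   being returned to the OS. Illustration only, not LabVIEW's memory manager. */
#include <stdlib.h>

typedef struct Block { struct Block *next; } Block;

#define BLOCK_SIZE 4096
static Block *free_list = NULL;

void *pool_alloc(void)
{
    if (free_list) {               /* reuse a previously released block */
        Block *b = free_list;
        free_list = b->next;
        return b;
    }
    return malloc(BLOCK_SIZE);     /* otherwise fall back to the system heap */
}

void pool_release(void *p)
{
    Block *b = (Block *)p;         /* keep the block around for future requests */
    b->next = free_list;
    free_list = b;
}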
-
A Rolf Kalbermatter Article - External Code in LabVIEW
Rolf Kalbermatter replied to Tomi Maila's topic in Announcements
I have recently resurrected these articles under https://blog.kalbermatter.nl -
Communicate with Omron E5CC using Modbus
Rolf Kalbermatter replied to Nathan_MerlinIC's topic in Hardware
That's the status return value of the viRead() function and is meant as a warning: "The number of bytes transferred is equal to the requested input count. More data might be available." And as you can see, viRead() is called for the session COM12 with a request for 0 bytes, so something is not set up quite right, since a read of 0 bytes is pretty much a no-operation.
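For illustration, a hedged C sketch of how such a read looks with the VISA C API, assuming a session already opened on the serial port (e.g. a resource like "ASRL12::INSTR" for COM12): the point is to request a real byte count, and to treat the VI_SUCCESS_MAX_CNT warning mentioned above simply as "buffer filled, more data may still be pending".

/* Sketch of a VISA serial read with a non-zero byte count. */
#include <visa.h>

ViStatus read_reply(ViSession instr)
{
    ViChar   buffer[256];
    ViUInt32 retCount = 0;
    /* request a real byte count; a count of 0 reads nothing at all */
    ViStatus status = viRead(instr, (ViBuf)buffer, sizeof(buffer), &retCount);

    if (status == VI_SUCCESS_MAX_CNT) {
        /* buffer completely filled, more data might be waiting in the port */
    }
    return status;
}
-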
Then he would drown
-
DLL with Bundle input crashes
Rolf Kalbermatter replied to Alexander Kocian's topic in Calling External Code
Something about the __int64 sounds very wrong! In fact the definition of the structure should really look like this, with the #pragma pack() statements replaced by the correct LabVIEW header files:

#include "extcode.h"

// Some stuff

#include "lv_prolog.h"
typedef struct {
    int32 dimSize;
    double elt[1];
} TD2;
typedef TD2 **TD2Hdl;

typedef struct {
    TD2Hdl elt[1];
} TD1;
#include "lv_epilog.h"

// Remaining code

This is because on 32-bit LabVIEW for Windows structures are packed, but on 64-bit LabVIEW for Windows they are not. The "lv_prolog.h" file sets the correct packing instruction depending on the platform, as defined in "platdefines.h", which is included from "extcode.h".

The __int64 only seems to solve the problem, but by accident. It works by virtue of LabVIEW only using the lower 32 bits of that number anyway, and the fact that x86 CPUs are little endian, so the lower 32 bits of the int64 happen to sit in the same location as the full 32-bit value LabVIEW really expects. But it will go catastrophically wrong if you ever try to compile this code for 32-bit LabVIEW.

And if you call any of the LabVIEW manager functions defined in "extcode.h", such as NumericArrayResize(), you will also need to link your project with labview.lib (or labviewv.lib for the 32-bit case) from the cintools directory. As long as you only use datatypes and macros from "extcode.h", this does not apply.
-
DLL with Bundle input crashes
Rolf Kalbermatter replied to Alexander Kocian's topic in Calling External Code
#pragma pack(push,1)
typedef struct {
    int dimSize;
    double elt[1];
} TD2;
typedef TD2 **TD2Hdl;

typedef struct {
    TD2Hdl elt1;
} TD1;
#pragma pack(pop)

extern "C" __declspec(dllexport) MgErr pointertest(TD1 *arg1);

MgErr pointertest(TD1 *arg1)
{
    if (!arg1->elt1 || (*arg1->elt1)->dimSize < 2)
        return mgArgErr;
    (*arg1->elt1)->elt[0] = 3.1;
    (*arg1->elt1)->elt[1] = 4.2;
    return noErr;
}

Defensive programming would use at least this extra code. Note the extra test that the handle is not NULL before testing the dimSize, since the array handle itself can be legitimately NULL if you happen to assign an empty array to it on the diagram. Alternatively, you should really make sure to properly resize the array with the LabVIEW manager functions before attempting to write into it, just as ned mentioned:

MgErr pointertest(TD1 *arg1)
{
    MgErr err = NumericArrayResize(fD, 1, (UHandle*)&arg1->elt1, 2);
    if (err == noErr)
    {
        (*arg1->elt1)->elt[0] = 3.1;
        (*arg1->elt1)->elt[1] = 4.2;
    }
    return err;
}
-
I'm afraid your conclusion is very true, especially if you only plan to build this one system. It would probably be a different situation if you had to build a few dozen, but that is not how this usually works.
-
The IMAQ datatype is a special thing. It is in fact a refnum that "refers" to a memory location holding the entire image information, which is not just the pixel data itself but also additional information such as ROI, scaling, calibration, etc. Just as passing a file refnum to a file function does not pass a copy of the file to the function, passing an IMAQ refnum does not create a copy of the image data. At most it creates a copy of the refnum (and increments an internal refcount in the actual image data structure). The IMAQ control does the same: it increases the refcount so the image stays in memory, and decreases the refcount of the previous image when another IMAQ refnum is written into the control.

There is a good reason NI decided to use a refnum type for images. If LabVIEW operated on them by value, just as with other wire data, you would be hard-pressed to process even moderately sized images on a normal computer. It would also get terribly slow if, at every wire branch, LabVIEW had to create a new by-value image and copy all of the potentially 100 MB and more of data from the original image into that copy.

If you wire a true constant to the destroy all? input of the IMAQ Destroy function, this simply tells IMAQ to destroy each and every image currently allocated by IMAQ. If you do that, you can in fact save yourself the trouble of calling this function in a loop to destroy each IMAQ refnum individually. But yes, it will destroy any and every IMAQ refnum currently in memory, so there is no surprise that your IMAQ control suddenly turns blank when the image it displays is yanked out of memory under its feet.

And why would they have added this option to IMAQ Destroy? Well, it is pretty common to create temporary images during image analysis and give them a specific name. If they don't exist they are created, and once they are in memory they are looked up by their name and reused. So you typically don't want to destroy them after every analysis round, but just let them hang around in memory to be reused on the next execution of the analysis routine. To properly destroy them at the end of the application, you would then have to store their refnums in some queue or buffer somewhere and pass each of them explicitly to the IMAQ Destroy function just before exiting. Instead you can simply call IMAQ Destroy once with that boolean set to true, to destroy any IMAQ refnums that were left lingering around.
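To make the refnum idea concrete, here is a toy refcounting sketch in C (not the actual IMAQ implementation, just an illustration): copying a reference only increments a counter, the pixel data itself is never duplicated, and the data is only freed when the last reference is released.

/* Toy refcounted image reference; illustration only. */
#include <stdlib.h>

typedef struct {
    int            refcount;
    size_t         size;
    unsigned char *pixels;       /* potentially hundreds of MB */
} Image;

Image *image_retain(Image *img)  /* "copying" the refnum */
{
    img->refcount++;
    return img;
}

void image_release(Image *img)
{
    if (--img->refcount == 0) {  /* last reference gone: free the data */
        free(img->pixels);
        free(img);
    }
}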
-
There is a reason the NI interfaces are so expensive. You need to be a member of the Profibus International group to receive all the necessary information and to be allowed to sell products that claim to be Profibus compatible, and that costs a yearly fee. While the hardware is indeed based on an RS-485 physical layer, there are specific provisions in the master hardware that must guarantee certain things like proper failure handling and correct protocol timing.

There have been two open source projects that tried to implement a Profibus master. One is the pbmaster project, which seems to have completely disappeared from the net and was a Linux based driver library to run with cheap RS-232 to RS-485 converter interfaces or specific serial controller chips. I suppose with enough effort there is a chance that one might be able to get this to work on an NI Linux based cRIO, but it won't be trivial. The main part of this project was a kernel device driver with a hardware specific component that interfaced directly to the serial port chip. Getting this to talk to a normal RS-485 interface on the cRIO (either as a C module or through the built-in RS-485 interface that some higher-end cRIOs have) would certainly require some tinkering with the C sources.

The other project is ProfiM on SourceForge, which seems to have been more or less abandoned since 2004, with the exception of an update in 2009 that added a win2k/xp device driver. This project is however very Windows specific, and there is no chance to adapt it to a cRIO without more or less a complete rewrite of the software.

Unfortunately this is about as far as it goes for cheap Profibus support. While the binary protocol for Profibus is actually documented and you can download the specs for it, or study the source code of these two projects to get an idea, the Profibus protocol timing is critical enough that it will be difficult to get right with a purely user-space implementation, such as using VISA to talk to a standard interface. Certain aspects of the protocol almost certainly need to be implemented in kernel space to work reliably enough. Another alternative would be to implement the Profibus protocol on the FPGA in the cRIO, but that is also a major development effort.
-
LabVIEW creates a fixed set of GDI objects on startup and then more as needed when it draws something on the screen, and also offscreen when you work with the Picture control or print something. In my work with LabVIEW I haven't really seen LabVIEW itself leaking GDI objects for quite a few years. However, if you interface to external components such as ActiveX, .Net or DLL functions, that of course doesn't mean anything: they can create and fail to deallocate GDI objects as much as they like. DETT can only look into LabVIEW resources itself, not into resources allocated by those external components. The way to go after this is to get an idea of the rate of GDI object increase and try to relate that to certain operations in your application. Then start selectively disabling code parts until the object count no longer increases steadily. From there, divide and conquer by disabling smaller and smaller parts of the code until you get a pretty good idea of the location.
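One way to watch that rate from code is the standard Win32 call GetGuiResources(); a small sketch like the following, called before and after suspect operations, shows whether the GDI (or USER) object count of the process keeps climbing. Link against user32.lib.

/* Log the current GDI and USER object counts of this process. */
#include <windows.h>
#include <stdio.h>

void log_gdi_count(const char *label)
{
    DWORD gdi  = GetGuiResources(GetCurrentProcess(), GR_GDIOBJECTS);
    DWORD user = GetGuiResources(GetCurrentProcess(), GR_USEROBJECTS);
    printf("%s: GDI objects = %lu, USER objects = %lu\n", label, gdi, user);
}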
-
Industrial EtherNet (EtherNet/IP)
Rolf Kalbermatter replied to siva's topic in Remote Control, Monitoring and the Internet
You should be more specific. Various people have attached code to their postings. And the initial library from siva (while the links in his earlier posts on lavag.org got trashed by the two crashes the site had in its 15 or so years of operation) has been posted to GitHub, as he wrote in this post. You just need to advance to the second page of this thread and read it in its entirety. -
That sounds like a pretty lame excuse. The FPGA has very little to do with the fact that DAQmx wouldn't be portable to cRIO, and in fact it is available on various cRIO systems nowadays. Please note that the LabVIEW version that KB article refers to is 7.1, and it shows a DAQmx 9.8 dialog, while the current DAQmx version is 16.0. The problem is that the USB DAQ devices are not supported by DAQmx on cRIO systems. The reasons for that are probably manifold, but the fact that every type of subdriver in DAQmx takes a considerable effort to support, and that cRIO systems already have alternative DAQ options, most likely plays an important role.

Trying to get this working yourself by communicating at the USB Raw level is an exercise in vain. First you would need the actual USB protocol description for the USB-6366. With the exception of a few very simple low cost devices, NI has never published protocol specs for those devices. There was a tutorial-like article in the past that explained the creation of a USB Raw driver in LabVIEW for one of the USB-900x devices, but I can't find it right now. However, those are low speed and very simple devices that likely do not use features like USB interrupt pipes or any transfer modes other than bulk transfers. With the USB-6366 this is very likely different, as you can't support continuous and reliable multi-megasample-per-second transfers through a simple bulk transfer pipe. You typically have to use (multiple) isochronous endpoints for that, and likely some interrupt pipe endpoints too for the signaling and protocol handshake.

This document points out that you would not need to do the inf driver wizard magic for non-Windows targets. On the Mac it just works if the device is not already claimed by a driver, and on Linux you have to make sure that it gets mounted as a usbfs device. This should also apply to the Linux RT based cRIO targets. If the cRIO is however one of the older VxWorks or Pharlap ETS based devices, you can forget about it immediately: they don't support USB Raw communication at all. The only way to get a custom USB device working on them is to write a custom USB kernel driver for those systems, which requires the according development system for Pharlap ETS or VxWorks. That is a major investment on its own, not even accounting for the trouble of getting acquainted with kernel driver development on those highly specialized OSes.

But inf driver wizard or not, the real work only starts after that. You have to use VISA functions to write the correctly formatted data packets to the different communication endpoints in the device and receive the answers from it. This is very tedious low level work for any non-trivial USB device, even if you happen to have a complete bit-for-bit protocol description for it; trying to do it without such a description is a sure way to insanity. These protocols are usually not just text commands that you send to the device like with traditional GPIB or RS-232 instruments. The exception are USBTMC devices, which implement a higher level service that allows sending SCPI and IEEE-488.2 compatible string commands. For USBTMC devices you don't need to do anything special in terms of VISA communication: you simply address them with a USB::INSTR resource name instead of USB::RAW, and then communicate with them like any other SCPI/IEEE-488.2 style device. But there is no reason for NI to implement USBTMC for their DAQ devices, and consequently they haven't done so.
This saves a very complex command interpreter in the device and therefore makes it possible to use a smaller and cheaper embedded processor. Also, by using a fully binary protocol, the USB bandwidth is better utilized.
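To illustrate how simple the USBTMC case is compared to USB Raw, here is a hedged C sketch using the VISA C API to query a USBTMC instrument with *IDN?. The resource string is a placeholder; use the one your VISA installation actually reports for the device.

/* Query a USBTMC instrument through VISA with a SCPI *IDN? command. */
#include <visa.h>
#include <stdio.h>

int query_idn(void)
{
    ViSession rm, instr;
    ViChar    reply[256];
    ViUInt32  retCount = 0;

    if (viOpenDefaultRM(&rm) < VI_SUCCESS)
        return -1;
    /* placeholder resource name of a USBTMC (USB::INSTR) device */
    if (viOpen(rm, "USB0::0x1234::0x5678::SERIAL::INSTR", VI_NULL, VI_NULL, &instr) < VI_SUCCESS) {
        viClose(rm);
        return -1;
    }
    viWrite(instr, (ViBuf)"*IDN?\n", 6, &retCount);
    viRead(instr, (ViBuf)reply, sizeof(reply) - 1, &retCount);
    reply[retCount] = '\0';
    printf("Instrument: %s\n", reply);

    viClose(instr);
    viClose(rm);
    return 0;
}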
-
TCP write / read problem, disable write buffer ?
Rolf Kalbermatter replied to Zyl's topic in LabVIEW General
That still won't work as intended by the OP. As long as the receiving socket has free buffer space it will accept and acknowledge packets, so the sender socket never times out on a write! This is not UDP, where a message datagram is considered a unique object that is delivered to the receiver as a single unit, even if the receiver requests a larger buffer and even if there are more datagrams in the socket buffer that would fit into the requested buffer. TCP/IP is a stream protocol: no matter how many small data packets you send (leaving Nagle aside for a moment), as long as the receiving socket has buffer space available it will copy the data into that buffer, appending it to whatever is already waiting there, and the receiver can then read it in one single go, or in whatever sized chunks it desires.

So if the receiver has a 4 kB buffer, it will cache about 53 packets of 76 bytes each before its receive window closes and the sender socket stops accepting further packets. Only then will the write start to time out on the sender side, after having filled its own outgoing socket buffer too. And then you need to read those 53 packets at the client before you get the first reasonably recent one. That does not sound like a very reliable throttling mechanism at all! Of course you could make the sender close the connection once it sees a TCP Write timeout error, which will eventually give a "connection aborted by peer" error on the receiver side. But assuming the 4 kB receive buffer from the example above and a 100 ms send interval, it will take more than 5 s for the sender to notice that the receiver is not reading the messages anymore and to abort. If the receiver starts reading more data within that time, it will still see old data and has to read all of it, until the TCP Read function times out, to be sure it has the latest value.

And that assumes a 4 kB buffer. Typical socket implementations nowadays use 64 kB buffers and more. Modern Windows versions actually use an adaptive buffer size, growing the buffer beyond the configured default as needed for fast data transfers. That should not come into play here, since sending 76 byte chunks every few ms is not fast data at all, but it shows that the receive buffer size of a socket is, on many modern systems, more of a recommendation than a hard limit.
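The stream behaviour described above is easy to demonstrate with plain POSIX sockets: several small writes by the peer can come out of a single read on the receiving side, because TCP preserves byte order but not message boundaries. A minimal sketch, assuming an already connected socket:

/* One recv() may return the contents of many small sends at once,
   or only part of one - TCP is a byte stream, not a message protocol. */
#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>

void drain_socket(int sock)
{
    char buf[4096];
    ssize_t n = recv(sock, buf, sizeof(buf), 0);
    if (n > 0)
        printf("read %zd bytes in one go\n", n);
}
-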
Any difference between application 64 bits vs 32 bits?
Rolf Kalbermatter replied to ASalcedo's topic in LabVIEW General
The quick answer is: it depends! And any more elaborate answer boils down to the same conclusion.

Basically the single biggest advantage of a 64-bit executable is when your program uses lots of memory. With modern computers having more than 4 GB of memory, it is unlikely that your application is trashing the swap file substantially, even if you get towards the 2 GB memory limit for 32-bit applications, so I would not expect any noticeable performance improvement either. But 64-bit may allow you to process larger images that are impossible to work with in 32-bit.

Other than that there are very few substantial differences. In terms of performance you should definitely not expect a significant change. Some CPU instructions are quicker in 64-bit mode, since they can process 64 bits in a single go where 32-bit mode would require two CPU cycles. But that advantage is usually cancelled out by the fact that all addresses are also 64 bits wide, so a single address load moves twice the amount of data and the caches fill up twice as fast. This might not apply to specially optimized 64-bit code sections for a particular algorithm, but your typical LabVIEW application does not consist of specially crafted algorithms that make optimal use of 64-bit mode; it is instead a huge collection of pretty standard routines that simply do their thing and will basically operate exactly the same in both 32-bit and 64-bit mode.

If your application is sluggish, this is likely because of either hardware that is simply not able to perform the required operations within the time you would wish, or, maybe more likely, some programming errors: unthrottled loops, extensive and unnecessary disk I/O, frequent rebuilding of indices or selection lists, building of large arrays by appending one element at a time, or synchronization issues. So far just about every application I have looked at because of performance troubles did one or more of these things, with maybe one single exception where it simply had to process really huge amounts of data such as images. Trying to solve such problems by throwing better hardware at them is a suboptimal solution, but changing to 64-bit to solve them is a completely wasteful exercise.
-
TCP write / read problem, disable write buffer ?
Rolf Kalbermatter replied to Zyl's topic in LabVIEW General
You're definitely trying to abuse a feature of TCP communication here in order to fit square pegs into round holes. Your requirements make little sense:

1) You don't care about losing data from the sender (not sending it is also losing it), but you insist on using a reliable transport protocol (TCP/IP).
2) The client should control what the server does, but it does not do so by explicitly telling the server; instead you rely on the buffer-full condition at the client side propagating back to the server, hoping that that will work.

For 1), the use of UDP would definitely make sense. For 2), the buffering in TCP/IP is neither meant for nor reliable for this purpose. The buffering in TCP/IP is designed to never allow the possibility that data gets lost on the way without generating an error on at least one side of the connection. Its design is in fact pretty much orthogonal to your requirement of using it as a throttling mechanism.

While you could set the buffer size to sort of make it behave the way you want, by only allowing buffer space for one message on both the client and server side, this is a pretty bad idea in general. First, you would still have at least two buffered messages in flight, one stored in the client socket driver and the other in the server socket driver. Allocating only half a message as buffer size, to have just one full message stored, would likely not work at all and generate errors all the time. But it gets worse: a particular socket implementation is not required to honor your request exactly. It is required to guarantee that a message up to the buffer size cannot get corrupted or spuriously lost due to a buffer overflow, but it is absolutely free to reserve a bigger buffer than you specify, for performance reasons for instance, or by always reserving a buffer whose size is a power of 2. This approach also requires your client to know in advance what the message length is, and limits your protocol to only work as intended when every transmission is exactly that size. And believe me, at some point in the future you will change that message length on the server side and forget to make the matching change on the client side.

Sit down and think about your intended implementation. It may seem like more work to implement an explicit client-to-server message that tells the server to start or stop sending periodic updates (a single command with the interval as parameter would already be enough; an interval of -1 could mean to stop sending data), but it is a much more reliable and future-proof implementation than what you describe. Jumping through hoops in order to fit square pegs into round holes is never a solution.
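To see how loosely a socket implementation follows a requested buffer size, a small POSIX sketch like this is instructive: ask for a tiny receive buffer and read back what the stack actually granted (Linux, for example, typically doubles the requested value to account for bookkeeping overhead).

/* Requested socket buffer sizes are a hint, not a guarantee. */
#include <sys/socket.h>
#include <stdio.h>

void shrink_receive_buffer(int sock)
{
    int requested = 4096, actual = 0;
    socklen_t len = sizeof(actual);

    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested));
    getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &actual, &len);
    printf("asked for %d bytes, got %d bytes\n", requested, actual);
}
-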
The multiple icons in a single icon resource are only meant for different resolutions; they all represent the same icon. If the Windows Explorer needs to display an icon, it retrieves the icon resource and looks for the needed resolution (e.g. 32 x 32 pixels, or 16 x 16 for a small icon); if it can't find it, it takes the one closest to that resolution and rescales it, which often looks suboptimal. In order to have multiple icons in an executable you have to add multiple icon resources to the executable, each with its own resource identifier (the number you have to put behind the comma in the registry). The application builder does not provide a means to do that, but there are many resource editors out there, both as part of development systems such as Visual Studio or LabWindows/CVI and as standalone tools. If you look for standalone versions, beware: many download sites for such tools are nowadays less than honest and either pack lots of adware into the download or outright badware that you definitely do not want on your computer.
-
It's simple: how would you implement a multi-selection case structure using strings that should select between "a" .. "f" and "f" .. "z"? One of the two ends has to be non-inclusive if you want a string like "flying" to match one of the ranges too. It would be impractical to let the string selection only work when the incoming string matches exactly (e.g. "f1" would not match anything in the above sample!).
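A C analogy of such half-open ranges, assuming plain strcmp() ordering and taking the upper end as the non-inclusive one, makes the behaviour explicit:

/* Half-open string range: matches strings >= lo and < hi. */
#include <string.h>

int in_range(const char *s, const char *lo, const char *hi)
{
    return strcmp(s, lo) >= 0 && strcmp(s, hi) < 0;   /* upper end excluded */
}

/* in_range("flying", "a", "f") == 0, in_range("flying", "f", "z") == 1 */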
-
Well, as already mentioned, it is hard to say anything specific from just watching that spastic movie. I haven't seen spontaneous execution highlighting myself, but your mentioning that shutting down the application can take very long and usually crashes supports the possibility that you have Call Library Nodes in your application that are not correctly configured and that consistently trash your memory in a certain way when they get called.

Buffer overflows are the most common problem here: you do not provide (large enough) buffers to the Call Library Node parameters that the shared library wants to write information into. This results in corrupted memory, and the possible outcome can be anything from an immediate crash to a delayed crash at a later, seemingly unrelated point in time, including when you shut down LabVIEW and it stumbles over trashed pointers and data objects while trying to clean up memory. It could also sneakily overwrite memory that is used in calculations in your application, producing slightly to wildly different results than you expect, or, as in this case, write over the memory that controls execution highlighting.

So check your application for VIs containing Call Library Nodes (and while the NI drivers do use quite a lot of Call Library Nodes, you should disregard them in a first scan; they are generally very well debugged and tried many million times, so it is unlikely that something is wrong in that part unless you somehow got a corrupted installation). Once you have located the parts of your application that might be the culprit, start disabling sections of your code using the conditional disable structure until you don't see any strange behaviour anymore, including no crash or similar during the exit of LabVIEW.
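A minimal sketch of the buffer contract that goes wrong in such cases, using a hypothetical exported function: the DLL writes into caller-provided memory, so the diagram must allocate a string or array of at least len bytes before the call, otherwise the function scribbles over memory it does not own.

/* Hypothetical DLL export: the callee trusts that 'buffer' really has
   room for 'len' bytes; the caller (the LabVIEW diagram) must allocate it. */
#include <string.h>

__declspec(dllexport) void GetDeviceName(char *buffer, int len)
{
    if (!buffer || len < 1)
        return;
    strncpy(buffer, "SimulatedDevice-01", len - 1);
    buffer[len - 1] = '\0';
}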
-
This is basically asking the wrong question in the wrong way. The LabVIEW diagram is always drawn as a vector graphic, but the icons are bitmaps. But yes, the coordinates on the diagram are pixels, not some arbitrary high-resolution unit like mixels (micrometer resolution or whatever). Changing that in current LabVIEW would be a major investment that is not going to happen.