Everything posted by Rolf Kalbermatter

  1. That still won't work as intended by the OP. As long as the receiver socket has free buffer space it will accept and acknowledge packets, so the sender socket never times out on a write! This is not UDP, where a message datagram is considered a unique object that will be delivered to the receiver as a single unit, even if the receiver requests a larger buffer, and even if there are in fact more message datagrams in the socket buffer that would fit into the requested buffer. TCP/IP is a stream protocol. No matter how many small data packets you send (not talking about Nagle for a moment), as long as the receiver socket has buffer space available, it will copy them into that buffer, appending to any data already waiting there, and the receiver can then read it all in one single go, or in any sized parts it desires. So if the receiver has a 4 kB buffer, it will cache about 53 packets of 76 bytes each from the sender before sending a NAK to the sender socket for any more packets. Only then will the write start to time out on the sender side, after having filled its own outgoing socket buffer too. And then you need to read those 53 packets at the client before you get the first reasonably recent packet. That does not sound like a very reliable throttling mechanism to me at all!
Of course you could make the sender close the connection once it sees a TCP Write timeout error, which will eventually give a "connection aborted by peer" error on the receiver side, but assuming the 4 kB receive buffer from the example above and a 100 ms interval for sending packets, it will take more than 5 s for the sender to see that the receiver is not reading the messages anymore and be able to abort. If the receiver starts to read more data in that time, it will still see old data and has to read it all until the TCP Read function times out to be sure it has the latest value. And that assumes a 4 kB buffer. Typical socket implementations nowadays use 64 kB buffers and more. Modern Windows versions actually use an adaptive buffer size, meaning the buffer is increased beyond the configured default value as needed for fast data transfer. That is unlikely to come into play here, as sending 76 byte chunks of data every few ms is not fast data at all, but it shows you that the receive buffer size for a socket is on many modern systems more a recommendation than a hard limit.
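To illustrate that last point, here is a minimal POSIX sockets sketch (not LabVIEW code) that requests a small receive buffer and reads back what the OS actually granted; on Linux, for instance, the kernel typically rounds the request up and doubles it for bookkeeping overhead:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int requested = 4096, actual = 0;
        socklen_t len = sizeof(actual);

        /* Ask for a 4 kB receive buffer, then query what was really set. */
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested));
        getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &actual, &len);
        printf("requested %d bytes, got %d bytes\n", requested, actual);

        close(fd);
        return 0;
    }

The reported value is usually larger than the request, which is exactly why treating the socket buffer as a precise throttling mechanism is unreliable.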
  2. The quick answer is: it depends! And any more elaborate answer boils down to the same conclusion! Basically the single biggest advantage of a 64-bit executable is if your program uses lots of memory. With modern computers having more than 4 GB of memory, it is unlikely that your application is thrashing the swap file substantially even if you get towards the 2 GB memory limit for 32-bit applications, so I would not expect any noticeable performance improvement either. But it may allow you to process larger images that are impossible to work with in 32 bit. Other than that there are very few substantial differences. Definitely in terms of performance you should not expect a significant change at all. Some CPU instructions are quicker in 64-bit mode since they can process 64 bits in one single go, while in 32-bit mode this would require 2 CPU cycles. But that advantage is usually made insignificant by the fact that all addresses are also 64 bits wide, so a single address load instruction moves double the amount of data and the caches also fill up twice as fast. This of course might not apply to specially optimized 64-bit code sections for a particular algorithm, but your typical LabVIEW application does not consist of specially crafted algorithms that make optimal use of 64-bit mode; instead it is a huge collection of pretty standard routines that simply do their thing and will basically operate exactly the same in both 32-bit and 64-bit mode. If your application is sluggish, this is likely because of either hardware that is simply not able to perform the required operations within the time you would wish, or, maybe more likely, some programming errors, like un-throttled loops, extensive and unnecessary disk IO, frequent rebuilding of indices or selection lists, building of large arrays by appending a new element every time, or synchronization issues. So far just about every application I have looked at because of performance troubles did one or more of the aforementioned things, with maybe one single exception where it simply was meant to process really huge amounts of data like images. Trying to solve such problems by throwing better hardware at them is a non-optimal solution, but changing to 64 bit to solve them is a completely wasteful exercise.
  3. You're definitely trying to abuse a feature of TCP communication here in order to fit square pegs into round holes. Your requirements make little sense. 1) You don't care about losing data from the sender (not sending it is also losing it), but you insist on using a reliable transport protocol (TCP/IP). 2) The client should control what the server does, but it does not do so by explicitly telling the server; instead you rely on the buffer-full condition at the client side propagating back to the server and hope that this will work. For 1), the use of UDP is definitely useful. For 2), the buffering in TCP/IP is neither meant for nor reliable for this purpose. The buffering in TCP/IP is designed to never allow the possibility that data gets lost on the way without generating an error on at least one side of the connection. Its design is in fact pretty much orthogonal to your requirement to use it as a throttling mechanism.
While you could set the buffer size to sort of make it behave the way you want, by only allowing a buffer for one message on both the client and server side, this is a pretty bad idea in general. First, you still would have to send at least two buffers, with one being stored in the client socket driver and the other in the server socket driver. Only allocating half the message as buffer size, to only have one full message stored, would likely not work at all and would generally generate errors all the time. But it gets worse: any particular socket implementation is not required to honor your request exactly. What it is required to do is to guarantee that a message up to the buffer size cannot get corrupted or spuriously lost due to some buffer overflow, but it is absolutely free to reserve a bigger buffer than you specify, for performance reasons for instance, or by always reserving a buffer whose size is a power of 2. This also requires your client to know in advance what the message length is, limits your protocol to only work in the intended way when every transmission is exactly this size, and believe me, at some time in the future you will go and change that message length on the server side and forget to make the corresponding correction on the client side.
Sit down and think about your intended implementation. It may seem like more work to implement an explicit client-to-server message that tells the server to start or stop sending periodic updates (a single command with the interval as parameter would already be enough; an interval of -1 could then mean to stop sending data, see the sketch below), but this is a much more reliable and future-proof implementation than what you describe. Jumping through hoops in order to fit square pegs into round holes is never a solution.
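As a rough illustration of such an explicit command, here is a hypothetical wire format in C; the command id, field names and layout are made up for this example, not part of any existing protocol:

    /* Hypothetical throttling command: the client tells the server how often
       to send updates instead of relying on TCP buffer back-pressure. */
    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>

    #define CMD_SET_UPDATE_INTERVAL 0x0001   /* made-up command id */

    typedef struct {
        uint16_t command;      /* which command */
        int32_t  interval_ms;  /* update period in ms, -1 means stop sending */
    } UpdateCommand;

    /* Serialize the command in network byte order before writing it to the
       TCP connection (e.g. with TCP Write on the LabVIEW side). */
    size_t pack_update_command(const UpdateCommand *c, uint8_t *buf)
    {
        uint16_t cmd = htons(c->command);
        uint32_t iv  = htonl((uint32_t)c->interval_ms);
        memcpy(buf, &cmd, sizeof cmd);
        memcpy(buf + sizeof cmd, &iv, sizeof iv);
        return sizeof cmd + sizeof iv;
    }

The server simply parses this fixed-size command and starts, re-times or stops its sending loop accordingly, independent of any socket buffer behavior.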
  4. The multiple icons in a single icon resource are only meant for different resolutions, but really all represent the same icon. If the Windows Explorer needs to display an icon it retrieves the icon resource and looks for the needed resolution (e.g. 32 x 32 pixels, or 16 x 16 for a small icon), and if it can't find it, it retrieves the one closest to that resolution and rescales it, which often looks suboptimal. In order to have multiple icons in an executable you have to add multiple icon resources to the executable, each with its own resource identifier (the number you have to put behind the comma in the registry). The application builder does not provide a means to do that, but there are many resource editors out there, both as part of development systems such as Visual Studio or LabWindows/CVI as well as standalone versions. If you look for standalone versions, beware however: many download sites for such tools nowadays are less than honest and either pack lots of adware into the download or outright malware that you definitely do not want to have on your computer.
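A small Windows API sketch of how that number selects among multiple icon resources in a file; shell32.dll is just used here as a convenient example file that contains many icons:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        const char *file = "C:\\Windows\\System32\\shell32.dll"; /* example file */

        /* With index -1 and no HICON outputs, ExtractIconEx just reports how
           many icon resources the file contains. */
        UINT total = ExtractIconExA(file, -1, NULL, NULL, 0);
        printf("%s contains %u icon resources\n", file, total);

        /* A non-negative index picks one specific icon resource, the same
           number you would put behind the comma in a registry icon path. */
        HICON large = NULL, small = NULL;
        if (ExtractIconExA(file, 2, &large, &small, 1) > 0)
        {
            /* ... use the icons ... */
            if (large) DestroyIcon(large);
            if (small) DestroyIcon(small);
        }
        return 0;
    }

An executable built with the application builder only contains icon resource index 0; a resource editor is needed to add further ones.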
  5. It's simple: how would you implement a multi-selection case structure using strings that should select between "a" .. "f" and "f" .. "z"? One of the two ends has to be non-inclusive if you want to allow "flying" to match a selection too. It would be impractical to let the string selection only work if the incoming string matches exactly (e.g. "f1" would not match anything in the above example!).
  6. Well, as already mentioned, it is hard to say anything specific here from just watching that jerky movie. I haven't seen spontaneous execution highlighting myself, but your mentioning that shutting down the application can take very long and usually crashes would support the possibility that you have Call Library Nodes in your application that are not correctly configured and that consistently trash your memory in a certain way when they get called. Buffer overflows are the most common problem here, where you do not provide (large enough) buffers to the Call Library Node parameters into which the shared library wants to write information. This results in corrupted memory, and the possible outcome can be anything from an immediate crash to a delayed crash at a later, seemingly unrelated point in time, including when you shut down LabVIEW and it stumbles over trashed pointers and data objects while trying to clean up the memory. It could also sneakily overwrite memory that is used in calculations in your application and in that way produce slightly to wildly different results than what you expect, or, as in this case, write over the memory that controls execution highlighting. So check your application for VIs containing Call Library Nodes (and while the NI drivers do use quite a lot of Call Library Nodes, you should disregard them in a first scan; they are generally very well debugged and tried many millions of times, so it is unlikely that something is wrong in that part unless you got a corrupted installation somehow). Then, when you have located the parts in your application that might be the culprit, start disabling sections of your code using the conditional disable structure until you don't see any strange happenings anymore, including no crash or similar thing during the exit of LabVIEW.
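For readers unfamiliar with the pattern, here is a minimal C sketch of the kind of caller-allocated buffer that such library functions expect; the function name is purely illustrative:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical DLL function: writes up to 'len' bytes into a buffer that
       the *caller* must provide. In LabVIEW this means initializing a string
       or byte array of at least 'len' bytes on the diagram before wiring it
       to the Call Library Node. */
    void GetDeviceName(char *buffer, int len)
    {
        strncpy(buffer, "Some device identification", len - 1);
        buffer[len - 1] = '\0';
    }

    int main(void)
    {
        char name[64];                     /* caller allocates the buffer...   */
        GetDeviceName(name, sizeof(name)); /* ...and tells the DLL how big it is */
        printf("%s\n", name);

        /* The typical bug: passing an unallocated (empty) LabVIEW string means
           the DLL receives a near zero-length buffer and scribbles past its
           end, corrupting whatever happens to lie behind it in memory. */
        return 0;
    }
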
  7. This is basically asking the wrong question in the wrong way. The LabVIEW diagram is always drawn as a vector graphic, but the icons are bitmaps. And yes, the coordinates of the diagram are pixels, not some arbitrary high resolution unit like mixels (micrometer resolution or whatever). Changing that in current LabVIEW would be a major investment that is not going to happen.
  8. We recently came across this problem. Not really Report Generation Toolkit related but in our own library to interface to Excel. Microsoft seems to have changed the interface to the Save and SaveAs methods once again in Office 2016. LabVIEW, as a statically compiled environment, implements the interface to ActiveX as a static dispatch interface at runtime; only at compile time does it reevaluate the method interface against the type library actually installed on the current computer. This is a choice that works fine in most cases and is pretty fast performance wise, but it fails if someone changes the ActiveX interface of a component and you try to call that component from LabVIEW without wanting to bother about the actual version installed on the final target system. Microsoft however does not provide compatibility methods that support the old interface, since they feel that the ActiveX dynamic dispatch capability, where a caller can find out about and construct the necessary dispatch interface at runtime, makes that unnecessary. Unfortunately ActiveX is considered a legacy technology both by Microsoft and by NI, so neither party has much interest in investing any time at all into this beyond keeping it working the same as until now. Basically there is no easy solution for this. You have to compile the app on a computer that uses the same Office version as the target computer will use. The only way around that is to actually create separate wrappers for the Save methods on two different computers, each with its own version of MS Office installed, then in your app determine the actual Office version that is installed and invoke the correct VI dynamically. A possible but painful workaround. For older Office versions I believe NI already incorporated such a fix into the Report Generation Toolkit for the Save method (and used a somewhat sneaky trick to avoid accidental recompilation of the relevant dynamic VIs during an application build, which would adapt them to whatever Office version is currently installed on the machine), but obviously this hasn't been updated for Office 2016 yet. It's a maintenance nightmare for them for sure, but the alternative of implementing runtime dynamic dispatch for ActiveX methods would be a major investment with several possible problems for existing applications in terms of performance, and that is very unlikely to happen since ActiveX has already been considered a legacy technology for about a decade.
  9. Except that, of the languages I do know, only Java, C# and C++14 support a deprecated keyword or attribute. Yes, you can do it with gcc using __attribute__((deprecated)) and with MSVC using the __declspec(deprecated) keyword too, but those are compiler toolchain specific extensions which are not portable. So for many languages it either ends up as a custom decorator in a specific library (Python or Lua can do that) or it's just a comment that nobody will read anyhow!
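In plain C the two toolchain-specific spellings mentioned above are usually hidden behind a macro, roughly like this (function names are just for illustration):

    #include <stdio.h>

    /* GCC/Clang use an attribute, MSVC uses __declspec; neither is part of
       the C standard, which is exactly the portability problem. */
    #if defined(__GNUC__) || defined(__clang__)
    #define DEPRECATED(msg) __attribute__((deprecated(msg)))
    #elif defined(_MSC_VER)
    #define DEPRECATED(msg) __declspec(deprecated(msg))
    #else
    #define DEPRECATED(msg)   /* silently ignored elsewhere */
    #endif

    DEPRECATED("use new_api() instead")
    static int old_api(int x)
    {
        return x * 2;
    }

    int main(void)
    {
        printf("%d\n", old_api(21));  /* compiler emits a deprecation warning here */
        return 0;
    }
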
  10. It would actually help if you saved the VIs for a previous version. I haven't installed LabVIEW 2016 here. However, as far as calling the function in the header file is concerned, something like this should definitely work, provided you didn't mess up the configuration of the DLL build script (which I can't check for lack of LabVIEW 2016):

    #include <stdio.h>
    #include "Password.h"

    #define BUF_LENGTH 100

    int main(void)
    {
        char input[] = "some text";
        char output[BUF_LENGTH];

        MakePassword(input, output, BUF_LENGTH);
        printf("This is the converted text: %s\n", output);
        return 0;
    }

I'm not sure about your C programming experience, but if you have little to none, the trick is most likely in providing a valid buffer for the output string and not expecting the function to handle that automatically like you are used to in LabVIEW!
  11. Reading a bit further on the technical mailing list, it seems there was an initial clash of some sorts between a few people who were on the two opposite ends of wanting to get code into the kernel and wanting to maintain a clean kernel source code base. Both points are pretty understandable and both sides resorted to some name calling in the initial phase. Then they sat down together and actually started working through it in a pretty constructive manner. None of the latter seems to have been picked up by the mainstream slashdotted media, and the quick and often snarky comments of more or very often less knowledgeable people concentrated mostly on that initial fallout. It's very understandable that the kernel maintainer didn't want to commit 100k lines of code into the kernel just like that. Apparently the AMD guys didn't expect that to happen anyway and were actually proposing the code more as a first RFC style submission, after having worked a bit too long in the shadows on the huge code base. They didn't however make this clear enough when submitting the code, and the maintainer was a bit quick and short in his answer. In hindsight the way this was handled from both sides wasn't necessarily optimal, but there is no way this code would have been committed if Linus himself still controlled the kernel sources.
Yes, developing for the Linux kernel is a very painful process if you are used to other device driver development such as on Windows. There you develop against a rigidly defined (albeit also frequently changing) kernel device interface. In Windows 3.1 and Windows 95 days you absolutely had to write assembly code to be able to write a device driver (VxD); in Windows 98 and 2000 they introduced the WDM model which replaced both the VxD and NT driver models completely. With Windows Vista the WDF framework was introduced, which is supposed to take away many of the shortcomings that the WDM model had for the ever increasing complexity of interactions between hardware drivers, such as power saving operations, suspend, IO cancellation or pure user mode drivers.
In the Linux kernel, a driver is normally an inherent part of the kernel sources. As such, drivers are very tightly coupled with the specific kernel interfaces, which are not static at all but change as needed, and all the drivers in the kernel source then have to be modified to adhere to these changes. It's obviously a very different development model than what you see in closed source OSes, but there is something to be said for not involving several layers of complex intermediate abstraction that are generally difficult to debug and even more difficult to keep in sync with modifications on both the upper and lower boundary of each layer. It's not a very good model in terms of scaling when adding many different device drivers, as a single change in the kernel interface can easily require changing every single device driver in the source tree too, but that's what the kernel developers decided on. The alternative is to not only strictly specify the interface between the kernel and user space, which the Linux kernel does, although often with a twist in that they seem to prefer to do it differently to BSD and other Unix variants, seemingly for the sake of being different, but also to define a stringent and static device driver API inside the kernel that will never change except maybe between major kernel versions.
And even though that might seem like a good idea for device driver developers, as it would allow closed source drivers that won't need to be recompiled with every kernel upgrade, it's also a model that requires enormous architectural work upfront, before any driver can be written, only to find out by the time the interface has been defined and the necessary infrastructure has been developed that it is already obsolete. Another factor that might play in here is that the only way a device driver is actually easy to maintain in such a development model is by actually open sourcing it, which is of course one of the main motivations of GNU in general and the Linux kernel especially. Unfortunately this leaves users of hardware that the manufacturer doesn't want to document openly, such as most NI hardware too, pretty much out in the cold when they want to use Linux. One can get angry at NI or the Linux kernel guys for each maintaining their position, but that doesn't help, and in the end both sides have the right to deal with this as they wish and currently do. Linux is not going to have a static device driver interface, and while that is a pain for anyone not wanting to donate their device driver source, with lots of their own support and sweat, into the kernel mainline, it's how the Linux world is going to work for as long as there are people who want to work on Linux. It seems that many open source developers favor this model also outside the rather confined albeit extensive kernel development, but they forget that if you write a library you are not operating in a closed environment such as the kernel sources, but in a world where others will actually want to interface to that library, and arbitrarily changing the contract of a library API, simply because it is convenient to do so, should not be something that is considered without the utmost care.
  12. First check that your host provider allows external connections to the database server at all. Almost every webspace provider nowadays lets you install MySQL (usually as MariaDB now) in your webspace environment so you can implement web stores, blogs and whatnot on your hosted website. However, most do not allow connections to that database from outside the virtual website environment, for security reasons. Once you have determined that such external connections are allowed, you have to determine which type of database server is used. Besides MySQL (MariaDB), you can also get hosted database servers based on MS SQL Server or possibly even Oracle for some high throughput commercial services, and that will largely influence the possible selection of your interfacing strategy. The SQL Toolkit you so profoundly excluded would support almost all possible servers. Alternatives are LabSQL, which is based on the same ADO interface that the SQL Toolkit uses, or the ADO Tool. Depending on the server used you might also get lucky with the MySQL native driver from Saphir.
  13. There are certainly problems with storing and retrieving fractional seconds from database timestamps, and they depend on the database, the corresponding database driver and such. We had a lot of trouble with that on MS SQL Server and Oracle in the past, and the only thing that works reliably across various versions of databases is to use stored procedures that take either a custom number format or the fractional and second parts as separate numbers and then use DB specific functions to combine the two values into a native timestamp. Both ODBC and ADO/DAO lack a unified standard for this that all database drivers would support, and traditionally the timestamp only supported full second resolution in ODBC and accordingly in ADO, as well as in most database servers. You can't really blame the Database Toolkit for this, since it is really a pretty thin wrapper around ADO and can't make up for historical shortcomings of the underlying infrastructure.
As to implementing a native T-SQL protocol on top of the TCP/IP VIs, there is at least one library out there; it is definitely not as extensive and well tested as the Saphir toolkit for MySQL, but workable. The problem with the T-SQL protocol, or more precisely TDS, is that it is not fully documented. The open source implementation in C, called FreeTDS, is based in part on a public specification of an older Sybase SQL Server version which MS SQL Server is derived from. That documentation is for version 4.2 of the TDS protocol, but current MS SQL Server versions use version 7.4. MS has added various extensions since the 4.2 version, and current MS SQL Servers refuse to connect with a TDS client that doesn't support at least 7.0. Quite a bit of the 5.0 and higher support in FreeTDS was basically reverse engineered through network logs and as such can be considered working for many cases, but it likely isn't fully protocol compliant. While one could implement a native LabVIEW library for the TDS protocol using the TCP/IP VIs, this would have to be based in large part on the openly available protocol documentation of the TDS 4.2 specification, with extra info from the preliminary protocol description in the FreeTDS documentation, possibly helped by peeks into the FreeTDS source code. But that source code is under the GPL license, so looking too much at that code is not a good idea if you want a non-GPL implementation of said protocol. An additional problem with trying to implement this in pure LabVIEW is the fact that newer protocol versions add various encryption and compression features that are not easily implemented in pure LabVIEW.
  14. I'm not sure you can blame Linux for this. A packed library is an entirely LabVIEW specific feature. It's basically a ZIP archive with an executable header, for reasons I don't know. It definitely is not instantiated through OS loader code, so the executable header looks to be mostly tacked on for version resource purposes; it seems unnecessary. The entire code in packed libraries is LabVIEW specific code, the precompiled executable code for the VIs and optionally the VI diagrams for debugging purposes. The loading and linking of these code resources is done entirely by LabVIEW itself. So what the problem with packed libraries on NI Linux RT systems is, I have no idea.
  15. What do you expect? The DSC system is a collection of many shared libraries that work together. Someone apparently cracked the Windows version that you are using. The real-time variants of those shared libraries have to be different because they can't rely on the same Windows API (Pharlap ETS), or are for completely different CPUs (VxWorks) or operating systems (NI Linux RT). They can't use the license manager as used for the Windows version, but I'm sure NI is smart enough to employ some kind of protection there too. And you will be hard pressed to find a script kiddie who has such hardware available and is willing and able to crack this for you. Asking about this in a public forum is definitely not the smartest move you can make!
  16. While I think that the remark in itself wasn't helpful, I do understand where it comes from. In many open source projects, trying to interface to them from other software is like trying to continuously keep a moving target in focus. Granted, maintaining backwards compatibility can be a painful process and there is something to be said for starting with a clean slate at some point. And of course the open source programmer is often dedicating his own free time to the cause, so it is really his decision whether to spend it on keeping the software compatible or on developing new exciting features, changing whatever needs to change along the way without considering the possible consequences. Still, I think a bit more discipline usually wouldn't hurt. It's sometimes the difference between a cool but, for many applications, pretty unusable solution and a really helpful and useful piece of software. Another thing is changes made on purpose for the sake of disallowing use from certain types of clients. I have a pretty ambivalent feeling about that. It seldom prevents what they try to block, but it causes lots of mischief for the users. The IMAQdx link you provided refers to a forward compatibility issue. That is something that is very difficult to provide. There are techniques to help with that somewhat, but they more often than not tend to take up more code and complexity than the entire rest of the library, so in short they are basically never worth the effort. Working in regulated industries might be an exception here.
  17. Well, Python 2.3 should indeed be ok, although I never tested with numpy and similar in that version. But that is so old, it's like requiring people to work with Linux 2.2 kernels or Windows 2000. Right! You can spend many man hours to get LabPython working correctly with current versions, and quite a few more man hours to get PostLVUserEvent() working as well (it's an asynchronous operation and, while no rocket science really, involved enough that I have to wrap my mind around it every time again when trying to implement it somewhere). Or you implement a client-server RPC scheme in LabVIEW and Python and just pass around the information that way. The second is a lot easier, easily expandable by other people with absolutely no C knowledge, and much easier to debug too.
  18. Of course. I never said otherwise. But we were not really discussing LabPython at this point, since it has quite a few issues that would require some serious investment into the code. The solutions we were discussing were more along the lines of running Python in its own process and communicating between Python and LabVIEW through some means of interapplication communication like nanomsg, ZeroMQ or a custom made TCP/IP or UDP server-client communication scheme. Refer to this post for a list of problems that I'm aware of in the current version of LabPython.
  19. I have not moved anything to GitHub and am very unlikely to do so. Besides the fact that I find git not very easy to use, I have come across way too many projects that were taken from somewhere, put on GitHub and then abandoned. The 4.0.0.4 version of LabPython is on the old CVS repository of the LabPython project on SourceForge, but I did add the LabPython project with some initial improvements to the shared library to the newer SVN repository of the OpenG Toolkit project on SourceForge. That is, as far as I'm concerned, the current canonical version of LabPython, although no new release package has been created, for a few reasons:
- The changes I did to the C code are only a few minimal improvements to make LabPython compile with the Python 2.7 headers. Only very brief testing has been done with that. More changes to the C code and a lot more testing would be needed to make LabPython compatible with Python 3.x.
- More changes need to be made to the code to allow it to properly work in a 64-bit environment. Currently the pointer to the LabPython private management structure, which also maintains the interpreter state, is directly passed to LabVIEW and then treated as a typed log file refnum. LabVIEW refnums however are 32-bit integers, so a 64-bit pointer will not fit into that. The quick and dirty fix is to change the refnum to a 64-bit integer and configure all CLNs to pass it as a pointer-sized variable to the shared library (see the sketch below). But that will only work from LabVIEW 2009 onwards, which probably isn't a big issue anymore. The bigger issue is that a simple integer will not prevent a newbie user from wiring just about anything to the control and causing the shared library to crash hard when it tries to access the invalid pointer.
- There is currently a serious problem when trying to use non-thread-safe Python modules like numpy and similar from within LabPython. These modules assume that their functions are always executed from within the same OS thread and context. LabPython doesn't enforce that and LabVIEW will happily call it from multiple threads if possible, which makes those modules simply fail to work. LabPython tries to use the interpreter lock that the Python API does provide, but either that is not enough or they changed something between Python 2.3/2.4 and later versions in this respect that makes LabPython not use this lock correctly. Getting this part debugged will be a major investment. Documentation about the interpreter lock and thread safety of the Python interpreter is scarce and inconsistent.
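For illustration, a minimal C sketch of that "quick and dirty" 64-bit fix: the pointer to the private session structure is handed to LabVIEW as a 64-bit integer and the Call Library Node parameter is configured as a pointer-sized integer. The names are invented for this sketch and are not the actual LabPython exports:

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct {
        /* ... interpreter state, locks, etc. ... */
        int dummy;
    } ScriptSession;

    uint64_t Session_Create(void)
    {
        ScriptSession *s = calloc(1, sizeof(ScriptSession));
        return (uint64_t)(uintptr_t)s;   /* fits both 32- and 64-bit pointers */
    }

    int Session_Execute(uint64_t session /*, script text, ... */)
    {
        ScriptSession *s = (ScriptSession *)(uintptr_t)session;
        if (!s)
            return -1;  /* still no protection against a wired-up garbage value */
        /* ... run the script ... */
        return 0;
    }

    void Session_Destroy(uint64_t session)
    {
        free((ScriptSession *)(uintptr_t)session);
    }

This is exactly why a plain integer is a weak substitute for a real refnum: LabVIEW cannot type-check what gets wired to it.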
  20. I can only echo ned's remarks. Calling any of the LabVIEW manager functions from a different process than LabVIEW itself is doomed to fail. If you want to call this function through the Python ctypes interface, the corresponding Python interpreter has to run inside the LabVIEW process, just as LabPython attempts to do. Trying to do that from a separate Python interpreter is doomed without proper interprocess communication like nanomsg, ZeroMQ or your own TCP/IP or UDP daemon. This is no fault of LabVIEW or Python, but simply proper process separation through protected mode memory and similar techniques, fully in effect since Windows NT.
  21. I'm not sure I understand you well here. If the library offers to install semaphore callbacks, that is of course preferable from a performance viewpoint, but you can still choose to protect it on the calling side with a semaphore instead (and you could even use implicit serialization by packing all CLNs into the same VI with an extra function selector and setting the VI to be non-reentrant), rather than wrapping each CLN between an obtain semaphore and a release semaphore. A library offering semaphore callback installation is pretty likely to only use them around critical code sections, so yes, there might be many function calls that don't invoke a semaphore lock at all, as it is not needed there. Even when it is needed, the library may choose to lock only around critical accesses, freeing the semaphore during (relatively) lengthy calculations so that other parallel calls are not locked out, which can result in quite a performance gain when called from a true multitasking system like LabVIEW.
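The caller-side alternative mentioned above boils down to something like this thin C wrapper, where one mutex serializes every entry into the non-thread-safe library; the library function name is hypothetical:

    #include <pthread.h>

    extern int unsafe_library_call(int arg);   /* hypothetical library function */

    /* One global lock for the whole library: no two calls can overlap, even
       when LabVIEW invokes the wrapper from different threads. */
    static pthread_mutex_t lib_lock = PTHREAD_MUTEX_INITIALIZER;

    int safe_library_call(int arg)
    {
        int result;
        pthread_mutex_lock(&lib_lock);
        result = unsafe_library_call(arg);
        pthread_mutex_unlock(&lib_lock);
        return result;
    }

The obvious downside compared to library-provided callbacks is that the lock is held for the entire call, including any lengthy calculation that would not actually need protection.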
  22. As has already been pointed out, there are a number of possible reasons why a library could be non-thread-safe, the most common being the use of global variables in the library. One solution here is to always call the library from the same thread. Since a thread can't magically split into two threads, that is a safe method to call such a library. Theoretically a library developer could categorize each function by whether it makes use of any global and sort the library APIs into safe functions that don't access any global state and non-safe functions that need to be called in a protected way. Another way is to use a semaphore. That can be done explicitly by the caller (what drjdpowell describes) or in the library itself, but the latter has the potential to lock up if the library uses multiple global resources that are each protected by their own semaphore. OpenSSL, which Shaun probably refers to, requires the caller to install callback functions that provide the semaphore functionality and which OpenSSL then uses to protect access to its internal global variables. Without those callbacks installed, OpenSSL is not threadsafe and dies catastrophically rather sooner than later when called from LabVIEW in multithreaded mode (see the sketch below). An entirely different issue is thread local storage. That is memory that the OS reserves and associates with every thread. When you call a library that uses TLS from a multithreaded environment, you have to make sure that the current thread has the library specific TLS slots initialized to the correct values. The OpenGL library is such a library, and if you check out the LabVIEW examples you will see that each C function wrapper on entry copies the values from the current refnum to the TLS and on exit restores those values from TLS back into the refnum. In a way it's another form of global storage, but it requires a completely different approach. For all of these issues, however, guaranteeing that all library functions are always called from the same thread solves the problem too.
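For reference, this is roughly what installing those callbacks looks like with the pre-1.1.0 OpenSSL API (CRYPTO_num_locks/CRYPTO_set_locking_callback); OpenSSL 1.1.0 and later removed this mechanism and handle locking internally:

    #include <pthread.h>
    #include <stdlib.h>
    #include <openssl/crypto.h>

    static pthread_mutex_t *ssl_locks;

    /* OpenSSL calls this around every access to one of its global resources,
       identified by the lock number n. */
    static void ssl_locking_cb(int mode, int n, const char *file, int line)
    {
        (void)file; (void)line;
        if (mode & CRYPTO_LOCK)
            pthread_mutex_lock(&ssl_locks[n]);
        else
            pthread_mutex_unlock(&ssl_locks[n]);
    }

    void install_openssl_locks(void)
    {
        int i, count = CRYPTO_num_locks();
        ssl_locks = malloc(count * sizeof(pthread_mutex_t));
        for (i = 0; i < count; i++)
            pthread_mutex_init(&ssl_locks[i], NULL);
        CRYPTO_set_locking_callback(ssl_locking_cb);
    }

A LabVIEW wrapper library around that OpenSSL generation had to call something like install_openssl_locks() once during initialization, before any multithreaded use.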
  23. Well, Lua for LabVIEW would give you a lot of the things you hope for but it is not free. So that is the main reason I didn't really push it as a viable option.
  24. While finding the root cause is of course always a good thing, networking is definitely not something that you can simply rely on to always work uninterrupted. Any stable networking library will have to implement some kind of retry scheme at some point. HTTP did this traditionally by reopening a connection for every new request: wasteful, but very stable! Newer HTTP communication supports a keep-alive feature, but with the additional provision to close the connection on any error anyway, and on the client side to reconnect again on every possible error, including when the server closes the connection forcefully despite being asked to please keep it alive. Most networks, and especially TCP/IP, were never designed to guarantee uninterrupted connections. What TCP guarantees is a clear success or failure on any packet transmission and proper ordering of successful packets in the same order as they were sent, but nothing more. UDP on the other hand doesn't even guarantee any of that.
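A minimal POSIX sketch of such a retry scheme, purely as an illustration of the idea (treat any error as "connection is gone", close, back off, reconnect); host and port are placeholders:

    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    static int connect_once(const char *host, const char *port)
    {
        struct addrinfo hints, *res, *p;
        int fd = -1;
        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, port, &hints, &res) != 0)
            return -1;
        for (p = res; p; p = p->ai_next) {
            fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
            if (fd >= 0 && connect(fd, p->ai_addr, p->ai_addrlen) == 0)
                break;
            if (fd >= 0) { close(fd); fd = -1; }
        }
        freeaddrinfo(res);
        return fd;
    }

    /* Retry the connection a few times with a simple back-off; the caller
       still has to cope with a request that was lost mid-flight. */
    int connect_with_retry(const char *host, const char *port, int attempts)
    {
        int i, fd;
        for (i = 0; i < attempts; i++) {
            fd = connect_once(host, port);
            if (fd >= 0)
                return fd;
            sleep(1);
        }
        return -1;
    }
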
  25. It's no magic really, although I haven't used it myself yet. I make use of other features related to so-called UserDataRefnums, which are, although not really documented, a bit more powerful and flexible than the (IMHO misnamed) "DLLs Callbacks". Basically each Call Library Node instance has its own copy of an InstanceDataPointer. This is simply a pointer-sized variable that is associated with a specific Call Library Node. You have the three "callback functions" Reserve(), Unreserve() and Abort(), each with the same prototype: MgErr (*proc)(InstanceDataPtr *instanceState); So each of them gets a reference to the Call Library Node instance specific pointer-sized variable location. You could store any 32-bit information directly in there (it's of course 64 bits on 64-bit LabVIEW, but you do not want to store more than 32 bits in there for compatibility reasons, for the case where you might need to support 32-bit LabVIEW and OSes such as Pharlap, VxWorks and NI Linux ARM targets), but more likely you will allocate a memory block in Reserve() and return the pointer to that memory block in this parameter. In addition you should make sure the memory is initialized in a meaningful way for your other functions to work properly. The Unreserve() callback is called before LabVIEW wants to unload the VI containing the CLN, in order to deallocate anything that might have been allocated or opened by the other functions in the InstanceDataPointer, including the InstanceDataPointer itself. Abort() obviously will be called by LabVIEW when the user aborts the VI hierarchy.
Now these three functions are not very helpful on their own, but where it gets really useful is when you add the special parameter "InstanceDataPointer" to the parameter list in the Call Library Node configuration. This parameter will not be visible on the diagram for that Call Library Node. Instead LabVIEW will pass the same InstanceDataPointer to the library function as what is passed to the three callback functions. Your function can then store extra information during its execution in that InstanceDataPointer that Abort() can use to properly abort any operation that the function itself might have started in the background, including closing files, aborting any asynchronous operation it started, etc. Depending on the complexity, you can probably even get away with not implementing the Reserve() function specifically but instead have each function invocation check if the InstanceDataPointer is NULL and allocate the necessary resources at that point. It may be a performance optimization to not allocate an InstanceDataPointer on load of the VI but only on first execution, so if someone only loads the code without ever starting it, you won't allocate it unnecessarily. If you ever had the "joy" of using Windows API functions with asynchronous operation, you will recognize this scheme from the LPOVERLAPPED data pointer those functions use.
It remains to stress the fact that every Call Library Node instance has its own private InstanceDataPointer. So if you have 10 Call Library Nodes on your diagram all calling the same library function, you still end up with at least 10 InstanceDataPointers. I say "at least" here since this is multiplied by the number of clones that exist for a particular VI when it is reentrant. As to providing ready-made samples with code, that is a crux with this kind of advanced functionality.
As it involves asynchronous programming it really is a rather advanced topic. Anyone who understands the explanation above will pretty readily be able to apply it to their specific application, and others who don't won't be helped much by an example that doesn't match their specific use case almost perfectly. Even I regularly get lost in the pointer nirvanas where an asynchronous task is accessing the wrong pointer somewhere that the debugger has a hard time reaching into. That said, a minimal skeleton of the three callbacks follows below.
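This sketch assumes LabVIEW's extcode.h declarations (MgErr, InstanceDataPtr and the standard error constants); the data structure and function names are illustrative, not from any shipped library:

    #include <stdlib.h>
    #include "extcode.h"

    typedef struct {
        volatile int aborted;   /* set by AbortCB, polled by the worker function */
        /* ... handles for files, sockets, asynchronous operations, ... */
    } InstanceData;

    /* Called by LabVIEW when the VI containing the CLN is reserved for running. */
    MgErr ReserveCB(InstanceDataPtr *instanceState)
    {
        InstanceData *data = (InstanceData *)calloc(1, sizeof(InstanceData));
        if (!data)
            return mFullErr;
        *instanceState = data;
        return noErr;
    }

    /* Called before LabVIEW unloads the VI: release everything we allocated. */
    MgErr UnreserveCB(InstanceDataPtr *instanceState)
    {
        free(*instanceState);
        *instanceState = NULL;
        return noErr;
    }

    /* Called when the user aborts the VI hierarchy. */
    MgErr AbortCB(InstanceDataPtr *instanceState)
    {
        InstanceData *data = (InstanceData *)*instanceState;
        if (data)
            data->aborted = 1;   /* tell the running call to bail out */
        return noErr;
    }

    /* The CLN is configured with the extra "InstanceDataPointer" parameter that
       does not appear on the diagram; LabVIEW passes the same pointer here as
       to the three callbacks above. */
    MgErr LengthyOperation(int32 iterations, InstanceDataPtr *instanceState)
    {
        InstanceData *data = (InstanceData *)*instanceState;
        int32 i;
        for (i = 0; i < iterations; i++)
        {
            if (data && data->aborted)
                return cancelError;   /* user aborted the VI hierarchy */
            /* ... do one chunk of the work ... */
        }
        return noErr;
    }
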