Everything posted by Rolf Kalbermatter

  1. I believe LabVIEW used to do that too, when it was including the Great Circle/SmartHeap memory manager. Or were there two separate memory manager backends that LabVIEW could use? I'm not sure anymore and the details are fading, but fact is that LabVIEW had some intermediate memory manager layer on top of the OS memory manager that allowed it not only to debug memory usage on a more detailed level, but also to consolidate many individual memory requests by LabVIEW into much fewer and bigger ones for the OS. Great Circle was absorbed at some time and is of little significance nowadays other than for special, badly performing applications that get patched up by using Great Circle. SmartHeap is apparently still on sale at http://www.microquill.com/. So it seems LabVIEW had at some point the option to replace the standard C runtime memory manager functions with different backends, and was also shipped with them at some point. This supposedly helped to debug memory issues, may still be used internally, and could also be an option to reduce the handle hunger of LabVIEW, so that LabVIEW is, even by Mark Russinovich's standards, a well-behaved application.
  2. Well, it's not invalid, but it doesn't really do what you want directly. In order for the method to be executed on a VI you have to open a VI reference to it, so the entire VI hierarchy is already loaded, and iterating through the list to open a VI reference to each item doesn't really do anything useful anymore since the VI is already in memory. What you would have to do is, at build time, determine your hierarchy, sort that hierarchy in ascending order from lowest level VI to higher VI, possibly suppressing some of the lowest level VIs altogether, and then save that information to a configuration file that your executable reads so it can load the VIs from the bottom up. After each Open VI Reference you get more VIs loaded into memory and can use the percentage of already loaded VIs relative to the total number of VIs for your progress bar. But executing your Get VI Hierarchy method node in your executable will cause the progress bar to stall while LabVIEW opens the top level VI and then go VERY quickly through the Open VI References, since it doesn't really need to do much anymore but find the already loaded VI somewhere in memory.
  3. My natural instinct here would say: if you try to implement a singleton, which you apparently do, why not replace the queue with a Functional Global Variable VI that exposes Obtain and Release methods? That also allows you to include any internal refcounting that you may require. The Obtain case creates your .Net object when the refcount is 0 and always increments the refcount, returning that .Net refnum, and the Release case decrements the refcount and closes the .Net refnum when it reaches 0. Since everything from testing the refcount to acting on it accordingly happens inside the FGV, the problem of potential race conditions doesn't even exist. I'm sure this could be done with the singleton LVOOP pattern too, but functional global variables are ideal in LabVIEW to implement singletons. No need for semaphores or whatever else to avoid potential race conditions. A SEQ may seem great to implement storage for a singleton object without having to create a separate VI, but if you need any kind of control over this object's lifetime, an FGV is the preferred choice since it allows implementing the lifetime management inside the FGV without the danger of creating race conditions.
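For illustration, a minimal C-style sketch of the Obtain/Release refcount logic such an FGV would hold in its shift registers (the names and the create/destroy callbacks are hypothetical, and plain C would need an explicit lock for thread safety, whereas in LabVIEW the non-reentrant FGV boundary already serializes the two cases):
[CODE]
/* Hypothetical sketch of the FGV's Obtain/Release refcount logic. */
#include <stddef.h>

static void *g_resource = NULL;   /* stands in for the stored .Net refnum */
static int   g_refcount = 0;      /* lives in the FGV's shift register    */

/* Obtain: create the resource on first use, always bump the count. */
void *SingletonObtain(void *(*create)(void))
{
    if (g_refcount == 0)
        g_resource = create();
    g_refcount++;
    return g_resource;
}

/* Release: drop the count, destroy the resource when it reaches zero. */
void SingletonRelease(void (*destroy)(void *))
{
    if (g_refcount > 0 && --g_refcount == 0)
    {
        destroy(g_resource);
        g_resource = NULL;
    }
}
[/CODE]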
  4. The most straightforward way to use them is to write C code, and for that you need a C compiler such as MS Visual Studio or LabWindows/CVI. In a time long, long ago this was used to create so-called CINs, but they are part of a long gone era, and the way nowadays is to create DLLs (shared libraries on non-Windows platforms) and incorporate those shared libraries/DLLs into LabVIEW using the Call Library Node. In order for the C compiler to be able to find the declarations of those functions you have to #include "extcode.h" and possibly other include files from the <LabVIEW>/cintools directory. You also link the resulting code with labviewv.lib from the same directory so that the resulting DLL can be linked and later knows how to link to those manager functions in LabVIEW at runtime. Now if you need to call only very few of those manager functions, there is a little trick: enter "LabVIEW" as the library name in the Call Library Node (without quotes, and case does matter!!) and then configure the Call Library Node to match the definition of the manager function, and LabVIEW will call into itself and execute that function. This is however an advanced feature. You do need to know how to configure the Call Library Node correctly, you need to be familiar with how LabVIEW stores its native data in memory, and you should also be very savvy about pointers and such in general. Also it is not very maintenance friendly, since you have to fiddle in the LabVIEW diagram with C intricacies, and every time you need to change something you have to go in there again and try to find out what you did last time. Editing a C file and creating a new DLL/shared library is in the long term so much easier, and once you end up calling more than a few C functions through the Call Library Node you really want to place the complicated C details in a C module and load that into LabVIEW through the Call Library Node, instead of doing complicated push-ups and even more complicated fitness exercises on the LabVIEW diagram. What your problem is, I'm not sure, but if it is that you do not get the entire string because there seem to be NULL bytes right at the beginning, then you probably got UTF-16 (Unicode) strings. LabVIEW does NOT use Unicode strings internally but always uses MBCS encoding, and all LabVIEW manager string functions consequently operate on MBCS strings, so they can't deal with UTF-16 strings at all. In that case you'd better use Windows API functions that can work with WCHAR strings, or alternatively convert the Unicode string into MBCS early on using the Windows API function WideCharToMultiByte().
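As a rough idea only, here is a sketch of what such a conversion helper in a DLL could look like, to be called through the Call Library Node with a caller-allocated output buffer (the function name and parameter layout are made up for this example, not an existing API):
[CODE]
/* Hypothetical DLL export: convert a UTF-16 string to the local ANSI/MBCS
   codepage that LabVIEW strings use. Returns the number of bytes written
   into buf (excluding the terminating NUL), or 0 on failure. */
#include <windows.h>

__declspec(dllexport) int WideToMBCS(const wchar_t *wide, char *buf, int bufLen)
{
    int len;
    if (!wide || !buf || bufLen <= 0)
        return 0;
    /* CP_ACP = the ANSI codepage, cchWideChar = -1 means the input is NUL terminated */
    len = WideCharToMultiByte(CP_ACP, 0, wide, -1, buf, bufLen, NULL, NULL);
    return len > 0 ? len - 1 : 0;   /* don't count the terminating NUL */
}
[/CODE]
In the Call Library Node the first parameter would be passed as a pointer to the UTF-16 data, the second as a C string pointer backed by a sufficiently preallocated buffer, and the third as that buffer's length.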
  5. I didn't mean to suggest calling this function with the Call Library Node. While that would work for this one (and IID_PPV_ARGS() is just a casting macro to make sure the parameter passes to the function without compiler warnings), the resulting ptbl is a pointer to a virtual table dispatch structure, and that is a bit nasty to reference from a Call Library Node (see the sketch after this post for what such virtual table calls look like in plain C). However there is no way to implement anything useful in terms of taskbar app thumbnails and friends without invoking at least 3 or so methods from that virtual table. The only solution acceptable to me would be to write a little C code that is compiled into a DLL and then called from LabVIEW. This is also mandated in my opinion by the fact that any feedback from the taskbar to the application is done through the Windows message queue. Hooking that, while possible with the LabVIEW Windows message queue library floating around, is a painful process, and it is much more cleanly done in the same C code DLL. As to creating your mentioned C header parser: believe me, you don't want to go there. I implemented such a beast for a project where I had to adapt to register definitions for a CAN device based on C type declarations in a database-like structure. It "only" had to be able to identify the basic C types, structures and arrays and the typedefs made from them, and that was already a major task, and one where, even though it worked, I never felt particularly confident about making changes without breaking something else in it. Without an extensive unit test framework such a thing is simply unmaintainable in any form. And while it theoretically could also parse function declarations, that part never got tested at all, since it was not a requirement for the task at hand. And I'm sure there are still many C header nasties that program can't parse properly. The C header parser used in the Import Library Wizard is most likely a bit further along than the one I had written, but it also has its limits, and very specifically it only maintains the information it requires to create the Call Library Node configuration for a particular function. This means that enum values are simply discarded, as the only information this library needs is the actual size of the enum, not its detailed definition. Same here. Add to this the fact that .Net automatically limits you to Windows only (and no, Mono is not a solution, as LabVIEW still lacks the .Net support on non-Windows platforms to even theoretically be able to call Mono; in practice it would almost surely fail even if it had such support). This may seem like a moot point in this case with Windows shell integration, but I'm regularly working on other things where multi-platform support is not only an option but sometimes a requirement. And once you start to go down the C DLL path for such things, you really don't feel like learning yet another programming environment like .Net that is not only heavyweight and clunky but also limits you to a specific platform. I'm also sure that the frequent versioning of .Net is not just a coincidence but a strategy to make keeping up with it a true challenge, both for Mono and for application development environments competing with the MS Visual development offerings. By avoiding it whenever possible, I'm not limited by this strategy. From the little exposure to .Net so far I have to say it is very amazing how much they copied from Java.
Many libraries use the exact same naming, only adapted to the .Net naming convention of function names starting with an uppercase letter instead of the lowercase letter Java uses. It almost feels like someone took the Java interfaces and put them through a tool to convert type names and function names to a different naming convention, just for the sake of not being blamed for taking them verbatim.
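To illustrate the earlier remark about ptbl being a pointer to a virtual table dispatch structure, this is roughly what a COM method call looks like in plain C (a sketch only; it assumes the ITaskbarList3 pointer was already obtained via CoCreateInstance as in the code further below):
[CODE]
/* Sketch: calling COM methods through the virtual table in plain C.
   Every method is reached via lpVtbl, and the interface pointer is
   passed explicitly as the first argument (C++ hides both details). */
#include <windows.h>
#include <shobjidl.h>

HRESULT InitTaskbarList(ITaskbarList3 *ptbl)
{
    HRESULT hr = ptbl->lpVtbl->HrInit(ptbl);   /* ptbl->HrInit() in C++ */
    if (FAILED(hr))
        ptbl->lpVtbl->Release(ptbl);           /* ptbl->Release() in C++ */
    return hr;
}
[/CODE]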
  6. My first hunch, before hearing about the password-protected DB VIs, was also the DB task. I know of some problems with result sets in the NI DB Toolkit. But as Shaun already said, I can't imagine it being the NI DB Toolkit, since that one is NOT password protected. And as far as I remember, the result set memory leak exists in the IDE too.
  7. [sarcasm] I recommend assembler for that! It has no IDE-enforced edit-time delays, since it only needs the simplest text editor you can envision. Never mind the rest of the experience. [/sarcasm]
  8. I came across another cause of slowdown a few years back. I inherited a machine control application that consisted of one main VI of over 10MB in size and a whole bunch of subVIs doing stupidly little. The main VI consisted of several stacked sequence structures with up to 100 frames, and almost every frame contained a case structure that was only enabled based on a global string that in fact contained the next state to execute. Basically it was a state machine, with the sequence frames each consisting of a grouping of states, and to get to one particular state LabVIEW had to run through every sequence frame, most of them doing nothing since the case structure inside did not react to the current state. Talk about a state machine turned inside out and upside down and then again some. Selecting a wire, node or subVI in that diagram was slooooooooooow, as each of these actions caused a delay of several seconds. Moving a node was equally slow, as each move caused a similar delay. It was a total pain to do anything on that application, and it needed work as it had several bugs. I invested several days restructuring the architecture into a real state machine design, placing several logical sub state machines into their own subVIs, and removing about 99% of the several thousand globals, and once I had done that the whole editing got snappy again. The main VI had been reduced to about 1MB and the different sub state machine VIs together took maybe another 4MB of disk space. Replacing all the globals with a few shift registers had slashed the disk footprint of the VIs almost in half. I did make the driver VIs a bit smarter by moving some of the logic into them instead of copying the same operation repeatedly in the main VI, but getting rid of the enormous number of sequence frames, as well as the many globals that were mostly only a workaround for the missing shift registers, both got the disk size down a lot and made editing a useful operation again instead of simply an annoyance. Surprisingly enough, the runtime performance of the original application wasn't really bad, but the redesigned system hardly ever got over 5% CPU usage even when it was busy controlling all the serial devices and motion systems. How the original programmer was ever able to finish his system with such an architecture and the horrible edit delays for almost every single mouse move, I can't imagine. He for sure wasn't a LabVIEW programmer, and I later learned that he had only done Visual Basic applications before.
  9. From what I read on MSDN it is fairly easy to do that from C/C++. You just want to use the ITaskbarList3 interface in shobjidl.h. A single call gets you the interface:
[CODE]
// Create an instance of ITaskbarList3
ITaskbarList3 *ptbl;
HRESULT hr = CoCreateInstance(CLSID_TaskbarList, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&ptbl));
[/CODE]
And then you can call the relevant methods of the ITaskbarList3 COM object. This even works from pure C.
[CODE]
if (SUCCEEDED(hr))
{
    // Declare the image list that contains the button images.
    hr = ptbl->ThumbBarSetImageList(hwnd, himl);
    if (SUCCEEDED(hr))
    {
        // Attach the toolbar to the thumbnail
        hr = ptbl->ThumbBarAddButtons(hwnd, ARRAYSIZE(thbButtons), thbButtons);
    }
    ptbl->Release();
}
return hr;
[/CODE]
However, handling of events happens through Windows messages, so you have to hook the Windows message queue in LabVIEW, and this is where things always get a bit hairy with Windows shell integration in LabVIEW. Same about ShellNotify and other such things. Same here! A fun project to do, but with very little real-world benefit for the type of application we generally write in LabVIEW.
  10. Let's see. I'm by no means a Java expert and am not sure I ever will be. My interest in Java only started when I wanted to do some stuff under Android, mostly for fun so far. Incidentally, looking at the Java scene I think the flair of the days when Sun was wielding the scepter has mostly gone. Oracle seems to be seen by many as an unfriendly king and by some even as a hostile tyrant. Google sort of took over a bit but tries to keep a low profile in terms of Java. They just use the idea but don't really promote it at all, probably also because of the lawsuits. I think Oracle has done the single most effective move to kill Java as an idea with their recent actions. But let's get back to your question: there is the java.util.Queue interface with its many implementations like AbstractQueue, ArrayBlockingQueue, ConcurrentLinkedQueue, DelayQueue, LinkedBlockingQueue, LinkedList, PriorityBlockingQueue, PriorityQueue and SynchronousQueue. Then we have the built-in synchronized (obj) { } keyword that can be used on any object to create blocks around code that needs to be protected. Only a single thread can be inside a synchronized block for a specific object at any time. And last but not least there are the notify() and wait() methods that the Java Object class implements, from which every other Java class is derived directly or through other classes. These two methods are a bit special since they actually work together with the synchronized (obj) keyword. I haven't fully mastered this part yet, but I had trouble using those methods unless the code block in which they are called is protected with synchronized (obj); the documentation indeed requires that the calling thread owns the object's monitor when calling wait() or notify(). You probably have a point there. All I can say to that is that it seems cleaner to me to have a specific object interface manage its thread needs on its own, by using the appropriate thread policy such as a thread pool with whatever limits seem useful, and let the proven OS implementation handle the distribution of the threads over whatever cores are available. It probably won't speed up things at all to do so, but I just like the idea of being in control if I feel the need is there. On the other hand, I have nothing to complain about in how LabVIEW handles concurrent parallel execution so far. It seems just as capable of handling multiple code parts that can operate in parallel with the available CPU power as a Java program that uses several threads. So in the end there may be nothing really left but the feeling of having more control over things, which is also sometimes a reason for me to implement certain things in C and incorporate them into LabVIEW as a shared library. And because of that you respond to it!? I probably will try to take a look at your project. Always interesting to see new things, even though I'm not sure I will grok the whole picture.
  11. It's not the Open VI Reference node in itself that blocks the UI thread but the load operation of a VI hierarchy. This piece of code is very delicate as it needs to update large amounts of global tables that keep track of all loaded VIs, their relationship to each other, linking information and whatever else I couldn't even dream up. So yes, I'm sure the Open VI Reference and the application load operation use the same LoadVIHierarchy() function and that this function is the UI blocking culprit. And I'm afraid calling a user VI as a callback during this operation could be fairly costly. LabVIEW doesn't lock the UI thread for fun during this operation, and invoking VIs may well have an impact on the resources that this locking tries to protect, so the lock might have to be released for the duration of the VI invoke. This could add up to quite a bit more than a few 100 µs for each call. Anyhow, even if you get this callback that tells you about the operation, be it a user event or a specific VI invoke operation, how do you propose to give any feedback to the user if the UI thread is blocked?
  12. Why would you use string arrays, if the example you gave in your first post only contains numbers? Also I think your data structure design is highly flawed, if the data in the first post is all you want to sort. What you should look at is using a 1D array of integers for your first array, and a 1D array of clusters containing an integer and a float for your second array and the result.
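In C terms (just a sketch to make the suggested layout concrete, with made-up names), the cluster of an integer and a float maps to a plain struct, and sorting the 1D array of clusters by the integer key is then an ordinary sort:
[CODE]
/* Hypothetical illustration of the suggested data layout and sort. */
#include <stdio.h>
#include <stdlib.h>

typedef struct { int key; double value; } Pair;  /* "cluster" of integer + float */

static int CompareByKey(const void *a, const void *b)
{
    int ka = ((const Pair *)a)->key, kb = ((const Pair *)b)->key;
    return (ka > kb) - (ka < kb);
}

int main(void)
{
    Pair data[] = { { 3, 1.5 }, { 1, 2.5 }, { 2, 0.5 } };
    size_t i, n = sizeof(data) / sizeof(data[0]);

    qsort(data, n, sizeof(Pair), CompareByKey);   /* sort the clusters by the integer */
    for (i = 0; i < n; i++)
        printf("%d -> %g\n", data[i].key, data[i].value);
    return 0;
}
[/CODE]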
  13. Shaun, the problem with this is that events are processed in the UI thread, and the Open VI Reference blocks the UI thread very effectively. The reason for that is that it needs to update various global tables frequently during the load and can't have any other part of LabVIEW potentially messing with these tables while it is busy. And unlocking the UI thread for the duration of the event is a bad idea, as it would inevitably add a significant overhead to the load operation that would without doubt be noticeable. What I do is place an animated GIF on the splash screen. It runs even when the VI is not executing, but a user of the application doesn't notice that. However it is only a partly satisfactory solution, since the GIF animates properly in the IDE during the Open VI Reference call but still seems to stop momentarily in a built executable. I solve that by making the main VI itself load various components dynamically, so that the load operation is really divided into several Open VI Reference operations. The splash screen simply has a string control that the main can send status messages to, and it waits until the main decides to open its own front panel, indicating that it is ready to take over from there. I don't think that suggestion is well thought out. A separate SSH protocol library is of very little use, as it only implements the SSH protocol itself and wouldn't make the underlying SSL capabilities available for other protocols. I would rather see SSL support for TCP sockets that can be configured transparently. I tried to start with something like this in the network library that I posted quite some time ago here on lavag. The idea is that it is possible to set up SSL parameters and register them for a network socket, so it uses them automatically. This is because SSL really is meant to be transparent, in terms of protocol handling, to higher-level protocols like HTTPS, which is really just the HTTP protocol transferred through an SSL-secured socket.
  14. I think you should not only terminate on a Connection Closed by Peer error but on just about any possible error, except maybe the timeout error, although that is debatable too. And yes, you would want to filter the Connection Closed by Peer error after the while loop, since that is a valid way to terminate the connection and not an error in terms of the user of your HTTP Get function. But error 56 is definitely a timeout error; Connection Closed by Peer is error code 66.
  15. Good point that dataflow inherently solves some cases that futures might be used for in conventional languages.
  16. Well, you may want it at some point, but not necessarily in the same application and just as a copy of files on the hard disk. But you are right, whatever you want, it is usually never as trivial as just shooting off the request and forgetting it. You usually want, for instance, to be informed if one of the downloads didn't succeed, and you also want a way to not have the library wait forever on data that never arrives. Java threading can be a bit more powerful than just blindly spawning threads. If you make consistent use of the java.util.concurrent package, you can create very powerful systems that employ various forms of multithreading with very little programming effort. At the core are so-called executors that can have various characteristics such as single threads, bounded and unbounded thread pools and even scheduled variants of them. Thread pools especially are very handy, since creation and destruction of threads is a very expensive operation, specifically under Windows. By using thread pools you only pay that penalty once and can still dynamically assign new "tasks" to those threads. You are fully right here, but the actual thread pool configuration in LabVIEW is very static. There is a VI in vi.lib that you can use to configure the number of threads for each execution system, but this used to require a LabVIEW restart in order to make those changes effective. Not sure if that is still the case in the latest LabVIEW versions. LabVIEW itself still implements some sort of cooperative multithreading on top of the OS multithreading support, just as it did even before LabVIEW got OS thread support in 5.0 or 5.1. So I would guess that the situation is not necessarily as bad, since you can run multiple VIs in the same execution system and LabVIEW will distribute the available threads over the actual LabVIEW clumps, as they call the indivisible code sequences they identify and schedule in their homebrew cooperative multitasking system. You do have to be careful however about blocking functions that cause a switch to the UI thread, as that could completely defeat the purpose of any architecture trying to implement parallel code execution. Not sure how the new Asynchronous Call fits into this. I would hope they retained all the advantages of the synchronous Call By Reference but added some way of actually executing the according call in some parallel thread-like system, much like the Run Method does. But I haven't looked into that yet. Edit: I just read up on it a bit, and the Asynchronous Call By Reference looks very much like a Future in itself. No need to employ LVOOP for it, jupiieee! I never intended to say it would be useless, just not as generic and powerful as the Java version. Lol, I so much missed the hooded smiley here that is sometimes available on other boards. Should have scrolled through the list instead of assuming it isn't there. I knew I was poking some people's cookies with this, and that is part of the reason I even put it there.
  17. According to the manual it uses an RS-485 interface and supports the CompoWay/F, SYSWAY, or Modbus protocols. Which Modbus registers to read for the setpoint, current value and other values, and how to interpret them, should be documented in some sort of programming manual or similar. The Operation Manual on the site at least doesn't document anything about that.
  18. hoovah is right, LabVIEW does lazy memory deallocation and that is not really a bad thing. Every round trip to the OS to deallocate memory that will often need to be allocated again a little later is a rather costly operation. The drawback is that once LabVIEW gets hold of memory it pretty much keeps it for as long as it can, but it usually (barring any bugs) reuses that memory quite efficiently when necessary. This could be bad for other applications if you have a memory-grabbing LabVIEW application that you don't want to quit, since even when it is not currently munching on huge data it won't give the other applications that memory. It also has another positive side besides performance: once LabVIEW was able to get the memory, another application will not be able to grab that memory and make a second run of the memory-hungry LabVIEW operation suddenly scream about not enough memory. TDMS is a nice tool, but you have to be aware that it can amount to a lot of data and that reading in this data in one big glob can very easily overrun even modestly configured system resources.
  19. Your description is not entirely clear. When you say "I pass from the binary string to an 'image data' cluster", do you mean that you use the JPG/PNG Data to LV Image conversion on that stream? If so, there is to my knowledge a way to do the opposite for the JPG case with the IMAQ Flatten Image to String function, but not in the direction you are looking for. Using the IMAQ Flatten Image Option function together with the standard LabVIEW Flatten and Unflatten functions is unfortunately also not a solution, since the Flatten adds extra info to the compressed data stream that the Unflatten seems to expect (although it seems someone on the NI forums had some success with using the resulting data stream as a normal JPEG stream, until he moved to LabVIEW 64-bit and got bad results).
  20. I only know them from Java, and only enough to use them in not too complicated ways. What they actually do, as far as I understand it, is run as a separate thread (which can be a dedicated thread, or one from a shared thread pool, or a few other variants thereof from the java.util.concurrent package) and do whatever they need to do in the background. If they are there to produce something that will eventually be used at some point in the application, you can end up with a blocking condition nevertheless. But they are very powerful if the actual consumer of the future result is not time constrained, such as for multiple HTTP downloads. If the HTTP client library supports Futures you can simply shoot off multiple downloads by issuing them in a tight loop, letting the Future handle the actual download in the background. It could for instance save the received data to disk and terminate itself after that, freeing the used thread again. If you don't need the actual data in the application itself you could then forget the Future completely once you have issued it. The way I understand it, a Future is some sort of callback that runs in its own thread context for the duration of its lifetime and has some extra functions to manage it, such as cancelling it, checking its status (isDone(), isCancelled()) and even waiting for its result if that need should arise. What the callback itself does is entirely up to the implementer. Depending on the chosen executor model the HTTP client could still block, such as when using a bounded thread pool and issuing more downloads than there are threads available in the pool. All that said, I do think there are problems with implementing Futures in such a way in LabVIEW. LabVIEW does automatic multithreading management in its diagrams with fixed, bounded thread pools. There is no easy way to reconfigure that threading on the fly. So there is no generic way to implement Futures that run off a theoretically unbounded thread pool should that need arise, and if you start to use Futures throughout your application in many other classes you quickly run into thread pool exhaustion anyhow. So I don't see how one could implement Futures in LabVIEW in the same way and still stay generic, which is the idea of the Future implementation in Java. The Future itself does not even define the datatype it operates on, but leaves that to the implementer of the class using the Future. Those are all concepts LabVIEW can't fully keep up with (and one of the reasons I'm not convinced diving into LVOOP is really worth the hassle).
  21. No, C does not have any memory penalty, but if you cast an int32 into a 64-bit pointer, ignore the warning, and then try to access it as a pointer, you crash. LabVIEW is about 3 levels higher than C in terms of high-level languages. It has, among other things, a strong data type system, a rather effective automatic memory management, no need to worry about allocating array buffers before you can use them and releasing them afterwards, virtually no possibility to make it crash (if you don't work on interfacing it to external code), and quite a few other things. This sometimes comes at some cost, such as needing to make sure the buffer is allocated. Also, LabVIEW data types separate into two very different kinds: those that are represented as handles (strings and arrays) and those that aren't. There is no way LabVIEW could "typecast" between these two fundamental kinds without creating a true data copy. And even if you typecast between arrays, such as typecasting a string into an array of 32-bit integers, it can't just do so, since the 32-bit integer array needs to have a byte length that is a multiple of the element size, so if your input string (byte array) length is not evenly divisible by four it will need to create a new buffer anyhow. Currently it does so in all cases, as that makes the code simpler and saves an extra check on the actual fitness of the input array size in the case where both inputs and outputs are arrays. It could of course go and check for the possibility of inlining the Typecast, but if your wire happens to run to any other function that wants to use the string or array inline too, it needs to create a copy anyhow. All in all this smartness would add a lot of code to the Typecast function, for a benefit that is only achievable in special cases.
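As a rough C sketch of the point about element size (a made-up helper, not how LabVIEW actually implements Typecast): reinterpreting a byte buffer as an int32 array means truncating the length to a multiple of the element size and, in the general case, copying the data into a new, properly sized buffer. LabVIEW's Typecast additionally interprets the bytes in big-endian order, which this sketch ignores.
[CODE]
/* Hypothetical sketch: "typecast" a byte string into a new int32 array. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

int32_t *BytesToInt32Array(const uint8_t *bytes, size_t byteLen, size_t *outCount)
{
    size_t count = byteLen / sizeof(int32_t);      /* leftover bytes are dropped */
    int32_t *arr = malloc(count * sizeof(int32_t));
    if (arr == NULL)
        return NULL;
    memcpy(arr, bytes, count * sizeof(int32_t));   /* a real data copy           */
    *outCount = count;
    return arr;
}
[/CODE]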
  22. I only know Futures from Java, where they are an integral part of the java.util.concurrent package. And the established way in Java for this is a blocking get() method on the Future, with an optional timeout. The Future also has a cancel() method. Coding that up without some potential race condition is however not trivial, and Sun certainly had a few tries at it before getting it really working.
  23. I would almost bet my hat that that is also what the "Is NaN/Refnum/.." function does, which likely makes it a similarly fast operation as the typecast with a following bitmask operation. And because of that I would prefer the explicit and clear use of the Is NaN primitive any time over the much less clear "Advanced Operation" hack, even if that would save a few CPU cycles.
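For reference, here is a small C sketch of what such a typecast-plus-bitmask NaN test amounts to (an illustration only, not claiming this is LabVIEW's actual implementation): a double is NaN when all exponent bits are set and the mantissa is nonzero.
[CODE]
/* Bit-pattern NaN test for an IEEE-754 double. */
#include <stdint.h>
#include <string.h>

int IsNaN64(double value)
{
    uint64_t bits;
    memcpy(&bits, &value, sizeof bits);            /* the "typecast" step */
    return (bits & 0x7FF0000000000000ULL) == 0x7FF0000000000000ULL  /* exponent all ones */
        && (bits & 0x000FFFFFFFFFFFFFULL) != 0;                     /* nonzero mantissa  */
}
[/CODE]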
  24. That should work if the NaNs are guaranteed to come from LabVIEW itself, but could fail for the reason you mention if the NaN comes from somewhere else, such as a network bytestream.
  25. Maybe LLVM does that, but I doubt that the original LabVIEW compiler did boolean logic reduction. Even then, the difference is likely very small compared to the time needed for the Variant case, for instance. Two logic operations versus six certainly does not amount to many nanoseconds on modern CPU architectures. And it is clear from a quick glance what asbo did wrong when copying the code: he should have ORed the two isNaN? results, not one of them and the isEqual? result.