Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. Well, out of curiosity I did spend a few hours on this. But it seems Windows is effectively refusing all cooperation in making this functionality work from within LabVIEW. I can create a small Windows executable that uses the functions to assign an imagelist and the thumbbar definitions for the thumbbar buttons just fine, and I can use most of the other TaskbarList methods from within LabVIEW easily, such as the Progress Bar functionality, but any attempt to set the imagelist for the buttons from within LabVIEW fails with a rather useless E_FAIL error message. I'm not sure what that would really be.
  2. Is it? And what did you learn from this awesome look behind the curtains? Yes, there is a function you can use to translate an error code into an error string. But the VI that you looked at does that already for you, without the need to bother about the correct calling convention, parameter type setup and a few other nasty C details. Not sure I see the awesomeness here, other than the desire to feed your own curiosity and find more ways to shoot yourself in the foot.
  3. What the name says: LabVIEW. Basically LabVIEW exports a lot of so-called manager functions that can be called from C code, such as when you write a DLL (or shared library on non-Windows systems). A lot of those manager functions are described in the External Code Reference Manual, which comes as part of the help files in your LabVIEW installation. The LabVIEW library name is a special keyword that tells the Call Library Node to link to whatever the current LabVIEW execution kernel is (LabVIEW.exe in the IDE, lvrt.dll in a built app). Note the case of the letters, which needs to match the official spelling exactly. And before you ask, LabVIEW exports more functions than are described in the manual, such as the one you found. Some are used internally by LabVIEW VIs, but those VIs are usually password protected. Sometimes one slips through the cracks. Most of those undocumented functions make no sense to call outside of a very specific context, and some are rather harmful if not called in a very specific way. A lot of them are really only exported so that LabVIEW can use them itself as a sort of callback, and they don't really make any sense to call from a VI diagram.
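To make that concrete, here is a small, hedged sketch of what calling one documented manager function, MoveBlock, through the Call Library Node involves. The prototype below is the documented one; verify it against the extcode.h shipped with your LabVIEW version before relying on it, and treat the configuration notes as a sketch rather than a recipe:

```c
/* Documented prototype of the MoveBlock manager function (check your
   version's extcode.h / External Code Reference Manual): */
void MoveBlock(const void *src, void *dst, size_t numBytes);

/* Matching Call Library Node configuration (sketch):
     Library name:       LabVIEW      <- exact spelling and case, no path
     Function name:      MoveBlock
     Calling convention: C
     src, dst:           pointer-sized integers (or Adapt to Type)
     numBytes:           unsigned pointer-sized integer                  */
```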
  4. Well, easy! But the hardest part, getting the SCC Provider Interface figured out, is indeed done. Interfacing SVN through the command line interface isn't too complicated, but I was at some point looking to integrate it completely as a DLL, and that is quite a different story. Since it also is a potential maintenance nightmare, I abandoned that approach completely. SVN does normally guarantee backwards compatibility for the SVN command line interface, but no such guarantee exists for the binary API.
  5. If you follow that thread and go to the last post, you can see that Ton has actually already released both the provider as well as the API in the Code Repository. So you just need to download it and give it a test drive.
  6. Daklu, I'm not trying to be difficult here but rather would like to understand how a LVOOP singleton would have much less dependency tree effect here. Yes Obtain and Release would be two different method VIs and the data dependency between these two would be through the LVOOP object wire instead of encapsulated in the single FGV. But that would decouple only the Obtain and Release operation as far as the hierarchy tree is concerned, not the fact that you use this object in various, possibly very loosely coupled clients. I say loosely coupled since they obviously have at least one common dependency, namely the protected resource in the singleton. And while easy extensibility is always nice to have, I'm not sure I see much possibility for that in such singleton objects. Would you care to elaborate on the dependency tree effect in this specific case and maybe also give an example of a desirable extension that would be much harder to add to the FGV than the LVOOP singleton?
  7. Well, an FGV is the most trivial solution in terms of coding effort. It's not as explicit as a specific semaphore around everything and not as OOP as a true singleton class, but in terms of LabVIEW programming it is something truly tried and proven. I would also think that it is probably the most performant solution, as the locking around non-reentrant VIs is a fully inherent operation of LabVIEW's execution scheduling, and I doubt that explicit semaphore calls can be as quick as this. Also, for me it is a natural choice since I use them often, even when the singleton functionality isn't an advantage but a liability, simply because I can whip them out in a short time, control everything I want and don't need to dig into how LVOOP does things. And reading about people's complaints of unstable LabVIEW IDEs when used with LVOOP doesn't exactly make me want to run for it either. I know this sounds like an excuse, but the fact is that I have apparently trained myself to use LabVIEW in a way that exposes very little instability, unless I'm tinkering with DLLs, and especially self-written DLLs during debug time, but that is something I can't possibly blame LabVIEW for.
  8. I believe LabVIEW used to do that too, when it was including the Great Circle or SmartHeap memory manager. Or were those two separate memory manager backends that LabVIEW could use? I'm not sure anymore and the details are fading, but the fact is that LabVIEW had some intermediate memory manager layer on top of the OS memory manager that allowed it to not only debug memory usage on a more detailed level, but also consolidate many individual memory requests by LabVIEW into much fewer and bigger ones for the OS. Great Circle was absorbed at some time and is of little significance nowadays other than for special, badly performing applications that get patched up by using Great Circle. SmartHeap is apparently still on sale at http://www.microquill.com/. So it seems LabVIEW had at some point the option to replace the standard C runtime memory manager functions with different backends, and was also shipped with them at some point. This supposedly helped to debug memory issues, and may still be used internally, but it could be an option to reduce the handle hunger of LabVIEW, so that LabVIEW is a well-behaving application even by Mark Russinovich's standards.
  9. Well, it's not invalid, but it doesn't really do what you want directly. In order for the method to be executed on a VI you have to open a VI reference to it, so the entire VI hierarchy is already loaded, and iterating through the list to open a VI reference to each item does not really do anything useful anymore, as the VI is already in memory. What you would have to do is, at build time, determine your hierarchy, sort that hierarchy in ascending order from lowest-level VI to higher VI, possibly suppressing some of the lowest-level VIs altogether, and then save that information to a configuration file that your executable reads and uses to load the VIs from the bottom up. After each Open VI Reference call you would get more VIs loaded into memory and could use the percentage of already loaded VIs relative to the total number of VIs for your progress bar. But executing your Get VI Hierarchy method node in your executable will cause the progress bar to stall while LabVIEW opens the top-level VI, and then go VERY quickly through the Open VI References, since it doesn't really need to do much anymore but find the already loaded VI somewhere in memory.
  10. My natural instinct here would say: if you try to implement a singleton, which you apparently do, why not replace the queue with a Functional Global Variable VI that exposes Obtain and Release methods? That also allows you to include any internal refcounting that you may require. The Obtain case creates your .Net object when the refcount is 0 and always increments the refcount, returning that .Net refnum, and the Release case decrements the refcount and closes the .Net refnum when it reaches 0. Since everything from testing the refcount to acting on it accordingly happens inside the FGV, the problem of potential race conditions doesn't even exist. I'm sure this could be done with the singleton LVOOP pattern too, but functional global variables are ideal in LabVIEW to implement singletons. No need for any semaphores or whatever else to avoid potential race conditions. A SEQ may seem great to implement storage of a singleton object without having to create a separate VI, but if you need any kind of control over this object's lifetime, an FGV is the preferred choice, since it allows implementing the lifetime management inside the FGV without the danger of creating race conditions.
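For readers who think in C rather than G, the Obtain/Release refcounting described above boils down to the following sketch. The names and the Resource type are made up for illustration; in the FGV the serialization comes for free from the non-reentrant VI boundary, which a real C implementation would have to add with a mutex:

```c
#include <stdlib.h>

/* Stand-in for the protected resource (e.g. the .Net refnum). */
typedef struct { int placeholder; } Resource;

static Resource *instance = NULL;
static int refcount = 0;

/* Obtain: create the resource on first use, then bump the refcount.
   In the FGV this whole body is implicitly serialized because the VI
   is non-reentrant; plain C would need a mutex around it. */
Resource *Obtain(void)
{
    if (refcount == 0)
        instance = malloc(sizeof *instance);
    refcount++;
    return instance;
}

/* Release: drop the refcount and destroy the resource at zero. */
void Release(void)
{
    if (refcount > 0 && --refcount == 0)
    {
        free(instance);
        instance = NULL;
    }
}
```

Because the test of the refcount and the action on it live in one serialized body, no caller can observe a half-updated state, which is exactly the race-condition argument made above.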
  11. The most straightforward way to use them is to write C code, and for that you need a C compiler such as MS Visual Studio or LabWindows CVI. In a time long, long ago this was used to create so-called CINs, but they are part of a long gone era, and the way nowadays is to create DLLs (shared libraries on non-Windows platforms) and incorporate those shared libraries/DLLs into LabVIEW using the Call Library Node. In order for the C compiler to be able to find the declarations of those functions you have to #include "extcode.h" and possibly other include files from the <LabVIEW>/cintools directory. You also link the resulting code with labviewv.lib from the same directory, so that the resulting DLL can be linked and later knows how to link to those manager functions in LabVIEW at runtime. Now if you need to call only very few of those manager functions, there is a little trick: you define in the Call Library Node as library name "LabVIEW" (without quotes, and case does matter!!) and then you can configure the Call Library Node to match the definition of the manager function, and LabVIEW will call into itself and execute that function. This is however an advanced feature. You do need to know how to configure the Call Library Node correctly, you need to be familiar with how LabVIEW stores its native data in memory, and you should also be very savvy about pointers and such in general. Also it is not very maintenance friendly, since you have to fiddle in the LabVIEW diagram with C intricacies, and every time you need to change something you have to go in there again and try to find out what you did last time.
Editing a C file and creating a new DLL/shared library is in the long term so much easier, and once you end up calling more than a few C functions through the Call Library Node you really want to place the complicated C details in a C module and load that into LabVIEW through the Call Library Node, instead of making complicated push-ups and more complicated fitness exercises on the LabVIEW diagram. What your problem is, I'm not sure, but if it is that you do not get the entire string since there seem to be NULL bytes already at the beginning, then you probably got UTF-16 strings. LabVIEW does NOT use Unicode strings at all internally but always uses the MBCS encoding, and all LabVIEW manager string functions consequently operate on MBCS strings, so they can't deal with UTF-16 strings at all. In that case you better use Windows API functions that can work with WCHAR strings, or alternatively you have to convert the Unicode string into MBCS early on using the Windows API function WideCharToMultiByte().
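As a small illustration of the manager functions involved: a DLL export that copies a C string into a LabVIEW string handle might look like the sketch below. FillLVString is a made-up name, and the calls should be double-checked against the External Code Reference Manual; a LabVIEW string is a handle to an int32 length followed by the unterminated bytes.

```c
#include <string.h>
#include "extcode.h"  /* from <LabVIEW>/cintools; link against labviewv.lib */

/* Hypothetical export: copy a NUL-terminated MBCS string into a
   LabVIEW string handle passed in from the diagram. */
MgErr FillLVString(LStrHandle *strOut, const char *src)
{
    size_t len = strlen(src);
    /* Resize the handle to hold len uInt8 elements (uB type code). */
    MgErr err = NumericArrayResize(uB, 1, (UHandle*)strOut, len);
    if (!err)
    {
        MoveBlock(src, LStrBuf(**strOut), len); /* copy the bytes */
        LStrLen(**strOut) = (int32)len;         /* set the length field */
    }
    return err;
}
```

On the diagram you would call this through a Call Library Node with the string parameter configured as Adapt to Type, passed as a pointer to the handle, so LabVIEW manages the handle's lifetime.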
  12. I didn't mean to suggest calling this function with the Call Library Node. While it would work for this one (and IID_PPV_ARGS() is just a casting macro to make sure the parameter passes to the function without compiler warnings), the resulting ptbl is a pointer to a virtual table dispatch structure, and that is a bit nasty to reference from a Call Library Node. However, there is no way without invoking at least 3 or so methods from that virtual table to implement anything useful in terms of TaskBar app thumbnails and friends. My only acceptable solution to this would be to write a little C code that is compiled into a DLL and then called from LabVIEW. This is also mandated, in my opinion, by the fact that any feedback from the Taskbar to the application is done through the Windows message queue. Hooking that, while possible with the LabVIEW Windows message queue library floating around, is a painful process, and much more cleanly done in the same C code DLL. As to creating your mentioned C header parser: believe me, you don't want to go there. I implemented such a beast for a project where I had to adapt to register definitions for a CAN device based on C type declarations in a database-like structure. It "only" had to be able to identify the basic C types, structures and arrays and the typedefs made from them. And it already was a major task, and one that, while it worked, I didn't particularly feel confident about making changes to without breaking something else in it. Without an extensive unit test framework such a thing is already unmaintainable in any form. And while it theoretically also could parse function declarations, that part never got tested at all, since it was not a requirement for the task at hand. And I'm sure there are still many C header nasties that program can't parse properly.
The C header parser used in the Import Library Wizard is most likely a bit further along than the one I had written, but it also has its limits, and very specifically it only maintains the information it requires to create the Call Library Node configuration for a particular function. This means that enum values are simply discarded, as the only information this library needs is the actual size of the enum, not its detailed definition. Same here. Add to this the fact that .Net automatically limits you to Windows only (no, Mono is no solution, as LabVIEW still lacks the .Net support on non-Windows platforms to even theoretically be able to call Mono. In practice it would almost surely fail even if it had such support.) This may seem like a moot point in this case with Windows shell integration, but I'm regularly working on other things where multi-platform support is not only an option but sometimes a requirement. And once you start to go down the C DLL path for such things, you really don't feel like learning yet another programming environment like .Net, which is not only heavyweight and clunky but also limits you to a specific platform. I'm also sure that the frequent versioning of .Net is not just a coincidence but a strategy, to make following it a true challenge both for Mono as well as for application development environments competing with MS's Visual development offerings. By avoiding it whenever possible, I'm not limited by this strategy. From my little exposure to .Net so far, I have to say it is quite amazing how much they copied from Java. Many libraries use the exact same naming, only adapted to the .Net naming convention of using function names that start with an uppercase letter instead of a lowercase letter as Java uses. It almost feels like someone took the Java interfaces and put them through a tool to convert type names and function names to a different naming convention, just for the sake of not being blamed for taking them verbatim.
  13. My first hunch, before hearing about the password-protected DB VIs, was also the DB task. I know of some problems with result sets in the NI DB Toolkit. But as Shaun already said, I can't imagine it being the NI DB Toolkit, since that one is NOT password protected. And as far as I remember, the result set memory leak exists in the IDE too.
  14. [sarcasm] I recommend assembler for that! It has no restrictions in IDE-enforced edit-time delays, since it only needs the simplest text editor you can envision. Never mind the rest of the experience. [/sarcasm]
  15. I came across another cause of slowdown a few years back. I inherited a machine control application that consisted of one main VI over 10MB in size, and a whole bunch of subVIs doing stupidly little. The main VI consisted of several stacked sequence structures with up to 100 frames, and almost every frame contained a case structure that was only enabled based on a global string that contained, in fact, the next state to execute. Basically it was a state machine, with the sequence frames consisting of groupings of states, and to get to one particular state LabVIEW had to run through each sequence frame, most of them doing nothing since the case structure inside did not react to the current state. Talk about a state machine turned inside out and upside down and then again some. Selecting a wire, node or subVI in that diagram was slooooooooooow, as each of these actions caused a several seconds delay. Moving a node was equally slow, as each move caused a similar delay. It was a total pain to do anything on that application, and it needed work as it had several bugs. I invested several days restructuring the architecture into a real state machine design and placing several logical sub-state-machines into their own subVIs, also removing about 99% of the several thousand globals, and once I had done that, the whole editing got snappy again. The main VI had been reduced to about 1MB, and the different sub-state-machine VIs together took maybe another 4MB of disk space. Replacing all the globals with a few shift registers had slashed the disk footprint of the VIs almost in half.
I did make the driver VIs a bit smarter by moving some of the logic into them instead of copying the same operation repeatedly in the main VI, but getting rid of the enormous number of sequence frames as well as the many globals, which were mostly only a workaround for the missing shift registers, both got the disk size down a lot and made editing a useful operation again instead of simply an annoyance. Surprisingly enough, the runtime performance of the original application wasn't really bad, but the redesigned system hardly ever got over 5% CPU usage, even when it was busy controlling all the serial devices and motion systems. How the original programmer was ever able to finish his system with such an architecture and the horrible edit delays for almost every single mouse move, I can't imagine. He for sure wasn't a LabVIEW programmer, and I later learned that he had only done Visual Basic applications before.
  16. From what I read on MSDN it is fairly easy to do that from C/C++. You just want to use the ITaskbarList3 interface in shobjidl.h. A single call to

[CODE]
// Create an instance of ITaskbarList3
ITaskbarList3 *ptbl;
HRESULT hr = CoCreateInstance(CLSID_TaskbarList, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&ptbl));
[/CODE]

and then you can call the relevant methods of the ITaskbarList3 COM object. This even works from pure C.

[CODE]
if (SUCCEEDED(hr))
{
    // Declare the image list that contains the button images.
    hr = ptbl->ThumbBarSetImageList(hwnd, himl);
    if (SUCCEEDED(hr))
    {
        // Attach the toolbar to the thumbnail
        hr = ptbl->ThumbBarAddButtons(hwnd, ARRAYSIZE(thbButtons), &thbButtons);
    }
    ptbl->Release();
}
return hr;
[/CODE]

However, handling of events happens through Windows messages, so you have to hook the Windows message queue in LabVIEW, and this is where things always get a bit hairy with Windows shell integration in LabVIEW. Same about ShellNotify and other such things. Same here! A fun project to do, but with very little real-world benefit for the type of application we generally do in LabVIEW.
  17. Let's see. I'm by no means a Java crack and am not sure I ever will be. My interest in Java only started when I wanted to do some stuff under Android, mostly for fun so far. Incidentally, looking at the Java scene I think the flair of the days when Sun was wielding the scepter has mostly gone. Oracle seems to be seen by many as an unfriendly king and by some even as a hostile tyrant. Google sort of took over a bit but tries to keep a low profile in terms of Java. They just use the idea but don't really promote it at all, probably also because of the lawsuits. I think Oracle has done the single most effective move to kill Java as an idea with their recent actions. But let's get back to your question: there is the java.util.Queue interface with its many incarnations like AbstractQueue, ArrayBlockingQueue, ConcurrentLinkedQueue, DelayQueue, LinkedBlockingQueue, LinkedList, PriorityBlockingQueue, PriorityQueue, SynchronousQueue. Then we have the inherent synchronized (obj) { } keyword that can be used on any object to create blocks around code that needs to be protected. Only a single block of code can be inside a synchronized block for a specific object at any time. And last but not least there are the notify() and wait() methods that the Java Object class implements, from which every other Java object is derived directly or through other classes. These two methods are a bit special, since they actually work together with the synchronized (obj) keyword. I haven't fully mastered this part yet, but I had trouble using those methods unless the code block in which they are called is protected with the synchronized (obj) keyword, though my reading suggests that this should not always be necessary. You probably have a point there.
All I can say to that is that it seems cleaner to me to have a specific object interface be able to manage its thread needs on its own by using the appropriate thread policy, such as a threadpool with whatever limits seem useful, and let the proven OS implementation handle the distribution of the threads over whatever cores are available. It probably won't speed up things at all to do so, but I just like the idea of being in control if I feel the need is there. On the other hand, I have nothing to complain about in how LabVIEW handles concurrent parallel execution so far. It seems just as capable of handling multiple code parts that can operate in parallel with the available CPU power as a Java program that uses several threads. So in the end there may be nothing really left but the feeling of having more control over things, which is also sometimes a reason for me to implement certain things in C and incorporate them into LabVIEW as a shared library. And because of that you respond to it!? I probably will try to take a look at your project. Always interesting to see new things, even though I'm not sure I will grok the whole picture.
  18. It's not the Open VI Reference node in itself that blocks the UI thread but the load operation of a VI hierarchy. This piece of code is very delicate, as it needs to update large amounts of global tables that keep track of all loaded VIs, their relationship to each other, linking information and whatever else I couldn't even dream up. So yes, I'm sure the Open VI Reference and the application load operation do use the same LoadVIHierarchy() function, and that this function is the UI-blocking culprit. And I'm afraid calling a user VI as a callback during this operation could be fairly costly. LabVIEW doesn't lock the UI thread for fun during this operation, and invoking VIs may likely have an impact on the resources that this locking tries to protect, so the lock might have to be released for the duration of the VI invoke. This could add up to quite a bit more than a few 100 µs for each call. Anyhow, even if you get this callback that tells you about the operation, be it a user event or a specific VI invoke operation, how do you propose to give any feedback to the user if the UI thread is blocked?
  19. Why would you use string arrays, if the example you gave in your first post only contains numbers? Also, I think your data structure design is highly flawed, if the data in the first post is all you want to sort. What you should look at is using a 1D array of integers for your first array, and a 1D array of clusters containing an integer and a float for your second array and the result.
  20. Shaun, the problem with this is that events are processed in the UI thread, and the Open VI Reference blocks the UI thread very effectively. The reason for that is that it needs to update various global tables frequently during the load and can't have any other part of LabVIEW potentially messing with these tables while it is busy. And unlocking the UI thread for the duration of the event is a bad idea, as it would inevitably add a significant overhead to the load operation that would without doubt be noticeable. What I do is place an animated GIF on the splash screen. It runs even when the VI is not executing, but a user of the application doesn't notice that. However, it is only a partly satisfactory solution, since the GIF does animate properly in the IDE during the Open VI Reference call, but seems to momentarily stop in a built executable anyhow. I solve it by making the main VI itself load various components dynamically, so that the load operation is really divided into several Open VI Reference operations. The splash screen simply has a string control that the main VI can send messages to for status information, and it waits until the main VI decides to open its front panel, indicating that it is ready to take over from then on. I don't think that suggestion is well thought out. A separate SSH protocol library is of very little use, as it only implements the SSH protocol itself but wouldn't make the underlying SSL capabilities available to other protocols. I would rather see SSL support for TCP sockets that can be configured transparently. I tried to start with something like this in the network library that I posted quite some time ago here on lavag. The idea is that it is possible to set up SSL parameters and register them for a network socket, so it uses them automatically.
This is because SSL really is meant to be transparent, in terms of protocol handling, to higher-level protocols like HTTPS, which is really just the HTTP protocol transferred through an SSL-secured socket.
  21. I think you should not only terminate on a Connection Closed by Peer error but on just about any possible error, except maybe the timeout error, although that is debatable too. And yes, you would want to filter the Connection Closed by Peer error after the while loop, since that is a valid way to terminate the connection and not an error in terms of the user of your HTTP Get function. But error 56 is definitely a timeout error; Connection Closed by Peer is error code 66.
  22. Good point that dataflow inherently solves some cases that futures might be used for in conventional languages.
  23. Well, you may want it at some point, but not necessarily in the same application, and just as a copy of files on the harddisk. But you are right, whatever you want, it is usually never as trivial as just shooting off the request and forgetting it. You usually want, for instance, to be informed if one of the downloads didn't succeed, and you also want a way to not have the library wait forever on never-received data. Java threading can be a bit more powerful than just blindly spawning threads. If you make consistent use of the java.util.concurrent package, you can create very powerful systems that employ various forms of multithreading with very little programming effort. At the core are so-called executors that can have various characteristics such as single threads, bounded and unbounded threadpools and even scheduled variants of them. Especially thread pools are very handy, since creation and destruction of threads is a very expensive operation, specifically under Windows. By using threadpools you only pay that penalty once and can still dynamically assign new "tasks" to those threads. You are fully right here, but the actual threadpool configuration in LabVIEW is very static. There is a VI in vi.lib that you can use to configure the number of threads for each execution system, but this used to require a LabVIEW restart in order to make those changes effective. Not sure if that is still the case in the latest LabVIEW versions. LabVIEW itself still implements some sort of cooperative multithreading on top of the OS multithread support, just as it always did even before LabVIEW got OS thread support in 5.0 or 5.1. So I would guess that the situation is not necessarily as bad, since you can run multiple VIs in the same execution system and LabVIEW will distribute the available threads over the actual LabVIEW clumps, as they call the indivisible code sequences they identify and schedule in their homebrew cooperative multitasking system.
You do have to be careful however about blocking functions that cause a switch to the UI thread, as that could completely defeat the purpose of any architecture trying to implement parallel code execution. Not sure how the new Call Asynchronous fits into this. I would hope they retained all the advantages of the synchronous Call By Reference but added some way of actually executing the according call in some parallel thread-like system, much like the Run Method does. But I haven't looked into that yet. Edit: I just read up on it a bit, and the Asynchronous Call By Reference looks very much like a Future in itself. No need to employ LVOOP for it, jupiieee! I never intended to say it would be useless, just not as generic and powerful as the Java version. Lol, I so much missed the hooded smiley here that is sometimes available on other boards. Should have scrolled through the list instead of assuming it isn't there. I knew I was poking at some people's cookies with this, and that is part of the reason I even put it there.
  24. According to the manual it uses an RS-485 interface and supports the CompoWay/F, SYSWAY, or Modbus protocols. Which Modbus registers to read for the setpoint, current value and other values, and how to interpret them, should be documented in some sort of programming manual or such. The Operation Manual on the site at least doesn't document anything about that.