Everything posted by Rolf Kalbermatter

  1. In general, unlike the scripting stuff, these are more general methods and properties added for various reasons during the development of LabVIEW, usually to allow a certain LabVIEW tool to do something. Those methods, while there, do not receive the same attention in terms of maintenance, unit test coverage and, of course, documentation. They can and sometimes do break under use cases other than the intended ones, are left out in the cold when NI creates a new LabVIEW version, and are simply the unloved stepchild in terms of care and maintenance in general. Whoever added them for their specific tool is responsible for making sure they keep working in newer releases, but it's very likely that a developer adding a new feature to LabVIEW isn't aware of some of them in the first place and in the process breaks them horribly, and because of the limited unit test coverage such a breakage may not get discovered. So in conclusion: play with them, have fun, and enjoy the feeling of having a privileged inside view into LabVIEW, but DON'T use them for anything you want to keep working across new LabVIEW versions without breaking your code, especially if you plan to develop something that might end up being used by people other than yourself.
  2. Let me comment on some of these things. Full disclosure: I'm currently maintaining LuaVIEW and I'm the lone LabPython programmer, who did this in the first place to find out how the script node could be used by someone outside of NI. And once I had that, I realized that wrapping those functions into VIs would allow real dynamic access to the Python engine. At about the same time my colleague started to develop LuaVIEW for a rather large customer project. We had quite some fun arguing over whether Lua or Python was the better language. While that is a matter of personal taste, it is clear that Lua is a very self-contained and extremely compact scripting environment that is much easier to embed in other systems like LabVIEW. In fact Python, at that time at least, had no real intention to actively support embedding of its engine in other environments. The API was there and it could be done, but the Python developer community was in general unresponsive to any suggestions for improvements in that area. Unlike LabPython, LuaVIEW does NOT have a script node interface but only a VI interface that not only allows but in fact requires passing a script at runtime. While LuaVIEW doesn't do that out of the box currently, it would not be too complicated a project to develop that. But I'm not convinced about the need for it. Aside from the fact that LuaVIEW is free for non-commercial use, the initial purchase costs are usually the smallest part of a project's cost. Any decent software programmer will incur the license costs of a commercial LuaVIEW license in two days of programming an alternative solution. Two days is very little time for such a thing as a scripting engine.
  3. If you think about it for a few seconds you will recognize that this is true. When a path is passed in, LabVIEW has to verify at every call that the path has not changed with respect to the last call. That is not a lot of CPU cycles for a single call, but it can add up if you call many Call Library Nodes like that, especially in loops. So if the DLL name doesn't really change, it's a lot better to use the library name in the configuration dialog, as then LabVIEW will only evaluate the path once at load time and never again afterwards. If it didn't do this check, the performance of the Call Library Node would be abominably bad, since loading and unloading DLLs is a real performance killer, next to which this path comparison is just a micro delay. If I had to guess, using an unchanging diagram path adds maybe 100 µs, maybe a bit more, but compare that to the overhead of the Call Library Node itself, which is in the range of single microseconds. Comparing paths for equality is the most expensive comparison operation, as you can only establish equality after you have compared every single element and character in them. Inequality takes on average half the execution time, since you can break out of the comparison at the first occurrence of a difference. A rough C sketch of that early-out logic follows below.
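
     To illustrate the point (just a minimal C sketch of the principle, not LabVIEW's actual implementation): a comparison can return "different" at the first mismatching character, but has to walk both strings to the end to prove equality.

        #include <stddef.h>

        /* Sketch: compare two path strings for equality. The loop can bail
           out at the first difference, which is why unequal paths compare
           faster on average than equal ones, where every single character
           must be visited before equality is proven. */
        static int paths_equal(const char *a, const char *b)
        {
            size_t i = 0;
            for (;;)
            {
                if (a[i] != b[i])
                    return 0;        /* early out: proven different */
                if (a[i] == '\0')
                    return 1;        /* reached the end: proven equal */
                i++;
            }
        }
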
  4. I have implemented a system based on TCP communication in a similar way to the STM Reference Design from NI. Technically however the cRIO (or Compact FieldPoint) is the server, and the PC(s) are the clients. This has worked out quite well for isolated systems, meaning I haven't used it with a multitude of RT controllers on the same subnet. Instead what I typically have is one or two RT controllers that don't really talk to each other, and one or more operator stations and touch panel monitors that communicate with the controller(s) over this link. The communication protocol of course allows data transfer for the underlying tag-based system, similar to the CVT Reference Design, resetting and shutting down the controller, and also updating the CVT tag configuration. Since it only operates on isolated subnets I have not implemented any form of authentication in the protocol itself. NSVs, or their bigger brother the Network Streams, are interesting when quickly putting together a system, but I like to have more control over how the system is configured and operates, and have even created a small Android client that can communicate directly through my protocol, something you simply can't do with proprietary closed-source protocols. The general framing idea is sketched below.
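
     For readers unfamiliar with the STM approach: the wire format essentially boils down to a size-prefixed frame carrying a metadata ID. Below is a minimal C sketch of such a frame writer; the field sizes and byte order are my assumptions for illustration, not the exact STM Reference Design layout.

        #include <stdint.h>
        #include <string.h>

        /* Hypothetical STM-style frame: [u32 payload size][u16 meta ID][payload],
           multi-byte fields in network byte order. Returns the frame length. */
        static size_t write_frame(uint8_t *buf, uint16_t meta_id,
                                  const uint8_t *payload, uint32_t len)
        {
            buf[0] = (uint8_t)(len >> 24);
            buf[1] = (uint8_t)(len >> 16);
            buf[2] = (uint8_t)(len >> 8);
            buf[3] = (uint8_t)len;
            buf[4] = (uint8_t)(meta_id >> 8);
            buf[5] = (uint8_t)meta_id;
            memcpy(&buf[6], payload, len);
            return 6 + (size_t)len;
        }
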
  5. Well, the rotation can be handled by some Transpose Array, I would assume. It's not a big problem right now. And if you create a U8 greyscale IMAQ image, you'd better connect a U8 data array to the U8 input of your IMAQ ArrayToImage function. But why do you say your numeric values are integers between 0 and 255? The Z value in the intensity graph only shows 0 and 1 in the cursor display. So I really very much doubt that your values are between 0 and 255, and they are definitely not U8 but rather floating point values. So what are the minimum and maximum values in your 2D array?
  6. Of course you should also select a compatible Image Type for IMAQ Create.vi. If your statement is true that the values are between 0 and 255, I would expect the image type Grayscale (U8) to work.
  7. Well, the IMAQ ArrayToImage.vi of course has several inputs. Depending on the input you use, you may need to scale the intensity data to get a reasonable result. Not having seen the data in your ASCII file yet, I can't really say much as to what scaling you may need. But assuming that you have for instance integer values between 0 and 255, you should connect the array to the U8 input; for values between 0 and 65535 you should connect it to the U16 array input, and so on. This VI will only create monochrome images, but I assume that is all you need, since the intensity graph only really displays single-plane data too. A sketch of the kind of scaling I mean follows below.
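
     Just to illustrate what I mean by scaling, here is a minimal C sketch (not IMAQ code; the min/max normalization is simply an assumption about what such data typically needs):

        #include <stdint.h>
        #include <stddef.h>

        /* Sketch: map floating point intensity data onto the full 0..255
           range expected by the U8 input. Assumes max > min. */
        static void scale_to_u8(const double *src, uint8_t *dst, size_t n,
                                double min, double max)
        {
            double range = max - min;
            for (size_t i = 0; i < n; i++)
                dst[i] = (uint8_t)((src[i] - min) / range * 255.0 + 0.5);
        }
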
  8. Sorry, I mistyped there, it is 3.7.13. Are you adding any LabVIEW-specific C wrapper code to the DLL? Because if you don't, or you separate that code into its own DLL, it is really just a drop-in replacement for sqlite3.dll. The GCC compiler itself is quite unlikely to be the culprit, as it is in itself quite agnostic of the underlying target platform. My guess would be the MinGW C runtime libraries, and here specifically the startup stub that wraps your DllMain() function. It may be possible to avoid that by selecting a different C runtime option or target subsystem. I'm not sure if the MinGW toolchain provides different subsystem options for DLL targets. I'm also not really sure what build toolchain they use at SQLite themselves for the released binaries, but I have some doubts that Richard would be using anything not GCC based. For reference, the entry point in question is sketched below.
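
     Here is a minimal sketch of the Win32 entry point that the C runtime startup stub ends up wrapping (nothing in this sketch is MinGW specific; the stub, e.g. DllMainCRTStartup, is what differs between toolchains):

        #include <windows.h>

        /* The user-visible DLL entry point. The C runtime's startup stub
           runs first to initialize the runtime and then forwards calls
           here; that stub is where MSVC and MinGW builds differ. */
        BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
        {
            switch (fdwReason)
            {
                case DLL_PROCESS_ATTACH:   /* DLL loaded into the process  */
                case DLL_THREAD_ATTACH:
                case DLL_THREAD_DETACH:
                case DLL_PROCESS_DETACH:   /* DLL unloaded from the process */
                    break;
            }
            return TRUE;
        }
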
  9. I can't speak for NI and don't know that document, but I'm 99.99% sure that it has a big watermark across the front page stating "Company Confidential". Quite possibly this watermark is repeated on every single page. Besides that, it would be of no significant use to us LabVIEW users, as it refers, among other things, to various places in the LabVIEW C++ source code where specific provisions need to be added for the new node, the internal daily unit test framework run that needs to be enhanced to test the new node, the fact that the documentation department needs to be informed about writing a new help section for this node, and probably a few other things that leave everyone not working in the LabVIEW team flabbergasted. And that document most likely isn't the most popular bedtime reading of any LabVIEW development team member either. In other words, if you want to see this document you will need to apply at NI as a LabVIEW developer and hope to be accepted.
  10. Things aren't usually as simple as they seem, or as some tagline says that I read here or on the NI forum: "If a problem seems simple, I haven't understood the problem yet." NI can't just take an existing property node and change its behavior without a lot of thought. Otherwise applications that have worked in previous versions suddenly start to do very weird things after upgrading to a new version. So they pretty much have to leave property nodes alone as soon as they let them out into the wild. Chances are that there was a real brainstorming session about exactly this when they added the scrollbar to the plot legend, and that several smart heads in the team came up with several reasons why changing the "Number of Rows" property to reduce the number of plots is not a good idea in that case. They therefore added the "Legend:Plot Minimum" property to the graph, which should do what you want, if I understand your problem correctly.

      That an application engineer doesn't always know about every possible property out there is not that amazing either. They can't spend an hour on every support call, or their manager will start to breathe down their neck about why they close so few support calls. And since the enhanced plot legend is a new feature in 2011, it is not very likely that any of the other AEs within at least 50 cubicles' distance would know the answer off the top of their head either.

      I have to admit I have trouble imagining a mechanism that would allow that much customization of controls without opening up the LabVIEW object handling at the C++ API level, with all the nasty chances of NULL pointer exceptions and out-of-bounds memory accesses, as well as a versioning nightmare if you want these controls to survive the move from LabVIEW 20xx to 20xx + 1. And it would definitely be even more complex than XControls. LabVIEW had just such an API in its early days, which exposed the front panel object event dispatch table to external code. But this object dispatch table had to be modified with every new version of LabVIEW, which made the idea of external controls based on it quite useless, since they wouldn't have survived an upgrade to a new LabVIEW version. So that interface was left in limbo in LabVIEW 4 and entirely removed around LabVIEW 5.
  11. I thought, if the source code doesn't show these, we have to look into the binary. But there seems to be no reference to these three APIs anywhere. I've used the DLL Checker, a LabVIEW VI listing the import section of a DLL, Dependency Walker, and even looked directly at the disassembly of the DLL, but I can find no reference to any of these three APIs anywhere. So which DLL are you looking at, the official 3.7.14 from the SQLite site? How do you determine that these APIs are required? Attached is the report produced by the LV 2010 DLL Checker for the latest sqlite3.dll from their site. Note that you will probably have to tackle the stubbed imports too, or at least check that there is no chance for the code to run through those code paths on an embedded system. The basic approach behind such an import listing is sketched below. sqllite3.report.txt
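
      For anyone wanting to replicate such an import listing outside of LabVIEW, here is a minimal C sketch using the Win32 Dbghelp API (error handling mostly omitted; this only lists the names of the imported DLLs, not the individual imported functions):

         #include <windows.h>
         #include <dbghelp.h>
         #include <stdio.h>
         #pragma comment(lib, "dbghelp.lib")

         int main(int argc, char *argv[])
         {
             if (argc < 2)
                 return 1;

             /* Map the DLL without running its entry point or resolving imports */
             HMODULE mod = LoadLibraryExA(argv[1], NULL, DONT_RESOLVE_DLL_REFERENCES);
             if (!mod)
                 return 1;

             ULONG size = 0;
             IMAGE_IMPORT_DESCRIPTOR *desc = (IMAGE_IMPORT_DESCRIPTOR *)
                 ImageDirectoryEntryToData(mod, TRUE, IMAGE_DIRECTORY_ENTRY_IMPORT, &size);

             /* Walk the import descriptor table; Name is an RVA to the DLL name */
             for (; desc && desc->Name; desc++)
                 printf("imports: %s\n", (char *)mod + desc->Name);

             FreeLibrary(mod);
             return 0;
         }
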
  12. Windows Taskbar handling

  13. I only understand Chinese here. Sorry, but what are you trying to do here??? An ASCII file that contains the source for an intensity graph? Typecasting an Intensity Graph to an Image Display.ctl?? Those are two entirely different types of data: the Intensity Graph is a 2D array of numbers displayed in a particular way, while the Image Display.ctl is the IMAQ control to display bitmap data. Typecasting only works for data that is interchangeable from one format to the other without changing the memory content, and that is not a possibility between these two data types. Most likely what you want to do is take your Intensity Display 2D data and display it in the IMAQ display in a similar way. But that requires a conversion, not a typecast. One possibility for this would be to use the IMAQ ArrayToImage.vi function from the IMAQ Toolkit. I would assume that you have that Toolkit installed if you have the Image Display.ctl in your palettes. The difference between a typecast and a conversion is sketched below.
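
      In C terms, the difference between a Typecast and a real conversion looks roughly like this (a sketch purely for illustration):

         #include <stdint.h>
         #include <string.h>
         #include <stdio.h>

         int main(void)
         {
             float f = 1.0f;

             /* "Typecast": reinterpret the same memory content as another type.
                Only meaningful when both types share a compatible layout. */
             uint32_t bits;
             memcpy(&bits, &f, sizeof bits);
             printf("same bytes reread as U32: 0x%08X\n", bits);  /* 0x3F800000 */

             /* Conversion: actually transform the value into the new
                representation, changing the memory content. */
             uint32_t value = (uint32_t)f;
             printf("value converted to U32:   %u\n", value);     /* 1 */
             return 0;
         }
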
  14. Unless your system changes in the meantime. Windows Update, anyone? A new driver installation? A harddisk cleanup or replacement? I mean, a solution like that is probably OK for your specific private application, but it is definitely not an option for an NI-supplied example, which is supposed to do the "right"™ thing.
  15. Well, there is always the possibility of disagreement with a certain implementation, and I'm not saying the way they are now is without real quirks, but graphs are indeed a very complicated beast. From the early days, when graphs had far fewer options (and property nodes were just a very limited possibility), one sometimes had to do rather adventurous things to get a certain visual feature. What I remember from then is that there are sometimes totally contradicting requirements for a certain operation. You couldn't implement one thing without sacrificing something else, and that was not only because of the limited control you had with property nodes and all, but often also a fundamental problem. On one side you do not want a popup menu with 300 options to select from; on the other hand you want it as flexible as possible without needing to go into complicated property node voodoo. And I'm convinced there is no way to fulfill both requirements at once. Also, when adding a new option to a control like a graph, there will always be at least one corner case that yet again needs special handling, and with today's complicated graphs quite likely a few dozen of them, and they are easy to miss even with very involved testing and user involvement.
  16. I'm not sure what you are asking here. Do you want to know why NI didn't make a yellow node out of this function? If that is your question, I can think of a few reasons. It's much easier to add an "undocumented" VI in vi.lib calling back into LabVIEW than to add a new node to LabVIEW. A new node needs an icon, help, and several more resources embedded in the executable. That is a lot of extra work in terms of implementation, testing and verification. A VI is easily added to vi.lib, and can be adapted, tested, modified and documented by non-core LabVIEW developers, and, if the need should arise and it didn't get documented in the meantime, changed, removed, or whatever else. Adding a private export of a C function only requires the expertise of the LabVIEW core developer who works on that code, not the expertise of several people working on various parts of the whole LabVIEW core.

      A developer of a new tool in LabVIEW who finds the need to access a specific internal data structure can either file a new proposal to add an (undocumented) node, and wait until the powers that be have decided that this is a good idea, developer resources have been assigned to work on it, and testing and documentation have had their say, or simply add that export to the exported LabVIEW functions, create a private VI to access it, and be done. Sure, such functionality might be an interesting candidate to turn into a node at some point, but chances are that nobody will look back once it's done and working. So it's a shortcut to add functionality to LabVIEW that a new tool might require, without having to go through a whole bunch of modifications to the LabVIEW core itself.

      Since the password protection is now seriously broken and can't be used to prevent people from going into such VIs to shoot themselves in the foot, they will probably change policies in the future and move a lot more in the direction of new (possibly undocumented) nodes to expose such functionality. Undocumented, because once a function has been documented it can't really be removed anymore, or even just modified, without a lot of hassle.
  17. Believe me, you do not want to tinker with that. It's deep in the DCOM internals, and after a few more hours debugging through disassembly, and even into the Windows internals, I've figured it out. It was a combination of comctl32 side-by-side assembly versioning and DCOM marshalling due to apartment threading limitations, caused by the fact that DCOM is still based on OLE and its Windows 3.1 heritage. So I am now able to get some thumbbar buttons to draw and even return user events to LabVIEW. I'm going to do a little more cleanup and will then post the VI library.

      I think saying that this should be an IDE feature is a way too strong statement. It's a funny Windows gadget, much like toolbar ribbons, but its implementation has a few limitations and its API is quite awkward (see the sketch below for what the raw API looks like). You wouldn't want to deal with the Taskbar API as-is from a LabVIEW application, as there are simply too many things you can do wrong and mess up for good. I'm trying to hide some of that complexity in the LabVIEW library I'm currently working on, but I'm not sure it will be possible to make it idiot proof, and as we all know engineers are even worse.

      Another point I have read on some blog about the Windows Taskbar excitement of many users applies here as well. You should NOT implement Windows taskbar functionality in your application just because you can!! It's a decision that needs to be thought out seriously and implemented well, otherwise it is more annoying for the user than useful. The thumbbar buttons specifically are only really useful for operations that do not require any activation of the application window in question. So what could this be used for in the LabVIEW project window? Starting a VI? Starting or stopping a compile build? Maybe the cancellation of a running build, but anything else needs more context, such as which VI to start, or which of the potentially several target builds to start, etc., so it needs activation of the corresponding window and context-specific selection for the action, and then the buttons make absolutely no sense. Clicking the thumbnail icon to activate the window and do whatever needs to be done is much more intuitive than selecting a possibly obscure thumbbar button that switches over to a dialog or the main VI in question to require the user to select in more detail what he wants to do.

      Attached is a first version of the library. Documentation is a little scarce at this stage, but it should be possible to figure out the most important functionality by looking at the two examples. And before anyone complains that the included DLL can't be loaded on their machine: this DLL was compiled using Visual C 2005 and therefore requires the MS C runtime library version 8.0.x. The attached vcruntime8.0.zip file contains installers for both the 32-bit and 64-bit versions of the MS C redistributable runtime libraries. Install whichever version matches your LabVIEW system for the DLL to work. These installers are the most recent VC 8.0 runtime libraries officially available from Microsoft. lvtaskbar.zip vcruntime8.0.zip
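
      To give an idea of how awkward the raw API is, here is a minimal C sketch of adding a single thumbbar button through ITaskbarList3 (assuming COM is initialized and you have a valid top-level window handle; error handling mostly omitted):

         #define COBJMACROS
         #include <windows.h>
         #include <shobjidl.h>

         /* Sketch: attach one thumbbar button to a top-level window. The
            window must additionally handle the "TaskbarButtonCreated"
            message and WM_COMMAND with HIWORD(wParam) == THBN_CLICKED
            to actually receive button clicks. */
         static HRESULT add_thumb_button(HWND hwnd, HICON icon)
         {
             ITaskbarList3 *tbl = NULL;
             HRESULT hr = CoCreateInstance(&CLSID_TaskbarList, NULL,
                                           CLSCTX_INPROC_SERVER,
                                           &IID_ITaskbarList3, (void **)&tbl);
             if (FAILED(hr))
                 return hr;

             hr = ITaskbarList3_HrInit(tbl);
             if (SUCCEEDED(hr))
             {
                 THUMBBUTTON btn = {0};
                 btn.dwMask  = THB_ICON | THB_TOOLTIP | THB_FLAGS;
                 btn.iId     = 1;              /* reported back in WM_COMMAND */
                 btn.hIcon   = icon;
                 btn.dwFlags = THBF_ENABLED;
                 lstrcpyW(btn.szTip, L"Do something");

                 /* Buttons can only be added once per window; afterwards
                    only ThumbBarUpdateButtons can modify them. */
                 hr = ITaskbarList3_ThumbBarAddButtons(tbl, hwnd, 1, &btn);
             }
             ITaskbarList3_Release(tbl);
             return hr;
         }
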
  18. Well, out of curiosity I did spend a few hours on this. But it seems Windows is effectively denying any cooperation in making this functionality work from within LabVIEW. I can create a small Windows executable that uses the functions to assign an imagelist and the thumbbar definitions for the thumbbar buttons just fine, and I can easily use most of the other TaskbarList methods from within LabVIEW, such as the progress bar functionality, but any attempt to set the imagelist for the buttons from within LabVIEW fails with a very unhelpful E_FAIL error. Not sure what that would really be.
  19. Is it? And what did you learn from this awesome look behind the curtains? Yes, there is a function you can use to translate an error code into an error string. But the VI that you looked at already does that for you, without the need to bother with the correct calling convention, parameter type setup and a few other nasty C details. Not sure I see the awesomeness here, other than feeding your own curiosity and finding more ways to shoot yourself in the foot.
  20. What the name says: LabVIEW. Basically, LabVIEW exports a lot of so-called manager functions that can be called from C code, such as when you write a DLL (or shared library on non-Windows systems). A lot of those manager functions are described in the External Code Reference Manual, which comes as part of the help files in your LabVIEW installation. The LabVIEW library name is a special keyword that tells the Call Library Node to link to whatever the current LabVIEW execution kernel is (LabVIEW.exe in the IDE, lvrt.dll in a built app). Note the case of the letters, which needs to match the official spelling exactly. And before you ask: LabVIEW exports more functions than are described in the manual, such as the one you found. Some are used internally by LabVIEW VIs, but those are usually password protected. Sometimes one slips through the cracks. Most of those undocumented functions make no sense to be called outside of a very specific context, and some are downright harmful if not called in a very specific way. A lot of them are really only exported so that LabVIEW can use them itself as a sort of callback, and they don't really make any sense to be called from a VI diagram. A typical use of the documented manager functions is sketched below.
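
      As an example of what calling those documented manager functions looks like from C (a minimal sketch along the lines of the External Code Reference Manual; the helper name FillString is mine):

         #include "extcode.h"   /* ships with LabVIEW in the cintools directory */
         #include <string.h>

         /* Hypothetical helper: copy a C string into a LabVIEW string handle,
            resizing it through the LabVIEW memory manager first. Compiled into
            a DLL, these manager calls resolve against the LabVIEW kernel. */
         MgErr FillString(LStrHandle str, const char *text)
         {
             int32 len = (int32)strlen(text);
             /* uB is the manager type code for unsigned 8 bit integers */
             MgErr err = NumericArrayResize(uB, 1, (UHandle *)&str, len);
             if (err == noErr)
             {
                 MoveBlock(text, LStrBuf(*str), len);
                 LStrLen(*str) = len;
             }
             return err;
         }
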
  21. Well, easy! But the hardest part, getting the SCC Provider Interface figured out, is indeed done. Interfacing to SVN through the command line client isn't too complicated, but I was at some point looking to integrate it completely as a DLL, and that is quite a different story. Since that is also a potential maintenance nightmare, I abandoned the approach completely. SVN normally guarantees backwards compatibility for the SVN command line interface, but no such guarantee exists for the binary API.
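
      Driving the command line client from C then boils down to something like this (a sketch, assuming svn is on the PATH; a real provider would parse the output instead of printing it):

         #include <stdio.h>

         /* Sketch: run an svn command and read its output line by line.
            _popen/_pclose is the Windows CRT spelling; POSIX uses popen/pclose. */
         static int run_svn_status(const char *workingCopy)
         {
             char cmd[1024], line[1024];
             snprintf(cmd, sizeof cmd, "svn status --non-interactive \"%s\"", workingCopy);

             FILE *pipe = _popen(cmd, "r");
             if (!pipe)
                 return -1;

             while (fgets(line, sizeof line, pipe))
                 printf("%s", line);   /* a real SCC provider would parse this */

             return _pclose(pipe);
         }
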
  22. If you follow that thread and go to the last post, you can see that Ton has actually already released both the provider and the API in the Code Repository. So you just need to download it and give it a test drive.
  23. Daklu, I'm not trying to be difficult here, but I would rather like to understand how a LVOOP singleton would have much less of a dependency tree effect here. Yes, Obtain and Release would be two different method VIs, and the data dependency between them would be through the LVOOP object wire instead of encapsulated in the single FGV. But that would decouple only the Obtain and Release operations as far as the hierarchy tree is concerned, not the fact that you use this object in various, possibly very loosely coupled clients. I say loosely coupled since they obviously have at least one common dependency, namely the protected resource in the singleton. And while easy extensibility is always nice to have, I'm not sure I see much possibility for it in such singleton objects. Would you care to elaborate on the dependency tree effect in this specific case, and maybe also give an example of a desirable extension that would be much harder to add to the FGV than to the LVOOP singleton?
  24. Well, an FGV is the most trivial solution in terms of coding effort. It's not as explicit as a specific semaphore around everything and not as OOP as a true singleton class, but in terms of LabVIEW programming it is something truly tried and proven. I would also think that it is probably the most performant solution, as the locking around non-reentrant VIs is a fully inherent operation of LabVIEW's execution scheduling, and I doubt that explicit semaphore calls can be as quick as this (see the sketch below for a textual analogy). Also, for me it is a natural choice since I use them often, even when the singleton functionality isn't an advantage but a liability, simply because I can whip one out in a short time, control everything I want, and don't need to dig into how LVOOP does things. And reading people's complaints about unstable LabVIEW IDEs when used with LVOOP doesn't exactly make me want to run for it either. I know this sounds like an excuse, but the fact is that I have apparently trained myself to use LabVIEW in a way that exposes very little instability, unless I'm tinkering with DLLs, and especially self-written DLLs during debug time, but that is something I can't possibly blame LabVIEW for.
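
      For readers who don't use LabVIEW: in C terms an FGV behaves roughly like a function with static state whose whole body runs under a lock, except that in LabVIEW the lock is implicit, because only one caller at a time can execute a non-reentrant VI. A sketch:

         #include <pthread.h>

         typedef enum { CMD_OBTAIN, CMD_RELEASE } Command;

         /* Sketch of an FGV in C: static state plus a lock around the whole
            body. In LabVIEW the mutex doesn't exist as code; the execution
            scheduling of the non-reentrant VI gives the same guarantee. */
         static int fgv_resource(Command cmd)
         {
             static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
             static int refCount = 0;   /* the protected state */
             int value;

             pthread_mutex_lock(&lock);
             switch (cmd)
             {
                 case CMD_OBTAIN:  refCount++; break;
                 case CMD_RELEASE: if (refCount > 0) refCount--; break;
             }
             value = refCount;
             pthread_mutex_unlock(&lock);
             return value;
         }
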