Everything posted by Rolf Kalbermatter
-
Quit LabVIEW existed long before LabVIEW allowed you to build executables. As such, its intention was certainly not to be used in executables in the first place. And exactly because of that, there had to be a way to shut down the entire IDE: all experiments were executed in the development system, since that was the only way to execute VIs, so if you wanted to build an experiment that shut itself down completely after execution, you needed a way to exit the development environment too. With the advent of the Application Builder and the ability to create executables, the Quit LabVIEW node became pretty much obsolete, but it wasn't a big enough legacy burden to actually be removed.

The root window is a hidden window that handles all the message interaction with the operating system. It is a LabVIEW speciality to implement a Mac OS behavior, where system messages are always sent to the root loop of a process. So they created a hidden root window that does this root loop handling on Windows, such that the rest of the LabVIEW messaging code could stay pretty much unchanged from the code used for Mac OS. There is an INI setting, hideRootWindow=True, that you can add to your executable's INI file to keep the root window button from appearing in the taskbar, although I'm not sure that is still necessary in recent LabVIEW versions.

As to your problem, it looks to me very much like a corrupted installation of the runtime system or of some driver you are accessing in your application. Do you access any hardware in your app? Or ActiveX components or DLLs? Any of these could cause this problem if you don't properly close any and all resources that you open during the application's lifetime. LabVIEW has no way to force a DLL or ActiveX component out of memory if that DLL thinks it still needs to stay in memory because a resource it provides hasn't been closed properly.
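For reference, the setting goes into the executable's INI file under the section named after the executable; assuming an application called MyApp.exe (a hypothetical name), it would look like this:

```ini
[MyApp]
hideRootWindow=True
```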
-
I'm not sure where you want to go with this. Speculating about intentions when there is a specific result that has been there since at least LabVIEW 2.0 is IMHO a moot point. To me a string wire always looked like a meander, though not strictly, as it morphs into a somewhat different pattern in vertical lines. Would it worry me? No, absolutely not, as long as it looks different enough from anything else to allow me to distinguish it from other datatypes. Ever looked at clusters and their "non-flat" variants? Flat clusters are clusters that can be typecast, and they are brown, while non-flat clusters cannot be typecast but only flattened, and they are pink too. And the format you claim was NI's real intention has no merit, since the borders are too small to be drawn. The line itself is already only one pixel wide, and a pixel is still the smallest unit that can be drawn on modern screens, so your small borders simply cannot be drawn. Intention or not, it's not what LabVIEW does, and therefore any discussion about what the intention may have been is pretty useless.
-
User refnums are for instance used by the DOM XML library. They are indeed not documented, but they are not so much a LabVIEW API to call as a combination of an external shared library with a specific API interface and a text document describing that API to the LabVIEW object manager, such that the DLL gets linked in properly when you use Property and Method Nodes on the according user refnum. It's a powerful tool to extend LabVIEW with libraries without many LabVIEW VIs involved. It has worked from LabVIEW 7 through 2011 without real issues, but there is no guarantee that it won't be axed in a coming version.

While it's theoretically imaginable to interface to an SQL database through a script node, I think it is highly impractical. The script node, at least in the version documented in lvsnapi.h, which is the only information I have available, is meant to work on a session context local to the particular script node. That is much like your SQLite connection, and passing this connection around to various script nodes is highly complicated and also delivers no real benefit, since the script content is static. You can't change the script text at runtime, not even with VI scripting, as that is considered an edit operation that can only occur at edit time. So you end up setting your database interface in stone, which is very seldom how you want to access databases. At least some of the query parameters are usually dynamic, and while you could pass those into the script node as parameters, your script node needs to be able to interpret the entire script, so you need some parser too. The script node interface simply receives the text and the list of parameters and has to do something with them. Also, the supported parameter types are somewhat limited. So you end up either with a script node that can only contain the SQL text you pass to a method, and that always implements one specific SQL statement sequence, or you need to add some intermediate parser that gives you more flexibility in what you can put into the script node besides the SQL statements.
-
Personally I don't find subroutine priority an issue, if applied sparingly and very specifically. But once someone starts to apply it to just about every function in a library, that library becomes a clear trashcan candidate in my eyes. Blind optimization like this is about 10 times worse than no optimization at all.

If there are specific functions, like in this case a function that might retrieve a single data item from a result set and is therefore potentially called thousands of times in a normal operation, subroutine priority may make sense, if you know for sure that this function is fast and uninterruptible. By fast I mean that the function should not go through an entire hierarchy of driver layers and whatnot to do its task, and it should not involve any operation that may block or be interrupted, such as any I/O operation like disk or, even worse, network access. If you know that the function accesses already prepared data stored in the database refnum or result set refnum, then a subroutine VI is a responsible choice, but otherwise it is just a disaster waiting to happen. Also consider that subroutine VIs are not debuggable anymore, so you really don't want that throughout your entire LabVIEW VI library. Applying subroutine priority to VIs that are not known for sure to execute very repeatedly in loops is laziness and wrongly applied optimization, with nasty costs such as making the library hard to debug and potentially locking yourself up completely.

As to fixing your threading issue with retrieving error information, my choice here would be to write a C wrapper around the SQLite DLL that returns the error code as the function return value, and since I'm already busy, it would also take care of things like LabVIEW-friendly function parameters where necessary, semaphore locking of connections and other refnums where useful, and even the dynamic loading of selectable SQLite DLLs if that were such a dear topic to me. And I might create a solution based on user refnums, so that the entire access to the interface is done through Property and Method Nodes.
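A minimal sketch of that idea (lv_sqlite_step is a hypothetical wrapper name; the only real API used is SQLite's own): returning the result code as the function's return value keeps the call and its error code together, so no second, racy sqlite3_errcode() call is needed.

```c
#include "sqlite3.h"

/* Hypothetical LabVIEW-friendly wrapper: the SQLite result code travels
   back as the return value, so another thread using the same connection
   cannot overwrite it between the call and a later error query. */
int lv_sqlite_step(sqlite3_stmt *stmt, int *row_available)
{
    int rc = sqlite3_step(stmt);
    *row_available = (rc == SQLITE_ROW);
    /* SQLITE_ROW and SQLITE_DONE are normal outcomes, not errors */
    return (rc == SQLITE_ROW || rc == SQLITE_DONE) ? SQLITE_OK : rc;
}
```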
-
What exactly is SuperSecretPrivateSpecialStuff for?
Rolf Kalbermatter replied to Sparkette's topic in LabVIEW General
Maybe start reading this. I can't say I understand it on more than a very superficial level. Personally I think this node is likely the result of exposing some internals in order to help debug the DFIR and LLVM functionality without always having to dig into the C++ source code debugger itself. Following memory dumps in the C source debugger is immensely frustrating and error prone, so having a way to see what the DFIR algorithm produced makes debugging optimization problems so much easier. Without access to the GenAPI and the LLVM integration into it, however, the result is likely not very useful to anyone. By the way, is the user name "xxx" in your dump the result of an after-the-fact edit to hide your identity, or of running such exercises in a special user account to avoid possible interference with the rest of your system? For someone in the know, the binary data at the beginning could also contain interesting clues. -
If you specify the path in the configuration dialog, the DLL is loaded at load time. If you specify it through the path parameter, it is loaded at runtime.
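The same distinction exists in plain C on Windows, which may make it more tangible: configuring the library name in the dialog corresponds to normal import-table linking resolved at load time, while wiring the path corresponds to explicit runtime loading like this (mylib.dll and its add() export are hypothetical):

```c
#include <windows.h>
#include <stdio.h>

typedef int (__cdecl *AddFunc)(int, int);

int main(void)
{
    /* Runtime loading, the equivalent of wiring the path parameter:
       nothing is resolved until these calls actually execute. */
    HMODULE mod = LoadLibraryA("mylib.dll");      /* hypothetical DLL */
    if (mod == NULL)
        return 1;
    AddFunc add = (AddFunc)GetProcAddress(mod, "add");
    if (add != NULL)
        printf("2 + 3 = %d\n", add(2, 3));
    FreeLibrary(mod);
    return 0;
}
```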
-
For protection from relays you should definitely look at reverse diodes across the coil. When you switch off a coil, it always produces a reverse EMF, and that can be a high multiple of the operating voltage; you easily end up with over 100 V of reverse voltage across a 12 V relay. I believe it's this reverse EMF that could cause the effects you describe, if it doesn't destroy the driver transistor first. Additional protection can be achieved with ferrite beads or ferrite filters that remove the high-frequency components created when the relay switches off and the reverse EMF is suddenly applied. Even though the protection diode will limit that voltage and allow the current to dissipate over time, there are still high-frequency components from the switching that can travel through the circuitry and into your computer unless you put some filters in that path. Also important, of course, is a solid ground plane. If you force those relay currents through small traces and don't at least connect the grounds in some star formation, you can end up with huge transient ground voltage differences during the switching.
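Just to put a rough number on that claim (a back-of-the-envelope illustration with assumed values, not measurements from this circuit): with a coil inductance of 100 mH, a coil current of 100 mA, and the current collapsing in about 100 µs,

$$|V| = L\,\frac{di}{dt} = 0.1\,\mathrm{H} \times \frac{0.1\,\mathrm{A}}{100\,\mathrm{\mu s}} = 100\,\mathrm{V}$$

which is how a 12 V relay can easily generate a spike of 100 V or more.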
-
Well, even in open source projects it's often the case that the developers have created helper tools that they don't distribute openly. And no, as long as you are the developer of the code and don't distribute the result, there is no open source license that obligates you to distribute the source, not even the GPL. In the case of commercial applications, it's total fantasy to expect or even hope to get all the internal tools of the software manufacturer too. That would mean, among other things, license generators and whatnot, and you know where that would lead.
-
Create your own Generic VI (like Randomize 1D Array)
Rolf Kalbermatter replied to Sparkette's topic in LabVIEW General
Well, how do you think they did the first controls of a new data type? Probably something like hand-coding them with a specially compiled LabVIEW that has extra tools included. And about how the compiler gets confused, I'm sure you will never hear detailed information. For one thing it is NI internal, and for another, unless you understand a project like LLVM from the ground up, a more technical explanation would make no sense to you. Go study LLVM, and once you understand it, you may be qualified to understand at least in part what might go wrong there. -
I can only echo slacter's recommendation. I have never used the Quit LabVIEW primitive in my 20 years of LabVIEW programming, other than to try it out. In the development environment I don't normally want to quit LabVIEW anyhow, and in a built application the executable terminates as soon as you close the last front panel. So closing the front panel is usually the last thing I do in my main VI after all loops have exited.
-
What exactly is SuperSecretPrivateSpecialStuff for?
Rolf Kalbermatter replied to Sparkette's topic in LabVIEW General
I'm pretty sure it is what the Microprocessor C Development Toolkit uses, and as such this function will not do much useful if you don't have a valid license for that toolkit. Remember that LabVIEW has a license management system (at least under Windows) and that much of this functionality is protected through it. So if you have the license for the toolkit, you have the functionality much more conveniently available in the Tools menu, and if you don't have the license, this method won't do anything but return an error. -
All the documentation for that is in the C source code of the LabPython DLL in the SourceForge repository. But note that you can't get away without writing a very specific DLL. And LabPython does some very tricky dynamic loading to separate the actual LabPython core functionality from the script node plugin. Without that you get into trouble, since the LabVIEW VIs wouldn't look inside the script node plugin directory for the DLL.
-
Now, 25 ns really amazes me! That for a loop that needs to compare several dozen characters. It is probably optimized to operate on 4-byte integers instead of on individual characters. Or maybe LabVIEW nowadays uses dirty flags for its data handles, though that seems rather unlikely; an Always Copy node on the path wire before it is passed to the CLN should eliminate any cached dirty flags. And that LVOOP might be an important part of the picture would not surprise me at all.
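To illustrate the kind of optimization I'm guessing at here (my speculation, not LabVIEW's actual code): a buffer comparison that walks 4-byte words instead of individual bytes, falling back to a byte compare for the tail.

```c
#include <stdint.h>
#include <string.h>

/* Speculative sketch: compare two equally sized, 4-byte aligned buffers
   word by word instead of byte by byte. Not LabVIEW's actual code. */
int buffers_equal(const void *a, const void *b, size_t len)
{
    const uint32_t *wa = a;
    const uint32_t *wb = b;
    size_t words = len / sizeof(uint32_t);

    for (size_t i = 0; i < words; i++)
        if (wa[i] != wb[i])
            return 0;           /* early exit on first difference */

    /* compare any remaining tail bytes */
    return memcmp((const uint8_t *)a + words * sizeof(uint32_t),
                  (const uint8_t *)b + words * sizeof(uint32_t),
                  len % sizeof(uint32_t)) == 0;
}
```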
-
Always nice to have real numbers. I guess my estimates still date from my days of working with 66 MHz i486 CPUs. A modern dual core should smash that to pieces, of course.
-
What exactly is SuperSecretPrivateSpecialStuff for?
Rolf Kalbermatter replied to Sparkette's topic in LabVIEW General
Why? This color states the situation of those nodes much more clearly than your suggestion. It screams at the user: go away, don't look at me, don't even think about it!!!!! Besides, the color you suggest is already used in some of my company's internal libraries, so I have first use rights on it! -
What exactly is SuperSecretPrivateSpecialStuff for?
Rolf Kalbermatter replied to Sparkette's topic in LabVIEW General
In general, unlike the scripting stuff, these are more general methods and properties added for various reasons during the development of LabVIEW, usually to allow a certain LabVIEW tool to do something. Those methods, while there, do not receive the same attention in terms of maintenance, unit test coverage and, of course, documentation. They can and sometimes do break under use cases other than the intended ones, are left out in the cold when NI creates a new LabVIEW version, and are simply the unloved stepchild in terms of care and maintenance in general. Whoever added them for their specific tool is responsible for making sure they keep working in newer releases, but it's very likely that a developer adding a new feature to LabVIEW isn't aware of some of them in the first place and breaks them horribly in the process, and because of the limited unit test coverage such a breakage may not get discovered. So in conclusion: play with them, have fun, and enjoy the feeling of having a privileged inside view into LabVIEW internals, but DON'T use them for anything you want to work across new LabVIEW versions without breaking your code, especially if you plan to develop something that might end up being used by people other than yourself. -
Let me comment on some of these things. Full disclosure: I'm currently maintaining LuaVIEW, and I'm the lone LabPython programmer, who wrote it in the first place to find out how the script node could be used by someone outside of NI. And once I had that, I realized that wrapping those functions into VIs would allow real dynamic access to the Python engine. At about the same time my colleague started to develop LuaVIEW for a rather large customer project. We had quite some fun arguing over whether Lua or Python was the better language. While that is a matter of personal taste, it is clear that Lua is a very self-contained and extremely compact scripting environment that is much easier to embed in other systems like LabVIEW. In fact Python, at that time at least, had no real intention of actively supporting the embedding of its engine into other environments. The API was there and it could be done, but the Python developer community was in general unresponsive to any suggestions for improvements in that area.

Unlike LabPython, LuaVIEW does NOT have a script node interface but only a VI interface, which not only allows but in fact requires you to pass a script at runtime. While LuaVIEW doesn't do that out of the box currently, it would not be too complicated a project to develop. But I'm not convinced of the need for it. Apart from the fact that LuaVIEW is free for non-commercial use, the initial purchase cost is usually the smallest part of a project's cost. Any decent software developer will burn through the cost of a commercial LuaVIEW license in two days of programming an alternative solution, and two days is very little time for something like a scripting engine.
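To show what "easy to embed" means in practice, this is all it takes to host a Lua interpreter in a C program, using only the standard Lua C API (a minimal sketch, error handling omitted):

```c
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

int main(void)
{
    lua_State *L = luaL_newstate();  /* one self-contained interpreter */
    luaL_openlibs(L);                /* load Lua's standard libraries */
    luaL_dostring(L, "print('hello from embedded Lua')");
    lua_close(L);
    return 0;
}
```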
-
If you think about it for a few seconds, you will recognize that this is true. When a path is passed in, LabVIEW has to verify at every call that the path has not changed with respect to the last call. That is not a lot of CPU cycles for a single call, but it can add up if you call many Call Library Nodes like that, especially in loops. So if the DLL name doesn't really change, it's a lot better to use the library name in the configuration dialog, as LabVIEW will then evaluate the path only once at load time and never again afterwards. If it didn't do this check, the performance of the Call Library Node would be abominably bad, since loading and unloading DLLs is a real performance killer, whereas this path comparison is just a micro delay in comparison. If I had to guess, a non-changing diagram path adds maybe 100 µs, maybe a bit more; compare that to the overhead of the Call Library Node itself, which is in the range of single microseconds. Comparing paths for equality is the most expensive comparison operation, as you can only determine equality after comparing every single element and character in them. Inequality takes on average half the execution time, since you can break out of the comparison at the first occurrence of a difference.
-
I have implemented a system based on TCP communication in a similar way to the STM Reference Design from NI. Technically, however, the cRIO (or Compact FieldPoint) controller is the server, and the PC(s) are the clients. This has worked out quite well for isolated systems, meaning I haven't used it with a multitude of RT controllers on the same subnet. Instead, what I typically have is one or two RT controllers that don't really talk to each other, and one or more operator stations and touch panel monitors that communicate with the controller(s) over this link. The communication protocol of course allows data transfer for the underlying tag-based system, similar to the CVT Reference Design, as well as resetting and shutting down the controller and updating the CVT tag configuration. Since it only operates on isolated subnets, I have not implemented any form of authentication in the protocol itself. NSVs, or their bigger brother Network Streams, are interesting when quickly putting a system together, but I like to have more control over how the system is configured and operates, and I have even created a small Android client that can communicate directly through my protocol, something you simply can't do with proprietary closed source protocols.
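For what it's worth, the framing idea behind such an STM-style protocol can be sketched in a few lines of C (POSIX sockets; the exact field layout here is an assumption for illustration, not NI's actual wire format):

```c
#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>

/* Assumed STM-like framing: a 4-byte big-endian payload size, a 2-byte
   meta-data ID that identifies the tag, then the flattened tag data. */
int send_tag(int sock, uint16_t meta_id, const void *data, uint32_t len)
{
    uint8_t  header[6];
    uint32_t nlen = htonl(len + (uint32_t)sizeof(uint16_t)); /* size covers ID + data */
    uint16_t nid  = htons(meta_id);

    memcpy(header,     &nlen, sizeof nlen);
    memcpy(header + 4, &nid,  sizeof nid);
    if (send(sock, header, sizeof header, 0) != (ssize_t)sizeof header)
        return -1;
    return send(sock, data, len, 0) == (ssize_t)len ? 0 : -1;
}
```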
-
Well, the rotation can be handled by a Transpose 2D Array node, I would assume; that's not a big problem right now. And if you create a U8 greyscale IMAQ image, you had better connect a U8 data array to the U8 input of your IMAQ ArrayToImage function. But why did you say your numeric values are integers between 0 and 255? The Z value in the intensity graph cursor display only ever shows 0 or 1. So I very much doubt that your values are between 0 and 255; they are definitely not U8 but rather floating point values. What are the minimum and maximum values in your 2D array?
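If the values do turn out to be floating point in a 0-1 range, the scaling needed before wiring them to the U8 input boils down to this (a generic sketch, not an IMAQ function; scale_to_u8 is a hypothetical name):

```c
#include <stdint.h>
#include <stddef.h>

/* Map floating point intensities in [lo, hi] onto the 0..255 range
   expected by a U8 greyscale image. Hypothetical helper for illustration. */
void scale_to_u8(const double *in, uint8_t *out, size_t n, double lo, double hi)
{
    double range = (hi > lo) ? (hi - lo) : 1.0;  /* avoid division by zero */
    for (size_t i = 0; i < n; i++) {
        double v = (in[i] - lo) / range * 255.0;
        if (v < 0.0)   v = 0.0;                  /* clip out-of-range data */
        if (v > 255.0) v = 255.0;
        out[i] = (uint8_t)(v + 0.5);             /* round to nearest */
    }
}
```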
-
Of course you should also select a compatible image type for IMAQ Create.vi. If your statement is true that the values are between 0 and 255, I would expect the image type Grayscale (U8) to work.
-
Well, IMAQ ArrayToImage.vi of course has several inputs. Depending on the input you use, you may need to scale the intensity data to get a reasonable result. Not having seen the data in your ASCII file yet, I can't really say much about what scaling you may need. But assuming you have, for instance, integer values between 0 and 255, you should connect the array to the U8 input; for values between 0 and 65535 you should connect it to the U16 array input, and so on. This VI will only create monochrome images, but I assume that is all you need, since the intensity graph really only displays single-plane data too.
-
Sorry, I mistyped there; it is 3.7.13. Are you adding any LabVIEW-specific C wrapper code to the DLL? Because if you don't, or if you separate that code into its own DLL, it really is just a drop-in replacement for sqlite3.dll. The GCC compiler itself is quite unlikely to be the culprit, as it is in itself quite agnostic of the underlying target platform. My guess would be the MinGW C runtime libraries, and specifically the startup stub that wraps your DllMain() function. It may be possible to avoid that by selecting a different C runtime option or target subsystem; I'm not sure if the MinGW toolchain provides different subsystem options for DLL targets. I'm also not really sure what build toolchain they use at SQLite for the released binaries, but I have some doubt that Richard would use anything not GCC based.
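For reference, this is the function that stub wraps; a minimal DllMain is standard Win32 C and has nothing MinGW-specific about it, which is why my suspicion falls on the runtime stub around it rather than on this code:

```c
#include <windows.h>

/* Minimal DLL entry point. The MinGW (or MSVC) C runtime inserts its own
   startup stub that runs before this, initializing the runtime state. */
BOOL WINAPI DllMain(HINSTANCE hinst, DWORD reason, LPVOID reserved)
{
    (void)reserved;
    if (reason == DLL_PROCESS_ATTACH)
        DisableThreadLibraryCalls(hinst);  /* skip per-thread notifications */
    return TRUE;
}
```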
-
Call Library node calling "LabVIEW"
Rolf Kalbermatter replied to Sparkette's topic in LabVIEW General
I can't speak for NI and don't know that document, but I'm 99.99% sure it has a big watermark across the front page stating "Company Confidential". Quite possibly this watermark is repeated on every single page. Besides that, it would be of no significant use to us LabVIEW users, as it refers, among other things, to various places in the LabVIEW C++ source code where specific provisions need to be added for a new node, the internal daily unit test framework run that needs to be extended to test the new node, the fact that the documentation department needs to be informed about writing a new help section for the node, and probably a few other things that would leave everyone not working on the LabVIEW team flabbergasted. And that document is most likely not the most popular bedtime reading of any LabVIEW development team member either. In other words, if you want to see this document, you will need to apply at NI as a LabVIEW developer and hope to be accepted. -
Things aren't usually as simple as they seem, or as some tagline says that I read here or on the NI forum: "If a problem seems simple, I haven't understood the problem yet." NI can't just take an existing property node and change its behavior without a lot of thought. Otherwise, applications that worked in previous versions would suddenly start to do very weird things after upgrading to a new version. So they pretty much have to leave property nodes alone as soon as they let them out into the wild. Chances are that there was a real brainstorming session about exactly this when they added the scrollbar to the plot legend, and that several smart heads on the team came up with several reasons why changing the "Number of Rows" property to reduce the number of plots is not a good idea in that case. They therefore added the "Legend:Plot Minimum" property to the graph, which should do what you want, if I understand your problem correctly.

And that an application engineer doesn't always know about every possible property out there is not that amazing either. They can't spend an hour on every support call, or their manager starts breathing down their neck about why they handle so few support calls. And since the enhanced plot legend is a new feature in 2011, it is not very likely that any of the other AEs within at least 50 cubicles' distance would know the answer off the top of their head either.

I have to admit I have trouble imagining a mechanism that would allow that much customization of controls without opening up the LabVIEW object handling at the C++ API level, with all the nasty chances of NULL pointer exceptions and out-of-bounds memory accesses, as well as a versioning nightmare if you want these controls to survive the move from LabVIEW 20xx to 20xx + 1. And it would definitely be even more complex than XControls. LabVIEW had just such an API in its early days, which exposed the front panel object event dispatch table to external code. But this dispatch table had to be modified with every new version of LabVIEW, which made the idea of external controls based on it quite useless, since they wouldn't have survived an upgrade to a new LabVIEW version. So that interface was left in limbo in LabVIEW 4 and entirely removed around LabVIEW 5.