Everything posted by Rolf Kalbermatter
-
Create your own Generic VI (like Randomize 1D Array)
Rolf Kalbermatter replied to Sparkette's topic in LabVIEW General
Mathematics is tricky. Rounding towards zero is possible with the floor() operation, but that alone is not sufficient. Since Randomize() produces values between 0 and 1 with both ends inclusive, you can still end up with a value outside the range every now and then. Also, just as trivia: there are in fact two round() conventions used in mathematics, one always rounds 0.5 up, the other rounds towards even numbers. The second is meant to reduce the effect of rounding errors in combined mathematical calculations where multiple roundings occur. -
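The two rounding conventions and the out-of-range pitfall can be sketched in Python (not LabVIEW; the function names here are illustrative, and note that Python's random() excludes 1.0 while LabVIEW's random number includes both ends):

```python
import math
import random

def random_index_buggy(n):
    # round() of a value in [0, n] can yield n itself,
    # producing an out-of-range index every now and then
    return round(random.random() * n)

def random_index_fixed(n):
    # floor() of a value in [0, n) stays within 0..n-1; min() guards the
    # rare case where the generator returns exactly 1.0 (LabVIEW's
    # random number is inclusive at both ends, unlike Python's random())
    return min(math.floor(random.random() * n), n - 1)

# The two round() conventions mentioned above:
print(round(0.5), round(1.5), round(2.5))  # Python rounds half to even: 0 2 2
print(math.floor(0.5 + 0.5))               # "round half up" via floor:  1
```

The round-half-to-even variant keeps the average rounding error near zero over many operations, which is exactly the error-reduction effect described above.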
Well, my Dutch is worse. I'm originally Swiss, with Swiss German as my mother tongue; then I had to learn French, of which I remember very little, then ventured into English and finally Dutch. So while I'm still fairly good in German, and a bit less so in English and Dutch, the mix of all these languages hasn't helped the grammatical correctness of any of them.
-
How exact is exactly? If you are talking about microseconds, you had better employ an RT system or some hardware timer. If seconds are enough, a little (abortable) wait loop is all you need.
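The "abortable wait loop" pattern mentioned above might look like this in Python (a minimal sketch, not LabVIEW; the function name and granularity are assumptions):

```python
import threading
import time

def abortable_wait(seconds, abort_event, granularity=0.1):
    """Wait up to `seconds`, polling in small slices so an abort request
    can cut the wait short. Returns True if aborted, False otherwise."""
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        remaining = deadline - time.monotonic()
        # Event.wait() returns True as soon as the event is set
        if abort_event.wait(timeout=max(0.0, min(granularity, remaining))):
            return True
    return False

abort = threading.Event()
abort.set()                        # request the abort up front
print(abortable_wait(5.0, abort))  # True: the 5 s wait was cut short
```

The granularity bounds how long an abort can be delayed, which is the whole point of waiting in a loop instead of one long sleep.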
-
Using a Method node set to operate on a VI reference and executing the FP.Close method, after everything else in the VI has finished?
-
Thanks a lot. And if anyone wonders: yes, I fixed it, as I prefer to fix grammatical errors whenever I can.
-
It has been well known in this group, as well as in the NI groups, for several years already that the VI password is not meant to protect your multibillion dollar IP from prying eyes. It's been like that for quite a while, and there is fundamentally no way to make it significantly more secure. Security is not even the right term here: it's not about security at all, but at most about hiding. So don't bring up the topic of security in this respect, as it only misleads newbies into believing that there has to be a way to secure their code from viewing by others. There simply isn't! It's at most a complication, but it can never be security.

And the reality is that looking at password protected VIs can give you a great feeling of satisfied curiosity, but it cannot give you a real advantage in knowledge, because anything you learn can and often does get changed between LabVIEW versions. So building your skills that way is a very short-lived success, with potentially huge liabilities later on. Especially if you boast about it, as that might be the drop that makes the bucket overflow and causes someone at NI to look at it and decide that this is a feature that has lived in the dark for too long already and is not worth spending more time on, so it gets axed completely. Your boasting only makes the potential benefit of being in the know even less beneficial in the long term. There are hardly tens of thousands of password protected VIs in the whole world, so it is quite meaningless whether one tool can remove the password from 100 VIs per minute and another can do 1000. Finding those VIs will cost you a multiple of that time in the first place.
-
Best Temperature Controller with Labview?
Rolf Kalbermatter replied to Ano Ano's topic in LabVIEW General
I mostly use the 24xx drivers from NI with some minor tweaking sometimes. Might just as well be based on yours. -
Best Temperature Controller with Labview?
Rolf Kalbermatter replied to Ano Ano's topic in LabVIEW General
They are the better ones. It can be tricky to get them hooked up and working through serial communication, but that is a problem you have with all serial devices in one way or another. Once they work, they just tend to sit there and keep working until the entire system is dismantled, even if that is 15 years later. I had different experiences with West controllers (not my choice). The client who has them in his system has to replace at least one of them every half year, not only because it ceases to communicate over its communication link, but often because it simply starts to malfunction in its core task of controlling the process accurately. That is an expensive cost saving! -
The old problem of character encoding when crossing application borders. Why not create two polymorphic VIs: one specifically doing the conversion from the current locale to whatever the DB is using as default, and one passing the string through entirely unaltered for the case where the user knows his data is already in the right encoding. Even more useful, although almost impossible to implement fully, would be if you could specify the source encoding and the VI did all the necessary conversion to whatever the DB encoding is supposed to be. This is already a nightmare to do when the DB encoding stays constant, but if that can be configured too, then OMG!!!
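The two conversion paths described above can be sketched in Python (illustrative only; the function names and the cp1252/UTF-8 encoding pair are assumptions, standing in for "current locale" and "DB default"):

```python
def to_db_encoding(data: bytes, local_encoding: str = "cp1252",
                   db_encoding: str = "utf-8") -> bytes:
    # Decode the raw bytes using the local encoding, then re-encode
    # them in whatever the database expects as its default
    return data.decode(local_encoding).encode(db_encoding)

def pass_through(data: bytes) -> bytes:
    # The caller guarantees the data is already in the DB encoding
    return data

# 'é' is the single byte 0xE9 in Windows-1252; in UTF-8 it becomes
# the two bytes 0xC3 0xA9
print(to_db_encoding(b"caf\xe9"))  # b'caf\xc3\xa9'
```

The pass-through variant exists precisely because any automatic conversion is wrong when the data was already converted upstream.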
-
You are contradicting yourself very much here. First you plug the link to that site, only to disqualify it two posts later while boasting about your own version that is so much better. 1000 password protected VIs per minute: wow! Just wondering where you want to get those 1000 VIs. A real hacker wouldn't boast about it, and if you were concerned about security you would inform the manufacturer of the software and hope they fix it, not post it all over the place.
-
Create your own Generic VI (like Randomize 1D Array)
Rolf Kalbermatter replied to Sparkette's topic in LabVIEW General
I think your proposed fix is not a fix but just a change of the bug. With your proposed fix you end up with an index in the range -1 to n-1 instead of 0 to n. So you replace the invalid index at the end of the range with one before the range. The -1 is better placed right after the Array Size function. -
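The three variants above, written out in Python for clarity (function names are illustrative; `n` stands in for the Array Size result):

```python
import random

n = 10  # array size

def index_bug(n):
    # round(rand * n) yields 0..n: an invalid index at the top end
    return round(random.random() * n)

def index_wrong_fix(n):
    # subtracting 1 after the round just shifts the range to -1..n-1
    return round(random.random() * n) - 1

def index_right_fix(n):
    # subtract 1 right after Array Size: round(rand * (n - 1)) stays 0..n-1
    return round(random.random() * (n - 1))

samples = [index_right_fix(n) for _ in range(100000)]
print(min(samples) >= 0 and max(samples) <= n - 1)  # True
```

Scaling the random value by n-1 before rounding keeps every possible result inside the valid index range, instead of merely moving the invalid value to the other end.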
Do you configure the CLNs to execute in any thread? At least MoveBlock() is a safe function to call like that; not sure about the sqlite functions, of course. And it would seem you are doing the getSize() call in any case, to determine whether the string contains the entire data? Also try setting the debug level in the MoveBlock() CLN to low. Once that call works, there is little benefit in debugging it. And of course a C wrapper that does the getSize(), DSNewHdl(), getPtr() and MoveBlock() all in one can't be beaten by any LabVIEW diagram code.
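The getSize()/getPtr()/MoveBlock() pattern can be mimicked in Python with ctypes, whose `memmove` is the direct analogue of LabVIEW's MoveBlock (a sketch only; the source buffer here stands in for the pointer a library call would return):

```python
import ctypes

# A C buffer standing in for the pointer returned by a getPtr()-style call;
# note the embedded NUL byte in the middle of the data
src = ctypes.create_string_buffer(b"hello\x00world", 11)
size = 11  # what a getSize()-style call would report

# Allocate a destination buffer of the reported size and copy the raw
# bytes across, embedded NUL included (this is what MoveBlock does)
dst = (ctypes.c_char * size)()
ctypes.memmove(dst, src, size)
print(bytes(dst))  # b'hello\x00world'
```

Because the copy length comes from the size call rather than from a NUL scan, the data after the embedded zero byte survives intact.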
-
Well, you have to have a memory buffer, here a string, to write into. So you either start with a string that is for sure long enough, and most of the time way too long, or you do a first API call to retrieve the necessary buffer length. Then creating that array or string buffer to pass to the retrieval function should be pretty much the same for strings or byte arrays. No need for a MoveBlock() usually.

Note: I see that the function in question seems to return a string pointer, so that seems to be the reason for the MoveBlock(). However, that function configured to return a string is very costly, as LabVIEW has to scan the whole string for the null byte, allocate a buffer, copy the string into it, and return that string buffer. Now there seems to be a function call beforehand to determine the expected size anyhow, to verify that the string returned all of the data. So we are at two function calls already. And if the VI then decides "drat, we didn't get all the data", it has to allocate a new buffer anyhow, copy the data with MoveBlock(), and throw away the incomplete string that was just created. This is really slow, and while it works, it's only acceptable if you don't care about performance or if it's clear that embedded zero bytes are a big exception. The retrieve-pointer API configured to return a pointer-sized integer will cost a few nanoseconds, but the same API call configured to return a string will be a lot more expensive, because it has to determine the string size, allocate a buffer, and do what you might end up doing with the MoveBlock() once more afterwards.

Returning a string simply can't be faster than returning the pointer itself plus a separate MoveBlock() call, because the string-return configuration is in fact simply the execution of a get pointer, allocate string buffer, and MoveBlock() of the data from the pointer into that buffer, with an additional strlen() call on the pointer to determine the assumed size, which might turn out to be too small. So the solution with always just three API calls will be simpler, in the worst case just about as quick, and in some cases considerably faster.
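The embedded-zero-byte problem described above is easy to demonstrate with Python's built-in sqlite3 module: size-based BLOB retrieval returns every byte, while a strlen()-style scan (which is what a string-return CLN parameter effectively does) stops at the first NUL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (data BLOB)")
payload = b"abc\x00def"                # data with an embedded zero byte
conn.execute("INSERT INTO t VALUES (?)", (payload,))

# Size-based retrieval (the sqlite3_column_bytes approach): all 7 bytes
(blob,) = conn.execute("SELECT data FROM t").fetchone()
print(len(blob))                       # 7

# A strlen()-style scan stops at the first NUL and loses the rest
truncated = blob[:blob.index(b"\x00")]
print(len(truncated))                  # 3
```

This is why the pointer-plus-size path is the only generally correct one for binary data, quite apart from the performance argument.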
-
Yair already answered this mostly. Those resources are compressed using the zlib deflate algorithm, and the reason is indeed speed. CPUs have been getting faster at a higher rate than hard disk interfaces can keep up with, so it's faster to load n bytes and push them through a decompression algorithm than to read 2 * n bytes directly. And no, newer LabVIEW versions can still read uncompressed VI files to be backwards compatible, but they do not save uncompressed VIs anymore. To understand the basic VI file format you can take a look at your Milch website, and an old Inside Macintosh file format book helps too, since LabVIEW mostly uses the Macintosh resource format for its VI files. But that only gives you a very rough overview of where the global things are, not how each of those resources is actually constructed.
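The speed argument rests on deflate shrinking the data enough that the smaller disk read plus decompression beats the larger raw read. Python's zlib module implements the same deflate algorithm (the payload here is an arbitrary compressible stand-in, not real VI resource data):

```python
import zlib

data = b"VI resource data " * 64     # compressible stand-in payload
compressed = zlib.compress(data)

# Fewer bytes cross the slow disk interface; the CPU inflates them
# far faster than the disk could have delivered the difference
print(len(compressed) < len(data))          # True
print(zlib.decompress(compressed) == data)  # True
```

How well this pays off depends on the compression ratio of the actual resource data, but structured VI resources tend to compress well.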
-
Badly in need of vacations. And btw, I'm old enough to have earned the right to be grumpy every now and then. While I understand his desire to dig deeper in some other posts he did, I have to say that I do not see any sense in trying to see a specific intention in a pattern on screen that was chosen over 20 years ago, for reasons that may be obscure or not. And I would also like to have a peek at the LabVIEW source code, except that I would most likely have to realize that it is just way over my head, so I pass on that opportunity and keep up the illusion that I might understand it.
-
Why would reading in the data as a string first be faster than reading it with MoveBlock() or into a byte array? It's in both cases one memory buffer copy. Actually, ByteArrayToString has a slight chance of being faster than Typecast. That may sound counterintuitive if you think of Typecast in the C way, but the LabVIEW Typecast is a lot more complicated than a C typecast. It maintains the memory information logically, but not necessarily physically as a C typecast would. For one thing, it always employs endianness swapping (not really an issue for byte data, as there is nothing to swap there), and in the case of typecasting an int32 to a float, for instance, it in fact involves double byte swapping on little endian machines (any LabVIEW machine at the current time, except maybe the PowerPC VxWorks cRIO systems): once from the native integer format to the inherent flattened format, and then again from the flattened format to the native floating point format. So a LabVIEW Typecast is anything but a simple C typecast.
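The double-swap route can be sketched with Python's struct module (a conceptual model, not LabVIEW internals: LabVIEW's flattened format is big-endian, so on a little-endian machine Typecast conceptually swaps into the flattened form and back out again):

```python
import struct

i = 0x3F800000  # the bit pattern of 1.0 as an IEEE-754 single

# A C-style typecast: reinterpret the native (little-endian) bytes directly
(native,) = struct.unpack("<f", struct.pack("<I", i))

# LabVIEW's Typecast conceptually flattens to big-endian first, then
# unflattens into the target type: two swaps on a little-endian machine
flattened = struct.pack(">I", i)             # native int -> flattened form
(labview,) = struct.unpack(">f", flattened)  # flattened -> native float

print(native, labview)  # 1.0 1.0 -- same result, more work along the way
```

The two swaps cancel out, which is why the result matches a plain reinterpretation; the cost is in the extra byte shuffling, not in the answer.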
-
Quit LabVIEW existed long before LabVIEW allowed you to build executables. As such, its intention was for sure not to be used in executables in the first place. And exactly because of that, there had to be a way to shut down the entire IDE, as any experiments were executed in the dev system, since that was the only way to execute VIs. So if you wanted to build an experiment that shut itself down completely after execution, you had to have a way to exit the dev environment too. With the advent of the application builder and the ability to create executables, the Quit LabVIEW node became pretty much obsolete, but it wasn't a big enough legacy burden to be really removed.

The root window is a hidden window that handles all the message interaction with the operating system. That is a LabVIEW speciality to implement a Mac OS functionality where system OS messages are always sent to the root loop of a process. So they created a hidden root window which does this root loop handling on Windows, such that the rest of the LabVIEW messaging could stay pretty much unchanged from the code used for Mac OS. There is an ini setting hideRootWindow=True that you can add to your executable's ini file to make the root window button not appear in the taskbar, although I'm not sure that is still necessary in recent LabVIEW versions.

As to your problem, it pretty much looks to me like a corrupted installation of the runtime system or of some driver you are accessing in your application. Do you access any hardware in your app? Or ActiveX components or DLLs? Any of these could cause this problem if you don't properly close any and all resources that you open up during the application's lifetime. LabVIEW has no way to force a DLL or ActiveX component out of memory if that DLL thinks it still wants to stay in memory because a resource it provides hasn't been closed properly.
-
I'm not sure where you want to go with this. Speculating about intentions, or not, when there is a specific result that has been there since at least LabVIEW 2.0 is IMHO a moot point. To me a string always looked like a meander, though not strictly, as it morphs into a somewhat different pattern in vertical lines. Would it worry me? No, absolutely not, as long as it looks different enough from anything else to allow me to distinguish it from other datatypes. Ever looked at clusters and their "non-flat" variants? Flat clusters are clusters that can be typecast and they are brown, while non-flat clusters cannot be typecast but only flattened, and they are pink too. And the format you claim was NI's real intention has no merit, since the borders are too small to be drawn. The line itself is already only one pixel, and a pixel is still the smallest unit that can be drawn on modern screens, so your small borders are simply not possible to draw. So, intention or not, it's not what LabVIEW does, and therefore any discussion about what the intention may have been is pretty useless.
-
User refnums are for instance used by the DOM XML library. They are indeed not documented, but they are not so much a LabVIEW API to call as a combination of an external shared library with a specific API interface, plus a text document describing that API to the LabVIEW object manager, such that the DLL gets linked to properly when you use Property and Method Nodes on the corresponding user refnum. It's a powerful tool to extend LabVIEW with libraries without many LabVIEW VIs involved. And it works from LabVIEW 7 until 2011 without real issues, but there is no guarantee that it couldn't be nixed in a coming version.

While it's theoretically imaginable to interface an SQL database through a script node, I think it is highly impractical. The script node, at least in the version documented in lvsnapi.h, which is the only information I have available, is meant to work on a local session context for the particular script node, much like your sqlite connection. Passing this connection around to various script nodes is highly complicated and also delivers no real benefit, since the script content is static. You can't change the script text at runtime, not even with VI scripting, as that is considered an edit operation that can only occur at edit time. So you end up carving your database interface in stone, which is very seldom how you want to access databases. At least some of the query parameters are usually dynamic, and while you could pass those into the script node as parameters, your script node needs to be able to interpret the entire script, so you need some parser too. The script node interface simply receives the text and the list of parameters and has to do something with it. Also, the supported parameter types are somewhat limited.

So you end up either with a script node that can only contain the SQL text you would otherwise pass to a method, and that always implements a specific SQL statement sequence, or you need to add some intermediate parser that gives you more flexibility in what you can put into the script node besides the SQL statements.
-
Personally, I don't find subroutine priority an issue if it is applied sparingly and very specifically. But once someone starts to apply it to just about every function in a library, he has made that library a clear trashcan candidate in my eyes. Blind optimization like this is about 10 times worse than no optimization at all. For specific functions, like in this case a function that might retrieve a single data item from a result set and is therefore potentially called thousands of times in a normal operation, subroutine priority may make sense, if you know for sure that the function is fast and uninterruptible. By fast I mean that the function should not go through an entire hierarchy of driver layers and whatnot to do its task, and it should not involve any operation that may block or be interrupted, such as any IO operation like disk or, even worse, network access. If you know that the function accesses already prepared data stored in the database refnum or result set refnum, then a subroutine VI is a responsible choice, but otherwise it is just a disaster waiting to happen. Also consider that subroutine VIs are not debuggable anymore, so you really don't want that throughout your entire LabVIEW VI library. Applying subroutine priority to VIs that are not for sure executed repeatedly in loops is laziness and wrongly applied optimization, with nasty costs such as making the library hard to debug and potentially locking yourself up completely.

As to fixing your threading issue with retrieving error information, my choice here would be to write a C wrapper around the sqlite DLL that returns the error code as the function return value, and since I'd already be busy, it would also take care of things like LabVIEW-friendly function parameters where necessary, semaphore locking of connections and other refnums where useful, and even the dynamic loading of selectable sqlite DLLs if that were such a dear topic to me. And I might create a solution based on user refnums, so that the entire access to the interface is done through Property and Method Nodes.
-
What exactly is SuperSecretPrivateSpecialStuff for?
Rolf Kalbermatter replied to Sparkette's topic in LabVIEW General
Maybe start by reading this. I can't say I understand it on more than a very superficial level. Personally I think this node is likely a result of exposing some internals in order to help debug the DFIR and LLVM functionality without always having to dig into the C++ source code debugger itself. Following memory dumps in the C source debugger is immensely frustrating and error prone, so creating a possibility to see what the DFIR algorithm produced, in order to debug optimization problems, is so much easier. Without access to the GenAPI and the LLVM integration into it, however, the result is likely not very useful to anyone. By the way, is the user name "xxx" in your dump the result of an after-the-fact edit to hide your identity, or of running such exercises in a special user account to avoid possible interference with the rest of your system? For someone in the know, the binary data at the beginning could also contain interesting clues. -
If you specify the path in the configuration dialog, the DLL is loaded at load time. If you specify it through the path parameter it is loaded at runtime.
-
For protection from relays you should definitely look at reverse diodes across the coil. When you switch off a coil it always produces a reverse EM voltage, and that can be a high multiple of the operating voltage. So you easily end up with over 100 V reverse voltage on a 12 V relay. I believe it's this reverse EM voltage that could cause the effects you describe, if it doesn't destroy the driver transistor first. Additional protection can be achieved with ferrite beads or ferrite filters that remove the high frequency components created when the relay switches off and the reverse EM voltage is suddenly applied. Even though the protection diode will limit that voltage and allow the current to dissipate over time, there are still high frequency components from the switching that can travel through the circuitry and into your computer unless you put some filters in that path. Also important, of course, is a solid ground plane. If you force those relay currents through small traces and don't at least connect the grounds in some star formation, you can end up with huge transient ground voltage differences during the switching.