Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. The old problem of character encoding when crossing application borders. Why not create two polymorphic VIs: one that specifically converts from the current locale to whatever the DB uses as its default, and one that passes the string through entirely unaltered for the case where the user knows the data is already in the right encoding. Even more useful, although almost impossible to implement fully, would be letting the user specify the local encoding and having the VI do all the necessary conversion to whatever the DB encoding is supposed to be. That is already a nightmare when the DB encoding stays constant, but if that can be configured too, then OMG!!!
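As a rough illustration of what the first of those conversion VIs would wrap, here is a C sketch using POSIX iconv; the encoding names and the Latin-1 source are assumptions for the example only, and real code would also need to handle E2BIG/EILSEQ and partial conversions properly:

```c
#include <iconv.h>
#include <stddef.h>

/* Convert a buffer from a given source encoding to UTF-8.
   Returns the number of bytes written to dst, or (size_t)-1 on error. */
size_t to_utf8(const char *src_enc, const char *src, size_t srclen,
               char *dst, size_t dstlen)
{
    iconv_t cd = iconv_open("UTF-8", src_enc);
    if (cd == (iconv_t)-1)
        return (size_t)-1;

    char *in = (char *)src, *out = dst;
    size_t inleft = srclen, outleft = dstlen;
    size_t rc = iconv(cd, &in, &inleft, &out, &outleft);
    iconv_close(cd);

    if (rc == (size_t)-1)
        return (size_t)-1;
    return dstlen - outleft;  /* bytes actually produced */
}
```

The "pass through unaltered" VI of the pair is then simply the identity function on the byte buffer.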
  2. You are contradicting yourself quite a bit here. First you promote the link to that site, only to disqualify it two posts later, boasting about your own version that is so much better. 1000 password-protected VIs per minute: wow! Just wondering where you would get those 1000 VIs. A real hacker wouldn't boast about it, and if you were concerned about security you would inform the manufacturer of the software and hope they fix it, not post it all over the place.
  3. I guess the thinking is to not show popups when being controlled remotely, but only when in local control. But it would seem more logical to make the application such that popups are generally not required, or to separate the UI for remote control from the UI for local control.
  4. Well, I mentioned meanders before. I can't find an English Wikipedia entry, so here it is in German. Look specifically at the second line in the big picture.
  5. I think your proposed fix is not a fix but just a change of the bug. With your fix you end up with an index in the range -1 to n-1 instead of 0 to n, so you have replaced the invalid index at the end of the range with one before the range. The -1 is better placed after the Array Size function.
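In text form the three variants look like this; the original is a LabVIEW diagram, so this C sketch of scaling a 0..1 random number to an array index is only a hypothetical rendering of the same arithmetic:

```c
/* Bug: rounding r * n yields indices 0..n; n itself is out of range. */
int index_buggy(double r, int n)
{
    return (int)(r * n + 0.5);
}

/* The proposed "fix" just moves the bug: subtracting 1 from the product
   yields -1..n-1, so the invalid index now sits before the range. */
int index_still_buggy(double r, int n)
{
    return (int)(r * n + 0.5) - 1;
}

/* Correct: subtract 1 right after Array Size, then scale,
   giving indices 0..n-1. */
int index_fixed(double r, int n)
{
    return (int)(r * (n - 1) + 0.5);
}
```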
  6. Do you configure the CLNs to execute in any thread? At least MoveBlock is a safe function to call that way; not sure about the sqlite functions, of course. And it would seem you are doing the getSize() call in any case, to determine whether the string contains the entire data? Also try setting the debug level in the MoveBlock() CLN to low: once that works, there is little benefit in debugging that function call. And of course a C wrapper that does the getSize(), DSNewHdl(), getPtr() and MoveBlock() all in one can't be beaten by any LabVIEW diagram.
  7. Well, you have to have a memory buffer, here a string, to write into. So you either start with a string that is for sure long enough, and most of the time way too long, or you do a first API call to retrieve the necessary buffer length. Creating that array or string buffer to pass to the retrieval function should then be pretty much the same for strings or byte arrays; no need for a MoveBlock usually. Note: I see that the function in question seems to return a string pointer, so that seems to be the reason for the MoveBlock(). However, configuring that function to return a string is very costly, because LabVIEW has to scan the whole string for the null byte, allocate a buffer, copy the string into it, and return that string buffer. Now there seems to be a function call beforehand to determine the expected size anyhow, to verify that the string returned all of the data, so we are at two function calls already. And if the VI then decides, drat, we didn't get all the data, it has to allocate a new buffer anyhow, copy the data with MoveBlock, and throw away the incomplete string that was just created. This works, but it is really slow, and only acceptable if you don't care about performance or it's clear that embedded zero bytes are a big exception. The retrieve-pointer API configured to return a pointer-sized integer will cost a few nanoseconds, but the same API call configured to return a string will be a lot more expensive, because it has to determine the string size, allocate a buffer, and do what you might end up doing with the MoveBlock() once more afterwards.
Returning a string simply can't be faster than returning the pointer itself plus a separate MoveBlock() call, because it is in fact simply the execution of a get pointer, allocate string, and MoveBlock of the data from the pointer into the string, with an additional strlen() call on the pointer to determine the assumed size of the buffer, which might turn out to be too small. So the solution with always just three API calls will be simpler, in the worst case just about as quick, and in some cases considerably faster.
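The embedded-zero-byte problem is easy to demonstrate in plain C (a standalone sketch, not the actual LabVIEW CLN mechanism): a strlen()-based copy, which is what a "return as C string" configuration amounts to, silently truncates, while a copy driven by the separately retrieved size does not:

```c
#include <string.h>

/* What "return a C string pointer" amounts to: scan for NUL, then copy.
   Truncates at the first embedded zero byte. */
size_t copy_as_cstring(const char *src, char *dst)
{
    size_t n = strlen(src);
    memcpy(dst, src, n);
    return n;
}

/* What getSize() + MoveBlock() amounts to: copy exactly the number of
   bytes the API reported, embedded zeros included. */
size_t copy_with_size(const char *src, size_t size, char *dst)
{
    memcpy(dst, src, size);
    return size;
}
```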
  8. Yair already answered this mostly. Those resources are compressed using the zlib deflate algorithm, and the reason is indeed speed: CPUs have been getting faster at a higher rate than hard disk interfaces, so it's faster to load n bytes and push them through a decompression algorithm than to read 2 * n bytes directly. And no, newer LabVIEW versions can read uncompressed VI files to be backwards compatible, but they do not save uncompressed VIs anymore. To understand the basic VI file format you can take a look at your Milch website, and an old Inside Macintosh file format book helps too, since LabVIEW mostly uses the Macintosh resource format for its VI files. But that only gives you a very rough overview of where the global things are, not how each of those resources is actually constructed.
  9. Badly in need of vacation. And btw, I'm old enough to have earned the right to be grumpy every now and then. While I understand his desire to dig deeper in some other posts he made, I have to say that I see no sense in trying to read a specific intention into a pattern on screen that was chosen over 20 years ago, for reasons that may be obscure or not. And I would also like to have a peek at the LabVIEW source code, except that I would most likely have to realize that it is just way over my head, so I pass on that opportunity and keep the illusion that I might understand it.
  10. Why would reading in the data as a string first be faster than reading it with MoveBlock() or into a byte array, when it's in both cases one memory buffer copy? Actually, ByteArrayToString has a slight chance of being faster than Typecast. That may sound counterintuitive if you think of Typecast in the C way, but the LabVIEW Typecast is a lot more complicated than a C typecast. It maintains the memory information logically but not necessarily physically, as a C typecast would. For one thing it always employs endianness swapping (not really an issue for byte data, as there is nothing to swap there), and in the case of typecasting an int32 to a float, for instance, it in fact involves double byte swapping on little-endian machines (any LabVIEW machine at the current time, except maybe the PowerPC VxWorks cRIO systems): once from the native integer format to the inherent flattened format, and then again from the flattened format to the native floating point format. So a LabVIEW Typecast is anything but a simple C typecast.
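A small C sketch of that semantics (my reading of it, not NI's actual implementation): Typecast flattens the input to big-endian byte order and then unflattens those bytes as the target type, so on a little-endian machine an int32-to-float Typecast performs two swaps whose net result matches a plain reinterpretation of the bytes, just with extra work:

```c
#include <stdint.h>
#include <string.h>

static uint32_t bswap32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) | (v << 24);
}

/* LabVIEW-style Typecast int32 -> float on a little-endian host:
   swap the native int into the big-endian flattened form, then swap
   the flattened bytes back into native order and read them as float. */
float lv_typecast_i32_to_float(uint32_t v)
{
    uint32_t flattened = bswap32(v);      /* native -> flattened (BE) */
    uint32_t native    = bswap32(flattened); /* flattened -> native float bytes */
    float f;
    memcpy(&f, &native, sizeof f);
    return f;
}
```

The two swaps cancel out for same-size scalars, which is why the result equals a C reinterpretation of the same bytes; the flatten/unflatten model still costs those passes, though, and matters for cases where the swaps do not cancel.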
  11. Quit LabVIEW existed long before LabVIEW allowed building executables. As such, its intention was for sure not to be used in executables in the first place. And exactly because of that, there had to be a way to shut down the entire IDE, since any experiments were executed in the dev system; that was the only way to execute VIs. So if you wanted to build an experiment that shut itself down completely after execution, you needed a way to exit the dev environment too. With the advent of the application builder and the ability to create executables, the Quit LabVIEW node became pretty much obsolete, but it wasn't a big enough legacy burden to be really removed.
The root window is a hidden window that handles all the message interaction with the operating system. That is a LabVIEW specialty to implement a Mac OS behavior where system OS messages are always sent to the root loop of a process. So they created a hidden root window that does this root-loop handling on Windows, such that the rest of the LabVIEW messaging could stay pretty much unchanged from the code used for Mac OS. There is an ini setting, hideRootWindow=True, that you can add to your executable's ini file to make the root window button not appear in the taskbar, although I'm not sure that is still necessary in recent LabVIEW versions.
As to your problem, it pretty much looks to me like a corrupted installation of the runtime system or of some driver you are accessing in your application. Do you access any hardware in your app? Or ActiveX components or DLLs? Any of these could cause this problem if you don't properly close any and all resources that you open during the application's lifetime. LabVIEW has no way to force a DLL or ActiveX component out of memory if that DLL thinks it still wants to stay in memory because a resource it provides hasn't been closed properly.
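For reference, the mentioned setting goes into the executable's ini file; the section name matches the executable name, and MyApp here is just a placeholder:

```ini
[MyApp]
hideRootWindow=True
```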
  12. I'm not sure where you want to go with this. Speculating about intentions, when there is a specific result that has been there since at least LabVIEW 2.0, is IMHO a moot point. To me a string always looked like a meander, though not strictly, as it morphs into a somewhat different pattern in vertical lines. Would it worry me? No, absolutely not, as long as it looks different enough from anything else to allow me to distinguish it from other datatypes. Ever looked at clusters and their "non-flat" variants? Flat clusters are clusters that can be typecast, and they are brown, while non-flat clusters cannot be typecast but only flattened, and they are pink. And the format you claim was NI's real intention has no merit, since the borders are too small to be drawn. The line itself is already only one pixel, and a pixel is still the smallest unit that can be drawn on modern screens, so your small borders simply cannot be drawn. Intention or not, it's not what LabVIEW does, and therefore any discussion about what the intention may have been is pretty useless.
  13. User refnums are for instance used by the DOM XML library. They are indeed not documented, but they are not so much a LabVIEW API to call as a combination of an external shared library with a specific API interface and a text document describing that API to the LabVIEW object manager, so that the DLL gets linked to properly when you use Property and Method Nodes on the corresponding user refnum. It's a powerful tool for extending LabVIEW with libraries without many LabVIEW VIs involved. And it works from LabVIEW 7 through 2011 without real issues, but there is no guarantee that it won't be broken in a coming version.
While it's theoretically imaginable to interface an SQL database through a script node, I think it is highly impractical. The script node, at least in the version documented in lvsnapi.h, which is the only information I have available, is meant to work on a local session context for the particular script node, much like your sqlite connection; passing this connection around to various script nodes is highly complicated and also delivers no real benefit, since the script content is static. You can't change the script text at runtime, not even with VI scripting, as that is considered an edit operation that can only occur at edit time. So you end up writing your database interface in stone, which is very seldom how you want to access databases. At least some of the query parameters are usually dynamic, and while you could pass those into the script node as parameters, your script node needs to be able to interpret the entire script, so you need some parser too. The script node interface simply receives the text and the list of parameters and has to do something with them. Also, the supported parameter types are somewhat limited.
So you end up either with a script node that can only contain the SQL text you pass to a method, and that always implements a specific SQL statement sequence, or you need to add some intermediate parser that gives you more flexibility in what you can put into the script node besides the SQL statements.
  14. Personally I find subroutine priority not an issue if applied sparingly and very specifically. But once someone starts to apply it to just about every function in a library, he has made that library a clear trashcan candidate in my eyes. Blind optimization like this is about ten times worse than no optimization at all. For specific functions, like in this case a function that might retrieve a single data item from a result set and is therefore potentially called thousands of times in a normal operation, subroutine priority may make sense, if you know for sure that the function is fast and uninterruptible. By fast I mean that the function should not go through an entire hierarchy of driver layers to do its task, and it should not involve any operation that may block, such as any IO operation like disk or, even worse, network access. If you know that the function accesses already prepared data stored in the database refnum or result set refnum, then a subroutine VI is a responsible choice, but otherwise it is just a disaster waiting to happen. Also consider that subroutine VIs are not debuggable anymore, so you really don't want that throughout your entire LabVIEW VI library. Applying subroutine priority to VIs that are not for sure executed very repeatedly in loops is laziness and wrongly applied optimization, with nasty costs such as making the library hard to debug and potentially locking yourself up completely.
As to fixing your threading issue with retrieving error information, my choice here would be to write a C wrapper around the sqlite DLL that returns the error code as the function return value, and, since I'm already busy, it would also take care of things like LabVIEW-friendly function parameters where necessary, semaphore locking of connections and other refnums where useful, and even the dynamic loading of selectable sqlite DLLs if that were such a dear topic to me. And I might create a solution based on user refnums, so that the entire access to the interface is done through Property and Method Nodes.
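The difference between a separately fetched "last error" and returning the code directly can be sketched in plain C; this is a generic illustration of the wrapper pattern, not the actual sqlite API:

```c
/* Racy pattern: the error code is parked in shared state, so between
   the call and the later "get last error" fetch, another thread's call
   can overwrite it. */
static int last_error;

int divide_global(int a, int b, int *result)
{
    if (b == 0) { last_error = 1; return -1; }
    last_error = 0;
    *result = a / b;
    return 0;
}

/* Wrapper pattern: the error code is the function return value itself,
   so it travels with the call and cannot be clobbered by other threads. */
int divide_direct(int a, int b, int *result)
{
    if (b == 0)
        return 1;  /* error code returned directly */
    *result = a / b;
    return 0;
}
```

A real wrapper would apply the second shape to each sqlite entry point it exposes, optionally adding the mentioned semaphore locking around the connection.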
  15. Maybe start by reading this. I can't say I understand it on more than a very superficial level. Personally I think the result of this node is likely a result of exposing some internals in order to help debug the DFIR and LLVM functionality without always having to dig into the C++ source-level debugger itself. Following memory dumps in a C source debugger is immensely frustrating and error prone, so creating a way to see what the DFIR algorithm produced, in order to debug optimization problems, is so much easier. Without access to the GenAPI and the LLVM integration into it, however, the result is likely not very useful to anyone. By the way, is the user name "xxx" in your dump the result of an after-the-fact edit to hide your identity, or of running such exercises in a special user account to avoid possible interference with the rest of your system? For someone in the know, the binary data at the beginning could also contain interesting clues.
  16. If you specify the path in the configuration dialog, the DLL is loaded at load time. If you specify it through the path parameter, it is loaded at runtime.
  17. For protection from relays you should definitely look at reverse diodes across the coil. When you switch off a coil, it always produces a reverse EMF voltage, and that can be a high multiple of the operating voltage; you easily end up with over 100 V of reverse voltage on a 12 V relay. I believe it's this reverse EMF voltage that could cause the effects you describe, if it doesn't destroy the driver transistor first. Additional protection can be achieved with ferrite beads or ferrite filters that remove the high-frequency components created when the relay switches off and the reverse EMF voltage is suddenly applied. Even though the protection diode will limit that voltage and allow the current to dissipate over time, there are still high-frequency components from the switching that can travel through the circuitry and into your computer unless you put some filters in that path. Also important, of course, is a solid ground plane. If you force those relay currents through small traces and don't at least connect the grounds in some star formation, you can end up with huge transient ground voltage differences during the switching.
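To get a feel for the magnitude, the inductor law V = L·dI/dt gives the order of the flyback spike; the concrete numbers below (coil inductance, coil current, interruption time) are made up purely for illustration:

```c
/* Flyback voltage estimate: V = L * dI / dt.
   L in henries, dI in amperes, dt in seconds. */
double flyback_voltage(double L, double dI, double dt)
{
    return L * dI / dt;
}
```

A hypothetical 100 mH coil interrupting 50 mA in 50 µs already yields a spike on the order of 100 V, an order of magnitude above a 12 V supply, which is why the clamping diode matters.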
  18. Well, even in open source projects it is often the case that the developers have created helper tools that they may not distribute openly. And no, as long as you are the developer of the code and don't distribute the result, there is no open source license that obligates you to distribute the source, not even the GPL. In the case of commercial applications it's a total fantasy to expect, or even hope, to get all the internal tools of the software manufacturer too. That would mean, among other things, also license generators and whatnot, and you know where that would lead.
  19. Well, how do you think they did the first controls of a new data type? Probably something like handcoding with a specially compiled LabVIEW that has special tools included. And as to how the compiler gets confused, I'm sure you will never hear detailed info. For one thing this is NI-internal, and for another, unless you understand a project like LLVM from the ground up, a more technical explanation would make no sense to you. Go study LLVM, and once you understand it, you may be qualified to understand at least in part what all might go wrong there.
  20. I can only echo slacter's recommendation. I have never used the Quit LabVIEW primitive in my 20 years of LabVIEW programming, other than to try it out. In the development environment I normally don't want to quit LabVIEW anyhow, and in a built application the executable terminates as soon as you close the last front panel. So that is usually the last thing I do in my main VI after all loops have exited.
  21. I'm pretty sure it is what the Microprocessor C Development Toolkit uses, and as such this function will not do anything useful if you don't have a valid license for that toolkit. Remember that LabVIEW has a license management system (at least under Windows) and that much of this functionality is protected through it. So if you have the license for the toolkit, the functionality is much more conveniently available in the Tools menu, and if you don't have the license, this method won't do anything but return an error.
  22. All the documentation for that is in the C source code of the LabPython DLL in the SourceForge repository. But note that you can't get away without writing a very specific DLL. And LabPython does very tricky dynamic loading to allow separation of the actual LabPython core functionality from the script node plugin. Without that you would get into trouble, since the LabVIEW VIs wouldn't search in the script node plugin for the DLL.
  23. Now, 25 ns really amazes me! That for a loop that needs to compare several dozen characters. It is probably optimized to operate on 4-byte integers instead of on individual characters. Or maybe LabVIEW nowadays uses dirty flags for its data handles, but that seems rather unlikely; an Always Data Copy on the wire to the path before it is passed to the CLN should eliminate any cached dirty flags. And that LVOOP might be an important part of the picture would not surprise me at all.
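The speculated optimization, comparing four bytes at a time instead of one, looks roughly like this in C; it is only an illustration of the technique, and memcpy is used for the word loads to keep the sketch alignment-safe:

```c
#include <stdint.h>
#include <string.h>

/* Compare two byte buffers of equal length, one 32-bit word at a time,
   falling back to single bytes for the 0..3 byte tail. Returns 1 if equal. */
int buffers_equal(const unsigned char *a, const unsigned char *b, size_t len)
{
    size_t i = 0;
    for (; i + 4 <= len; i += 4) {
        uint32_t wa, wb;
        memcpy(&wa, a + i, 4);  /* alignment-safe word load */
        memcpy(&wb, b + i, 4);
        if (wa != wb)
            return 0;
    }
    for (; i < len; i++)
        if (a[i] != b[i])
            return 0;
    return 1;
}
```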
  24. Always nice to have real numbers. I guess my estimates were still based on the days when I worked with 66 MHz i486 CPUs. A modern dual core should smash that to pieces, of course.
  25. Why? This color states the situation of those nodes much more clearly than your suggestion. It screams at the user: go away, don't look at me, don't even think about it!!!!! Besides, the color you suggest is already used in some of my company-internal libraries, so I have a first-use right on it!