Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. I wouldn't count on that! While it is theoretically possible, the implementation of many LabVIEW nodes is on a different level than the DFIR and LLVM algorithms that allow for dead code elimination and invariant code optimizations. The LabVIEW compiler architecture does decompose loops and other structures, as well as common functions like some array functions, into DFIR graphs, but other nodes such as File I/O (Open, Read, Write, Close) and most likely also Obtain Queue are essentially just calls into precompiled C++ functions inside the LabVIEW kernel, and DFIR and LLVM cannot optimize on that level.
  2. You forget to take into account that almost all systems nowadays are set to synchronize their time to a time server (if they have any internet connection). This means the RTC is periodically adjusted to whatever the configured time server reports (smarter algorithms will make this adjustment incrementally rather than in one big step). The timer tick, however, is simply an internal interrupt timer derived from the crystal oscillator that drives the CPU. This oscillator is not very accurate, as it really doesn't matter much whether your CPU operates at 2.210000000000 GHz or 2.211000000000000 GHz. The timer tick is used to drive the RTC in between time server synchronizations, but no more than that. The sketch below shows the two clocks side by side.
  3. Error 1097 means that the function somehow ran into a problem such as addressing invalid memory. This can happen if you pass a pointer to the MoveBlock function that does not point to a memory block large enough for the function to work on. For instance, if you pass a LabVIEW string that has been preallocated with 1000 characters and tell the function to copy 2 * 1000 = 2000 bytes into it because the source is in Unicode. Or you forgot to preallocate the output buffer altogether, or you made some other miscalculation when computing the number of bytes to copy into the target pointer. The sketch below shows the overflow case.
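     A minimal Win32 sketch of that distinction, assuming a Windows target: GetTickCount64() runs off the interrupt timer and is never stepped, while the system time reflects the RTC and can be adjusted by a time-server sync between any two readings.

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            ULONGLONG tick = GetTickCount64();   /* ms since boot, driven by the
                                                    interrupt timer, never stepped */
            FILETIME ft;
            GetSystemTimeAsFileTime(&ft);        /* wall clock (RTC), subject to
                                                    time-server adjustments */
            printf("tick: %llu ms since boot\n", tick);
            printf("system time (100 ns units): %llu\n",
                   ((ULONGLONG)ft.dwHighDateTime << 32) | ft.dwLowDateTime);
            return 0;
        }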
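     A small C sketch of that failure mode, assuming LabVIEW's extcode.h; the function and variable names are illustrative. MoveBlock blindly copies the requested number of bytes, so the destination must already be at least that large.

        #include "extcode.h"

        void CopyIntoString(LStrHandle dest, const unsigned char *src)
        {
            /* dest was preallocated in LabVIEW with 1000 bytes */
            MoveBlock(src, LStrBuf(*dest), 1000);  /* OK: fits in the handle */
            MoveBlock(src, LStrBuf(*dest), 2000);  /* overflow: writes past the
                                                      end of the handle, which
                                                      is error 1097 territory */
        }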
  4. The code is part of the Code Capture Tool here on LAVA. And a thread about that library alone is here: http://forums.ni.com/t5/LabVIEW/LV-slows-down-extremely-when-having-large-contents-in-clipboard/m-p/300818#M157084
  5. Please mention in the future when you crosspost, as that helps people who want to answer see whether they can add anything beneficial that hasn't already been covered in the other thread. There is no way to create a LabVIEW VISA resource in your calling C code, AFAIK. The VISA resource is an internal LabVIEW datatype handle that contains the underlying VISA session, the name of the VISA resource as used to open it, and possibly other LabVIEW-internal information. The only entity that knows how to create this resource is the LabVIEW kernel. But your C application does not link to the LabVIEW kernel at all, and there is no easy way to determine which LabVIEW kernel your DLL uses in order to link to it. Even if you knew how to link to the LabVIEW kernel (runtime engine) that the DLL uses, there are no documented functions in there to deal with VISA resources. Basically, what you want to do is doomed. Instead, you should replace the VISA resource name control with a string control. Then you can configure it as a string in the DLL interface and treat it as such in your C code, as in the sketch below. The only possible drawback is that in the DLL VI the VISA resource will be looked up every time you call that function, as the LabVIEW kernel has to find an already opened session with that name or create a new one if it doesn't yet exist. A little search magic on the NI forum would have given you that answer too.
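     A hypothetical sketch of the C side once the control is a string: the export name ReadInstrument and its parameters are made up here, but this is how a LabVIEW-built DLL function taking a C string resource name would be called.

        #include <stdio.h>

        /* exported from the LabVIEW-built DLL; the VI's former VISA
           resource name control is now a string configured as C string */
        extern int ReadInstrument(const char *resourceName,
                                  char *buf, int bufLen);

        int main(void)
        {
            char response[256];
            int err = ReadInstrument("GPIB0::22::INSTR",
                                     response, sizeof response);
            if (err == 0)
                printf("Instrument replied: %s\n", response);
            return err;
        }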
  6. Yes, you are missing the fact that the two arrays in the struct are inlined, not pointers (see the sketch below). And even if they were pointers, a LabVIEW array handle is NOT equivalent to a C array pointer, let alone to an inlined array. What you need to do is replace your arrays with clusters of 100 and 6 U8 elements respectively. Creating a cluster constant with 100 elements by hand is rather inconvenient and time consuming; you best create it by converting a U8 array constant to the corresponding cluster with the Array to Cluster primitive, not forgetting to set the cluster size by right-clicking on the node and selecting the "Cluster Size ..." dialog. Then assemble the cluster/struct with a Bundle node.
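     A C sketch of the layout difference, with assumed field names: the inline arrays live inside the struct itself, which is why the LabVIEW side must be a cluster of 100 and 6 U8s rather than array handles.

        #include <stdint.h>

        typedef struct {
            uint8_t data[100];   /* inline: 100 bytes stored in the struct */
            uint8_t header[6];   /* inline: 6 more bytes, directly following */
        } DeviceRecord;          /* sizeof == 106 (plus any padding) */

        /* By contrast, this variant stores only a pointer, which matches
           neither the inline layout above nor a LabVIEW array handle: */
        typedef struct {
            uint8_t *data;       /* pointer to memory elsewhere */
        } PointerRecord;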
  7. They of course can't do that out of the box. A VI in one application context shares absolutely no data with the same VI in another application context on its own. That is true even if you run the FGV in two separate projects on the same computer, and even more so if they run on different devices or different LabVIEW versions. As Jordan suggested, you will need to implement some real interapplication communication here, either through network variables (easiest once you get the hang of how to configure them and deploy them to the targets) or through your own network communication interface (my personal preference in most cases). There are also reference design projects for cRIO applications, such as the CVT (Current Value Table) and the accompanying CCC (CVT Client Communication), that show a possible approach for implementing the second solution.
  8. Telcos indeed have to deal with this all the time, and I would suspect these components to be a significant part of the cost of any switching station they put somewhere in the field. It could very well be that the price of their solutions would kill your superiors when they hear it. I only remember something about what was done for military grade radio devices at a place I worked more than 20 years ago. They put so-called TransZorb diodes into the device for every signal path going to an outside connector. Quite a few dozen in fact, and each of them cost something like $100 back then and could easily be damaged during the soldering of the PCBs. Granted, they were supposed to help not just with lightning but even with the EMP from a nuclear explosion, but whether that would really have worked, and would still have mattered at that point, is another story.
  9. My OCD is not that bad. I can usually contain myself, unless I'm working on an algorithm where I've lost all inspiration and need a meditation moment!
  10. Fully agree, and scan any links in a post for known site names. The last spammy post contained one of the heavily "promoted" site names.
  11. Yeah, I sometimes like to align functions inside multiframe structures too, so they are in the same place in all frames. But usually only if they do similar things, and as a form of meditation while thinking about the rest of the algorithm.
  12. It's not about wires going behind VIs without being connected; that is something I have done myself maybe 5 times in over 20 years of LabVIEWing. It's about seeing to which terminal a wire is really connected. Of course that may often seem unnecessary because the terminals have different, incompatible datatypes, but it can be a real gotcha when a VI has several datatype-compatible terminals. And yes, it is not as important in my own VIs, since I know how I wired them, but inheriting VIs from someone else is always much easier if such rules have been followed.
  13. I think that is triple-click. For some reason I can't get myself to use that, but that could also be because I have neither a super-duper gaming mouse nor any experience playing shooter games, or most other computer games for that matter.
  14. Are you a network engineer? This is quite a specialized matter, and I would not dare to start doing this without consulting a specialist in installing network infrastructure. The wrong protection circuit could slow down your network traffic considerably while not really protecting your network from real surges. In general, there is almost no technology that will not itself get fried by a direct lightning strike to the installation or to any attached appliance. And for less severe environmental impacts you still need to know quite a bit about both the network characteristics in question and the possible surges you face. Is it static electricity or rather something else?
  15. I can understand that sentiment. But my rule comes from the fact that when I move a VI (nudging it one pixel up and to the side with the cursor) to see whether the wires are attached to the right terminals, and yes, that has always been one of my first methods when debugging code that behaves strangely, I want to see the wires move with it, so I know they really are attached to the terminals they appear to be. With hidden bends you don't see that at all, and you sometimes need to move the node many steps before you can tell whether the wires really attach correctly. And shift-cursor is not a good way to do it. And to Shaun and others: be happy I'm not on the LabVIEW development team. I would penalize any VI call not using the 4*2*2*4 connector pane with an extra 1 ms delay, and document it as a special compiler optimization for the 4*2*2*4 connector pane pattern.
  16. Never hide bends!! That is über evil! But reduce them as much as possible by aligning the nodes on the error cluster.
  17. Well, Win8 RT is for several reasons not an option. Any Win8 RT application has to be a full .Net application, as Win8 RT relies on the .Net virtual machine execution to achieve hardware independence (so it works on ARM, RISC, and x86 CPUs alike). But the Pipes library uses Windows APIs that are prone to very slight platform differences in the kernel. While Windows tries to maintain backwards compatibility as much as possible, this API is exercised infrequently enough that a few minor incompatibilities between Windows versions can slip through.
  18. I wouldn't be surprised if the pipes offer higher throughput than network sockets. They are implemented in the Windows kernel, most likely with some form of shared memory pool that is mapped into both processes. As such they short-circuit quite a bit of overhead compared to going through the Winsock library. However, I have no numbers available. As to the latest version of the available code, the earlier link to the CVS repository on SourceForge is indeed the most recent one that is available and more or less working. I did some more trials on this but didn't get any more reliable operation out of it, and there is also a good chance that it has additional issues on Windows 8. This part of the Windows API is both rather complex and involved and can be influenced by many internal changes to the Windows kernel; the sketch below shows the bare-bones API the library builds on.
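     A minimal server-side sketch of the Win32 named-pipe API, with an arbitrary pipe name and buffer sizes. A real implementation needs overlapped I/O and thorough error handling, which is exactly where the version-to-version fragility mentioned above tends to show up.

        #include <windows.h>

        int main(void)
        {
            HANDLE pipe = CreateNamedPipeA(
                "\\\\.\\pipe\\demo",            /* local pipe name */
                PIPE_ACCESS_DUPLEX,             /* read and write */
                PIPE_TYPE_BYTE | PIPE_WAIT,     /* byte stream, blocking */
                1,                              /* one instance */
                4096, 4096,                     /* out/in buffer sizes */
                0, NULL);
            if (pipe == INVALID_HANDLE_VALUE)
                return 1;
            if (ConnectNamedPipe(pipe, NULL))   /* block until a client connects */
            {
                DWORD written;
                WriteFile(pipe, "hello", 5, &written, NULL);
            }
            CloseHandle(pipe);
            return 0;
        }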
  19. There is no easy solution. The only proper way would be to implement a DLL wrapper that uses the Call Library Node callback methods to register every session that gets opened in some private global queue inside the wrapper DLL. The close function then removes the session from the queue, and the CLN callback function for abort checks the queue for the session parameter and closes it too if found. That CLN callback mechanism is the only way to receive the LabVIEW abort event properly in external code. The skeleton below shows the three callbacks involved.
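     A skeleton of those callbacks, assuming LabVIEW's extcode.h; the session bookkeeping itself is only indicated in comments. These functions are assigned on the Callbacks tab of the Call Library Node.

        #include "extcode.h"

        MgErr SessionReserve(InstanceDataPtr *instanceState)
        {
            /* called when the VI hierarchy is reserved for execution;
               set up per-instance bookkeeping here */
            *instanceState = NULL;
            return mgNoErr;
        }

        MgErr SessionUnreserve(InstanceDataPtr *instanceState)
        {
            /* called when the VI hierarchy leaves run mode;
               free the bookkeeping here */
            return mgNoErr;
        }

        MgErr SessionAbort(InstanceDataPtr *instanceState)
        {
            /* called when the VI hierarchy is aborted: look up any open
               session registered in the global queue (referenced through
               *instanceState) and close it here */
            return mgNoErr;
        }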
  20. He just made a post replying to another post by one of his "buddies", asking if they have a trial version. Cute! Maybe Michael needs to add a specific check and reject any post with a link to yiiglo.com, rasteredge.com, or businessrefinery.com. Also interesting to know: the "Company" link on the site arronlee likes to promote so much does not work at all. Always nice to do (online) business with a company that might literally be sitting on cloud seven as soon as you are unhappy about something.
  21. Well, on that note, I have had every version of LabVIEW since about 5.0 installed on my system. A few months back LabVIEW 8.2.1 started to crash on startup, but I had no urgent need for it to work, so I left it at that. I did regularly check whether it still crashed, because there was a potential project that might need some minor maintenance work in the near future. Just before installing LabVIEW 2012 I tested again, and it still crashed. After installing LabVIEW 2012 SP1 and the corresponding device driver DVD I tried again, and it now worked. And no, I prevented the DAQmx driver from removing any support from the LabVIEW 8.2 directory by hiding it (and the other versions the DAQmx installer wants to rob of their DAQmx VIs) during the install! So while 2012 may be more stable, the underlying device drivers can make a much bigger difference.
  22. Those stubs could be the culprit. Your DLLs may, in their initialization routine (the code that gets executed automatically when the DLL is loaded into memory), call some of these stubs, expect certain values or behavior, and get stuck in an endless loop waiting for these to change. Without seeing the DLL source code this is almost impossible to debug, though. During the initialization routine of a DLL, even on Windows, the system is more or less monopolized by the current process, which can result in a very sluggish or even completely locked-up system. If you have a chance to look at the source code or talk to the developer of the DLL, make sure they are not doing anything complicated in the DllMain() function; that is the function called on loading and unloading of the DLL. In fact, according to MS there are a lot of things you are not allowed to do in there at all. One of them, for instance, is trying to load other DLLs dynamically, which has a good chance of locking up your system in a nice deadlock. A minimal well-behaved DllMain() is sketched below.
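     A minimal sketch of a well-behaved DllMain. The Windows loader holds an internal lock while this runs, so per Microsoft's documentation it must not call LoadLibrary, create threads and wait on them, or otherwise depend on other DLLs being initialized.

        #include <windows.h>

        BOOL WINAPI DllMain(HINSTANCE hinst, DWORD reason, LPVOID reserved)
        {
            switch (reason)
            {
            case DLL_PROCESS_ATTACH:
                /* cheap setup only, e.g. initializing a critical section */
                DisableThreadLibraryCalls(hinst);  /* optional: skip per-thread
                                                      attach/detach notifications */
                break;
            case DLL_PROCESS_DETACH:
                /* cheap teardown only; no LoadLibrary, no thread waits */
                break;
            }
            return TRUE;   /* returning FALSE on ATTACH makes the load fail */
        }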
  23. No, definitely not! 4*2*2*4 should be the standard and strictly enforced for all LabVIEW programmers, if I had a say in this! And anyone using the 6*4*4*6 for a VI that is not private to the library should be banned from writing LabVIEW programs.
  24. Or it might be that the cell boundary calculation was previously done unnecessarily on every update, for each cell. I doubt NI lacks a clipping optimization here: when updating, for instance, the cell background of many cells, they would not even attempt to draw anything on screen that will not be visible. They do of course have to go into the right cell and update its attributes accordingly, so the cell can display correctly when scrolled into the visible viewport. So your past optimizations may mainly have reduced the number of times cell boundaries were recalculated. Now that that has been taken care of, your optimizations won't hurt, but they likely won't improve the speed much anymore. And beware of accidentally changing the cell height for one row; that might disable Christina's nice optimization altogether and get you back to the old situation.