Everything posted by Rolf Kalbermatter

  1. It's not the Open VI Reference node itself that blocks the UI thread but the load operation of a VI hierarchy. This piece of code is very delicate, as it needs to update large global tables that keep track of all loaded VIs, their relationships to each other, linking information, and who knows what else. So yes, I'm sure the Open VI Reference and the application load operation both use the same LoadVIHierarchy() function and that this function is the UI-blocking culprit. And I'm afraid calling a user VI as a callback during this operation could be fairly costly. LabVIEW doesn't lock the UI thread for fun during this operation, and invoking VIs would likely touch the very resources that this locking tries to protect, so the lock might have to be released for the duration of each VI invoke. That could add up to quite a bit more than a few hundred microseconds per call. Anyhow, even if you got a callback telling you about the operation, be it a user event or a specific VI invoke, how would you give any feedback to the user while the UI thread is blocked?
  2. Why would you use string arrays if the example you gave in your first post only contains numbers? I also think your data structure design is highly flawed if the data in the first post is all you want to sort. What you should look at is a 1D array of integers for your first array, and a 1D array of clusters containing an integer and a float for your second array and for the result.
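     A rough Java analog of that layout, as a minimal sketch: the sample values are made up, but the structure is the suggested one, an array of {integer, float} "clusters" sorted by the integer element.

         import java.util.Arrays;
         import java.util.Comparator;

         public class ClusterSortSketch {
             // Rough analog of a LabVIEW cluster of {integer, float}
             record Entry(int key, double value) {}

             public static void main(String[] args) {
                 // Hypothetical sample data standing in for the numbers in the first post
                 Entry[] data = {
                     new Entry(3, 1.5), new Entry(1, 7.2), new Entry(2, 0.4)
                 };
                 // Sort the "cluster array" by its integer element
                 Arrays.sort(data, Comparator.comparingInt(Entry::key));
                 System.out.println(Arrays.toString(data));
             }
         }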
  3. Shaun, the problem with this is that events are processed in the UI thread, and the Open VI Reference blocks the UI thread very effectively. The reason is that it needs to update various global tables frequently during the load and can't have any other part of LabVIEW potentially messing with those tables while it is busy. Unlocking the UI thread for the duration of the event is a bad idea, as it would inevitably add significant overhead to the load operation that would without doubt be noticeable. What I do is place an animated GIF on the splash screen. It runs even when the VI is not executing, but a user of the application doesn't notice that. It is only a partly satisfactory solution, though, since the GIF animates properly in the IDE during the Open VI Reference call but still seems to stop momentarily in a built executable. I work around that by having the Main VI itself load various components dynamically, so that the load is really divided into several Open VI Reference operations. The splash screen simply has a string control that the main can send status messages to, and it waits until the main decides to open its own front panel, indicating that it is ready to take over.

     I don't think that suggestion is well thought out. A separate SSH protocol library is of very little use, as it only implements the SSH protocol itself and wouldn't make the underlying SSL capabilities available to other protocols. I would rather see SSL support for TCP sockets that can be configured transparently. I tried to start something like this in the network library that I posted quite some time ago here on LavaG. The idea is that you can set up SSL parameters and register them for a network socket, so the socket uses them automatically. SSL is really meant to be transparent, in terms of protocol handling, to higher-level protocols such as HTTPS, which is just the HTTP protocol transferred through an SSL-secured socket.
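     For comparison, this is how transparent that layering is in Java: the host name and request below are placeholders, but once the SSL parameters are attached to the socket, the code on top speaks plain HTTP and never touches the encryption.

         import javax.net.ssl.SSLSocket;
         import javax.net.ssl.SSLSocketFactory;
         import java.io.BufferedReader;
         import java.io.InputStreamReader;
         import java.io.OutputStreamWriter;
         import java.io.Writer;
         import java.nio.charset.StandardCharsets;

         public class SslTransparencySketch {
             public static void main(String[] args) throws Exception {
                 // The SSL layer is configured on the socket; the protocol on top is plain HTTP
                 SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
                 try (SSLSocket socket = (SSLSocket) factory.createSocket("example.com", 443)) {
                     Writer out = new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.US_ASCII);
                     out.write("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n");
                     out.flush();
                     // Everything read back is ordinary HTTP; the socket handled the TLS handshake
                     BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
                     System.out.println(in.readLine()); // e.g. "HTTP/1.1 200 OK"
                 }
             }
         }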
  4. I think you should terminate not only on a Connection Closed by Peer error but on just about any possible error, except maybe the timeout error, although that is debatable too. And yes, you would want to filter out the Connection Closed by Peer error after the while loop, since that is a valid way to terminate the connection and not an error as far as the user of your HTTP Get function is concerned. But error 56 is definitely a timeout error; Connection Closed by Peer is error code 66.
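     The same pattern in Java, as a sketch with a hypothetical helper: the end-of-stream condition plays the role of LabVIEW's error 66, terminates the read loop, and is swallowed there instead of being reported to the caller, while a timeout or any other I/O error still propagates as an exception.

         import java.io.ByteArrayOutputStream;
         import java.io.InputStream;
         import java.net.Socket;

         public class HttpReadLoopSketch {
             // Reads until the server closes the connection; that close is the normal end
             // of the response, not an error to report to the caller.
             static byte[] readToEnd(Socket socket) throws Exception {
                 ByteArrayOutputStream response = new ByteArrayOutputStream();
                 InputStream in = socket.getInputStream();
                 byte[] chunk = new byte[4096];
                 int n;
                 while ((n = in.read(chunk)) != -1) {   // -1 == connection closed by peer
                     response.write(chunk, 0, n);
                 }
                 return response.toByteArray();          // the "close" is filtered out right here
             }
         }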
  5. Good point that dataflow inherently solves some cases that futures might be used for in conventional languages.
  6. Well, you may want it at some point, but not necessarily in the same application, and perhaps just as a copy of files on the harddisk. But you are right, whatever you want, it is usually never as trivial as just shooting off the request and forgetting it. You usually want to be informed if one of the downloads didn't succeed, for instance, and you also want a way to keep the library from waiting forever on data that never arrives.

     Java threading can be a bit more powerful than just blindly spawning threads. If you make consistent use of the java.util.concurrent package, you can create very powerful systems that employ various forms of multithreading with very little programming effort. At the core are so-called executors that can have various characteristics such as single threads, bounded and unbounded thread pools, and even scheduled variants of them. Thread pools are especially handy, since creation and destruction of threads is a very expensive operation, specifically under Windows. By using thread pools you pay that penalty only once and can still dynamically assign new "tasks" to those threads.

     You are fully right here, but the actual thread pool configuration in LabVIEW is very static. There is a VI in vi.lib that you can use to configure the number of threads for each execution system, but this used to require a LabVIEW restart for the changes to take effect. I'm not sure if that is still the case in the latest LabVIEW versions. LabVIEW itself still implements a sort of cooperative multithreading on top of the OS multithreading support, just as it did before LabVIEW got OS thread support in 5.0 or 5.1. So I would guess the situation is not necessarily as bad, since you can run multiple VIs in the same execution system and LabVIEW will distribute the available threads over the actual LabVIEW clumps, as they call the indivisible code sequences they identify and schedule in their homebrew cooperative multitasking system. You do have to be careful, however, about blocking functions that cause a switch to the UI thread, as that could completely defeat the purpose of any architecture trying to implement parallel code execution. Not sure how the new asynchronous call fits into this. I would hope they retained all the advantages of the synchronous Call By Reference but added some way of actually executing the call in a parallel thread-like system, much like the Run VI method does. But I haven't looked into that yet. Edit: I just read up on it a bit, and the Asynchronous Call By Reference looks very much like a Future in itself. No need to employ LVOOP for it, yupiieee!

     I never intended to say it would be useless, just not as generic and powerful as the Java version. Lol, I very much missed the hooded smiley here that is sometimes available on other boards. I should have scrolled through the list instead of assuming it isn't there. I knew I was poking at some people's cookies with this, and that is part of the reason I even put it there.
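     Coming back to the executor point above, a minimal sketch using only standard java.util.concurrent classes; the task bodies are placeholders, the point is that the pool threads are created once and reused for every submitted task.

         import java.util.concurrent.ExecutorService;
         import java.util.concurrent.Executors;
         import java.util.concurrent.ScheduledExecutorService;
         import java.util.concurrent.TimeUnit;

         public class ExecutorSketch {
             public static void main(String[] args) throws Exception {
                 // Bounded pool: four threads are created once and reused for all tasks
                 ExecutorService pool = Executors.newFixedThreadPool(4);
                 for (int i = 0; i < 10; i++) {
                     final int task = i;
                     pool.submit(() ->
                         System.out.println("task " + task + " on " + Thread.currentThread().getName()));
                 }
                 pool.shutdown();
                 pool.awaitTermination(10, TimeUnit.SECONDS);

                 // Scheduled variant of the same idea: run a task after a delay
                 ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
                 timer.schedule(() -> System.out.println("delayed task"), 1, TimeUnit.SECONDS);
                 timer.shutdown(); // already-scheduled tasks still run before the pool terminates
             }
         }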
  7. According to the manual it uses an RS-485 interface and supports the CompoWay/F, SYSWAY, or Modbus protocols. Which Modbus registers to read for the setpoint, current value, and other values, and how to interpret them, should be documented in some sort of programming manual. The Operation Manual on the site at least doesn't say anything about that.
  8. hoovah is right, LabVIEW does lazy memory deallocation, and that is not really a bad thing. Every roundtrip to the OS to deallocate memory that will often need to be allocated again a little later is a rather costly operation. The drawback is that once LabVIEW hangs on to memory, it pretty much keeps that memory for as long as it can, but it usually (barring any bugs) reuses it quite efficiently when necessary. This can be bad for other applications if you keep a LabVIEW memory grabber in memory and don't want to quit it: even though it isn't currently munching on huge data, it doesn't leave the other applications enough memory. It also has a positive side besides performance: once LabVIEW has been able to get the memory, another application cannot grab it and make a second run of the memory-hungry LabVIEW operation suddenly scream about not enough memory. TDMS is a nice tool, but you have to be aware that it can amount to a lot of data and that reading this data in one big glob can very easily overrun even a modestly configured system's resources.
  9. Your description is not entirely clear. When you say "I pass from the binary string to an 'image data' cluster", do you mean that you have JPG/PNG data in that stream that you want to turn into an LV image? If so, to my knowledge there is a way to do the opposite for the JPG case with the IMAQ Flatten Image to String function, but not in the direction you are looking for. Using the IMAQ Flatten Image Option function together with the standard LabVIEW Flatten and Unflatten functions is unfortunately not a solution either, since Flatten adds extra information to the compressed data stream that Unflatten seems to expect (although someone on the NI forums apparently had some success using the resulting data stream as a normal JPEG stream, until he moved to LabVIEW 64-bit and got bad results).
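     Just for comparison, and not the IMAQ route at all: in Java the binary-string-to-pixel-data step is a single decode call straight from memory. The file name below is a placeholder standing in for the received binary string.

         import javax.imageio.ImageIO;
         import java.awt.image.BufferedImage;
         import java.io.ByteArrayInputStream;
         import java.nio.file.Files;
         import java.nio.file.Paths;

         public class JpegDecodeSketch {
             public static void main(String[] args) throws Exception {
                 // Stand-in for the binary string received over the network
                 byte[] jpegStream = Files.readAllBytes(Paths.get("capture.jpg"));
                 // Decode the compressed stream straight from memory, no temporary file needed
                 BufferedImage image = ImageIO.read(new ByteArrayInputStream(jpegStream));
                 // Raw pixel access, roughly what an "image data" cluster holds
                 int firstPixel = image.getRGB(0, 0);
                 System.out.printf("%d x %d, pixel(0,0) = 0x%08X%n",
                     image.getWidth(), image.getHeight(), firstPixel);
             }
         }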
  10. I only know them from Java, and only well enough to use them in not too complicated ways. What they actually do, as far as I understand it, is run as a separate thread (which can be a dedicated thread, one from a shared thread pool, or a few other variants thereof from the java.util.concurrent package) and do whatever they need to do in the background. If they are there to produce something that will eventually be used at some point in the application, you can still end up with a blocking condition. But they are very powerful if the actual consumer of the future result is not time constrained, such as with multiple HTTP downloads. If the HTTP client library supports Futures, you can simply shoot off multiple downloads by issuing them in a tight loop, letting each Future handle the actual download in the background. It could, for instance, save the received data to disk and terminate itself after that, freeing the used thread again. If you don't need the actual data in the application itself, you can then forget the Future completely once you have issued it. The way I understand it, a Future is some sort of callback that runs in its own thread context for the duration of its lifetime and has some extra functions to manage it, such as cancelling, checking its status (isDone(), isCancelled()), and even waiting for its result should that need arise. What the callback itself does is entirely up to the implementer. Depending on the chosen executor model, the HTTP client could still block, such as when using a bounded thread pool and issuing more downloads than there are threads available in the pool.

      All that said, I do think there are problems implementing Futures in this way in LabVIEW. LabVIEW does automatic multithreading management in its diagrams with fixed, bounded thread pools. There is no easy way to reconfigure that threading on the fly. So there is no generic way to implement Futures that run off a theoretically unbounded thread pool should that need arise, and if you start to use Futures throughout your application in many other classes, you run into thread pool exhaustion quickly anyhow. So I don't see how one could implement Futures in LabVIEW in the same way and still stay generic, which is the whole idea of the Future implementation in Java. The Future itself does not even define the datatype it operates on, but leaves that to the implementer of the class using the Future. Those are all concepts that LabVIEW can't fully keep up with (and one of the reasons I'm not convinced diving into LVOOP is really worth the hassle).
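      To make the download-with-Futures scenario above concrete, a small sketch with a bounded pool; the URLs are placeholders and the HTTP client is Java's standard one, not any particular client library.

          import java.net.URI;
          import java.net.http.HttpClient;
          import java.net.http.HttpRequest;
          import java.net.http.HttpResponse;
          import java.util.ArrayList;
          import java.util.List;
          import java.util.concurrent.ExecutorService;
          import java.util.concurrent.Executors;
          import java.util.concurrent.Future;
          import java.util.concurrent.TimeUnit;

          public class FutureDownloadSketch {
              public static void main(String[] args) throws Exception {
                  ExecutorService pool = Executors.newFixedThreadPool(4); // bounded thread pool
                  HttpClient client = HttpClient.newHttpClient();
                  List<String> urls = List.of("https://example.com/a.bin", "https://example.com/b.bin");

                  // Issue all downloads in a tight loop; each Future does its work on a pool thread.
                  List<Future<Integer>> futures = new ArrayList<>();
                  for (String url : urls) {
                      futures.add(pool.submit(() -> {
                          HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
                          HttpResponse<byte[]> response =
                              client.send(request, HttpResponse.BodyHandlers.ofByteArray());
                          // A real task could write response.body() to disk and be forgotten here.
                          return response.body().length;
                      }));
                  }

                  // The issuing loop never blocked; the Future offers isDone()/cancel()/get(timeout).
                  for (Future<Integer> f : futures) {
                      System.out.println("done=" + f.isDone() + ", bytes=" + f.get(60, TimeUnit.SECONDS));
                  }
                  pool.shutdown();
              }
          }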
  11. No, C does not have any memory penalty, but if you cast an int32 into a 64-bit pointer, ignore the warning, and then try to access it as a pointer, you crash. LabVIEW is about three levels higher than C in terms of high-level languages. It has, among other things, a strong data type system, rather effective automatic memory management, no need to worry about allocating array buffers before you can use them and releasing them afterwards, virtually no possibility to make it crash (as long as you aren't interfacing it to external code), and quite a few other things. This sometimes comes at a cost, such as needing to make sure a buffer is allocated. LabVIEW datatypes also separate into two very different kinds: those that are represented as handles (strings and arrays) and those that aren't. There is no way LabVIEW could "typecast" between these two fundamental kinds without creating a true data copy. And even when you typecast between arrays, such as typecasting a string into an array of 32-bit integers, it can't always do it in place, since the 32-bit integer array needs a byte length that is a multiple of the element size; if your input string (byte array) length is not evenly divisible by four, it needs to create a new buffer anyhow. Currently it does so in all cases, as that makes the code simpler and saves an extra check on whether the input array size actually fits in the case where both input and output are arrays. It could of course check whether the Typecast can be done inline, but if your wire happens to run to any other function that also wants to use the string or array inline, it needs to create a copy anyhow. All in all, this smartness would add a lot of code to the Typecast function for a benefit that is only achievable in special cases.
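      The multiple-of-four issue is easy to show in Java terms; this hypothetical helper mirrors what a byte-array-to-int32-array Typecast has to deal with when the input length doesn't divide evenly.

          import java.nio.ByteBuffer;
          import java.nio.ByteOrder;

          public class TypecastSketch {
              // Reinterpret a byte array as 32-bit integers. If the length is not a multiple
              // of four, a new, truncated (or padded) buffer is unavoidable, which is the same
              // reason LabVIEW's Typecast allocates a fresh array in that situation.
              static int[] bytesToInts(byte[] input) {
                  int whole = input.length / 4;               // complete 4-byte groups only
                  int[] result = new int[whole];
                  ByteBuffer.wrap(input, 0, whole * 4)
                            .order(ByteOrder.BIG_ENDIAN)      // LabVIEW flattens big-endian
                            .asIntBuffer()
                            .get(result);
                  return result;
              }

              public static void main(String[] args) {
                  byte[] data = {0, 0, 0, 1, 0, 0, 0, 2, 9};  // 9 bytes: the trailing byte is dropped
                  for (int v : bytesToInts(data)) System.out.println(v);
              }
          }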
  12. I only know Futures from Java, where they are an integral part of the java.util.concurrent package. And the established way in Java for this is a blocking get() method on the Future, with an optional timeout. The Future also has a cancel() method. Coding that up without potential race conditions is not trivial, however, and Sun certainly needed a few attempts before getting it really right.
  13. I would almost bet my hat that that is also what the "Is NaN/Refnum/..." function does, which would make it a similarly fast operation as the typecast followed by a bitmask operation. And because of that, I would prefer the explicit and clear use of the Is NaN primitive any time over the much less clear "Advanced Operation" hack, even if the latter saved a few CPU cycles.
  14. That should work if the NaNs are guaranteed to come from LabVIEW itself, but it could fail for the reason you mention if the NaN comes from somewhere else, such as a network bytestream.
  15. Maybe LLVM does that, but I doubt the original LabVIEW compiler did boolean logic reduction. Even then the difference is likely very small compared to the time needed for the Variant case, for instance. Two logic operations versus six certainly doesn't amount to many nanoseconds on modern CPU architectures. And it is clear from a quick glance what asbo did wrong when copying the code: he should have ORed the two isNaN? results, not one of them and the isEqual? result.
  16. Actually this is not so much something that an application does as something the CPU does. If the quiet bit is cleared (or set, for CPUs that use it as a signaling bit), they can cause an FPU exception. An application can then install an exception handler to catch them for calculation results and turn them into quiet NaNs, which the FPU is supposed to pass through as quiet NaN in any further FPU operation. For LabVIEW it doesn't really make much difference, as LabVIEW simply implements NaN handling according to IEEE. Whether it uses the exception support of the CPU or not is irrelevant for the LabVIEW user: we simply get the results and don't have a mechanism to be further informed about exceptions. LabVIEW's own NaN explicitly has all bits set, but it will treat any number whose exponent bits are all 1 and whose mantissa has at least one set bit as NaN.
  17. Well, I have used this in the past for debugging communication drivers, applying it to the Read/Write functions to log all the strings sent and received, to see where things went bad and to understand protocol handling issues. As such it is helpful but not exactly user friendly, and in such cases I have since usually added a direct debug logger that takes the interesting parameters and writes a line to a text file on disk. It means writing specific code for each case, but it is a lot easier to parse and analyze later on than manually stepping through each call iteration. If there were a method to hook a log VI to a VI instance (or its class; I wonder if the Class Operator might work for that, and I'm not talking about LVOOP here but the App method App->Get/Set Class Operator), that might be more helpful. Such a logger VI would probably receive a data structure that contains a copy of or reference to the actual connector pane data and would have a pre- and post-execution call. The Enable Database feature was discussed in some LabVIEW classes back in the old days (15 to 20 years ago) but has since gone into the land of the forgotten.
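      For the direct debug logger mentioned above, a minimal Java sketch (the log file name and the byte values are made up): one line per transaction with a timestamp, direction, and a hex dump is usually enough to reconstruct the protocol exchange afterwards.

          import java.io.IOException;
          import java.nio.file.Files;
          import java.nio.file.Path;
          import java.nio.file.StandardOpenOption;
          import java.time.LocalDateTime;

          public class CommLogSketch {
              private static final Path LOG = Path.of("comm_debug.log"); // hypothetical log file

              // Append one line per transaction: timestamp, direction and a hex dump of the raw
              // bytes, much easier to analyze than single-stepping through every call.
              static void log(String direction, byte[] data) throws IOException {
                  StringBuilder hex = new StringBuilder();
                  for (byte b : data) hex.append(String.format("%02X ", b));
                  String line = String.format("%s %-4s %s%n",
                      LocalDateTime.now(), direction, hex.toString().trim());
                  Files.writeString(LOG, line, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
              }

              public static void main(String[] args) throws IOException {
                  log("TX", new byte[]{0x02, 0x41, 0x03});   // bytes just sent to the instrument
                  log("RX", new byte[]{0x06});               // bytes just received back
              }
          }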
  18. Well, that doesn't say whether each of those is guaranteed to produce the LabVIEW canonical NaN pattern, which is 0xFFFFFFFF. IEEE only specifies that any number whose exponent bits are all set and whose mantissa has at least one set bit equals NaN. The sign bit is irrelevant for NaN. All exponent bits set with a mantissa of 0 means Infinity, and there the sign bit of course has a meaning. So any number with bit pattern s111 1111 1qxx xxxx xxxx xxxx xxxx xxxx (for a 32-bit float) where at least one of the mantissa bits (q or any x) is set is NaN. The s bit is the sign bit, which is not relevant for NaN, and the q bit is the quiet bit, which indicates whether the NaN will cause an exception or not (for most processors a set bit means a silent or quiet NaN).
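      In Java terms that range check looks like this, a sketch working on the raw bit pattern; the constants follow the IEEE single-precision layout described above.

          public class NanBitsSketch {
              // IEEE 754 single precision: NaN == exponent all ones and mantissa non-zero.
              // The sign bit is irrelevant; bit 22 is the quiet bit.
              static boolean isNanByBits(float value) {
                  int bits = Float.floatToRawIntBits(value);     // keeps the original bit pattern
                  return (bits & 0x7F800000) == 0x7F800000       // exponent all 1s
                      && (bits & 0x007FFFFF) != 0;               // at least one mantissa bit set
              }

              public static void main(String[] args) {
                  float canonical = Float.intBitsToFloat(0xFFFFFFFF); // "all bits set" NaN
                  float computed  = 0.0f / 0.0f;                      // NaN as produced by a calculation
                  System.out.println(isNanByBits(canonical) + " " + isNanByBits(computed));
                  // Both pass the test even though the two bit patterns are generally not identical,
                  // which is why comparing against one fixed pattern is not a reliable NaN check.
                  System.out.printf("0x%08X vs 0x%08X%n",
                      Float.floatToRawIntBits(canonical), Float.floatToRawIntBits(computed));
              }
          }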
  19. Actually this is not going to fly well. While LabVIEW does use a so-called canonical bit pattern to indicate NaN, NaN is defined by IEEE in such a way that an entire range of bit patterns results in a NaN value, and LabVIEW recognizes this correctly when looking at a floating-point numeric. So depending on where your NaN comes from, it may be NaN every time yet not contain the same bit pattern.
  20. Well, I'm pretty sure they use the libpng kernel for the PNG support in both cases. However, as you have certainly found out, the interface to libpng is not simple, and the possible formats are not trivial to map to a pixel-based image format like the one NI is using. In fact, some features of PNG are impossible to preserve with a single 2D pixmap and an optional alpha channel. Supporting memory streams also requires quite a bit more work: libpng supports accessing files directly but needs extra work to read from memory streams instead. So it's very easy to take a few shortcuts in one case and end up with less support for the more exotic PNG formats than when using file-based input/output.
  21. Todd, if you had read those 921600 bps with one VISA Read per received character, it sure enough would have slowed you down. There is definitely overhead with VISA calls, as they go through several software layers of abstraction, so minimizing the number of VISA calls is definitely a good idea. I have to say that 12 ms per VISA Read does sound a little excessive, though. I'm sure I have seen less on computers that were a lot less powerful than today's average low-cost computer.
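      To illustrate the call-count effect only, a Java sketch with a memory stream standing in for the serial port (so the per-call overhead is far smaller than a real VISA call, but the ratio of calls is the same): one read per byte versus one read per 4 KiB block over the same data.

          import java.io.ByteArrayInputStream;
          import java.io.InputStream;

          public class ReadGranularitySketch {
              public static void main(String[] args) throws Exception {
                  byte[] data = new byte[1 << 20]; // 1 MiB stand-in for the received serial traffic

                  // One "read call" per byte: maximum per-call overhead
                  long t0 = System.nanoTime();
                  InputStream perByte = new ByteArrayInputStream(data);
                  while (perByte.read() != -1) { /* handle one character */ }
                  long perByteNs = System.nanoTime() - t0;

                  // One read call per 4 KiB block: the same data, a fraction of the calls
                  long t1 = System.nanoTime();
                  InputStream blocks = new ByteArrayInputStream(data);
                  byte[] block = new byte[4096];
                  while (blocks.read(block) != -1) { /* handle a whole block */ }
                  long blockNs = System.nanoTime() - t1;

                  System.out.printf("per byte: %.1f ms, per block: %.1f ms%n",
                      perByteNs / 1e6, blockNs / 1e6);
              }
          }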
  22. Considering that Heap Peek is a totally unsupported, officially non-existing feature in LabVIEW, your thinking is clearly VERY wrong. Most people, including many working within NI, don't know about it, and those who do know will only admit it with a gun to their head. Besides, as asbo has already hinted, explaining yourself is always a good idea if you hope to get help from others. These fora are based almost exclusively on voluntary effort, and if I have to start guessing what someone might mean, I, and most others, are very quick to dismiss the post and move on to something more interesting. There are simply too many questions in the world to spend time on every one of them.
  23. Personally, I would prefer this setting to also be available in the Build Specification properties (where it would override a default setting in the project properties) rather than being able to access it from the diagram in any way. Currently I have to make different project files (or use the pre/post build VIs to do some undocumented magic to change those symbols) if I want different builds depending on such a symbol. This shouldn't be necessary at all if the conditional symbols idea had been properly implemented for the project and not just added as a sort of afterthought. I do understand that this can get messy to implement, with some settings inheriting from other places and overriding them in a clear and consistent manner, but that is what I expect from a modern project management tool. And I have refused to use some Microsoft Visual Studio versions because they had weird inheritance problems with project settings.
  24. While I have some basic information about this from earlier LabVIEW versions (v5), I'm hesitant to publish it since there are certainly differences between LabVIEW versions. I don't have the details, but I remember having to abandon some code in newer versions that relied on these bits having certain meanings.
  25. I don't have a lot of experience with networked DAQmx devices, but what I found was that my LabVIEW application could sometimes access a CompactDAQ chassis, sometimes only after I had accessed the device in MAX, and sometimes not at all. That was not an option for an application that needed to start up and access the hardware with no user interaction at all when the computer boots. Adding a little code that checks whether the DAQmx device is a networked device and, if so, calls DAQmx Reserve Network Device.vi before initializing the DAQmx task has solved that completely; my application now always starts up and accesses the hardware without having to go into MAX at all. I'm pretty sure this function call internally also does a device reset, which is no problem for me and seems to be what you want too.