Everything posted by Rolf Kalbermatter
-
hoovah is right: LabVIEW does lazy memory deallocation, and that is not really a bad thing. Every roundtrip to the OS to deallocate memory that will often need to be allocated again a little later is a rather costly operation. The drawback is that once LabVIEW hangs on to memory, it pretty much keeps that memory for as long as it can, but it usually (barring any bugs) reuses that memory quite efficiently when necessary. This could be bad for other applications if you keep a LabVIEW memory grabber in memory and don't want to quit it: even while not currently munching on huge data, it won't give the other applications enough memory. Besides performance it also has another positive side: once LabVIEW has been able to get the memory, another application cannot grab it away and make a second run of the memory-hungry LabVIEW operation suddenly scream about not having enough memory. TDMS is a nice tool, but you have to be aware that it can amount to a lot of data, and that reading in this data in one big glob can very easily overrun even a modestly configured system's resources.
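To see why those allocator roundtrips matter, here is a minimal Java sketch (Java standing in for any managed runtime; the numbers it prints are illustrative only) comparing a fresh allocation per iteration with reusing one buffer, which is essentially the trade-off LabVIEW makes by hanging on to memory:

```java
public class AllocDemo {
    public static void main(String[] args) {
        final int blocks = 2_000;
        final int size = 1 << 18;          // 256 K doubles, 2 MiB per block
        double sink = 0;                   // keeps the JIT from eliding the work

        long t0 = System.nanoTime();
        for (int i = 0; i < blocks; i++) {
            double[] fresh = new double[size]; // new allocation every pass
            fresh[size - 1] = i;
            sink += fresh[size - 1];
        }
        long tFresh = System.nanoTime() - t0;

        double[] reused = new double[size];    // allocated once, reused
        long t1 = System.nanoTime();
        for (int i = 0; i < blocks; i++) {
            java.util.Arrays.fill(reused, 0.0); // reset contents in place
            reused[size - 1] = i;
            sink += reused[size - 1];
        }
        long tReuse = System.nanoTime() - t1;

        System.out.printf("fresh: %d ms, reused: %d ms (sink=%.0f)%n",
                tFresh / 1_000_000, tReuse / 1_000_000, sink);
    }
}
```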
-
can PNG images have different format?
Rolf Kalbermatter replied to vivante's topic in Machine Vision and Imaging
Your description is not entirely clear. When you say "I pass from the binary string to an 'image data' cluster", do you mean that you use the JPG/PNG Data to LV Image function on that stream? If so, there is to my knowledge a way to do the opposite for the JPG case with the IMAQ Flatten Image to String function, but not in the direction you are looking for. Using the IMAQ Flatten Image Option function together with the standard LabVIEW Flatten and Unflatten functions is unfortunately not a solution either, since the Flatten adds extra information to the compressed data stream that the Unflatten seems to expect (although someone on the NI forums apparently had some success using the resulting data stream as a normal JPEG stream, until he moved to LabVIEW 64-bit and got bad results).
-
Futures - An alternative to synchronous messaging
Rolf Kalbermatter replied to Daklu's topic in Object-Oriented Programming
I only know them from Java, and only well enough to use them in not too complicated ways. What they actually do, as far as I understand it, is run as a separate thread (which can be a dedicated thread, one from a shared thread pool, or a few other variants thereof from the java.util.concurrent package) and do whatever they need to do in the background. If they are there to produce something that will eventually be used at some point in the application, you can still end up with a blocking condition. But they are very powerful if the actual consumer of the Future's result is not time constrained, such as with multiple HTTP downloads. If the HTTP client library supports Futures, you can simply shoot off multiple downloads by issuing them in a tight loop, letting each Future handle the actual download in the background. It could, for instance, save the received data to disk and then terminate itself, freeing the used thread again. If you don't need the actual data in the application itself, you can forget the Future completely once you have issued it.

The way I understand it, a Future is some sort of callback that runs in its own thread context for the duration of its lifetime and has some extra functions to manage it, such as cancelling it, checking its status (isDone(), isCancelled()), and even waiting for its result should that need arise. What the callback itself does is entirely up to the implementer. Depending on the chosen executor model, the HTTP client could still block, for example when using a bounded thread pool and issuing more downloads than there are threads available in the pool.

All that said, I do think there are problems with implementing Futures this way in LabVIEW. LabVIEW does automatic multithreading management in its diagrams with fixed, bounded thread pools, and there is no easy way to reconfigure that threading on the fly. So there is no generic way to implement Futures that run off a theoretically unbounded thread pool should that need arise, and if you start to use Futures throughout your application in many other classes you quickly run into thread pool exhaustion anyhow. So I don't see how one could implement Futures in LabVIEW in the same way and still stay generic, which is the whole idea of the Future implementation in Java. The Future itself does not even define the datatype it operates on, but leaves that to the implementer of the class using the Future. Those are all concepts that LabVIEW can't fully keep up with (and one of the reasons I'm not convinced diving into LVOOP is really worth the hassle).
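To make the fire-and-forget case concrete, here is a minimal Java sketch using java.util.concurrent and the JDK 11+ HttpClient (the URLs and file names are placeholders):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDownloads {
    public static void main(String[] args) {
        // Bounded pool: issue more downloads than threads and the extra
        // tasks queue up, which is the blocking case described above.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        HttpClient client = HttpClient.newHttpClient();
        List<String> urls = List.of(             // placeholder URLs
                "https://example.com/a.zip",
                "https://example.com/b.zip");

        List<Future<Path>> futures = new ArrayList<>();
        for (int i = 0; i < urls.size(); i++) {
            String url = urls.get(i);
            Path target = Path.of("download-" + i + ".bin");
            // Shoot off the download; the Future does the work in the background.
            futures.add(pool.submit(() -> {
                HttpRequest req = HttpRequest.newBuilder(URI.create(url)).build();
                return client.send(req, HttpResponse.BodyHandlers.ofFile(target)).body();
            }));
        }
        // If the data itself is never needed in-process, the futures can simply
        // be forgotten here; otherwise cancel()/isDone()/get() remain available.
        pool.shutdown();
    }
}
```
-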
No, C does not have any memory penalty, but if you cast an int32 into a 64-bit pointer, ignore the warning, and then try to access it as a pointer, you crash. LabVIEW is about three levels higher up the high-level-language scale than C. It has, among other things, a strong type system, rather effective automatic memory management, no need to worry about allocating array buffers before you can use them and releasing them afterwards, virtually no way to crash it (as long as you are not interfacing it to external code), and quite a few other things. This sometimes comes at a cost, such as needing to make sure the buffer is allocated.

Also, the nature of LabVIEW datatypes separates them into two very different kinds: those that are represented as handles (strings and arrays) and those that aren't. There is no way LabVIEW could "typecast" between these two fundamental kinds without creating a true data copy. And even when you typecast between arrays, such as typecasting a string into an array of 32-bit integers, it can't just do so in place, since the 32-bit integer array needs to have a byte length that is a multiple of the element size; if your input string (byte array) length is not evenly divisible by four, a new buffer has to be created anyhow. Currently it does so in all cases, as that makes the code simpler and saves an extra check on the actual fitness of the input array size in the case where both input and output are arrays. It could of course check for the possibility of inlining the Typecast, but if your wire happens to run to any other function that wants to use the string or array inline too, a copy needs to be made anyhow. All in all this smartness would add a lot of code to the Typecast function for a benefit that is only achievable in special cases.
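A small Java sketch of the same constraint, using ByteBuffer as a stand-in for LabVIEW's Typecast: trailing bytes that don't fill a whole 32-bit integer are dropped, and the result is necessarily a fresh buffer rather than an in-place view:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.IntBuffer;

public class TypecastDemo {
    // Reinterpret a byte array as 32-bit integers, Typecast style: only
    // complete 4-byte groups survive, and the output is a new array.
    static int[] bytesToInts(byte[] data) {
        int whole = data.length / 4;             // complete 4-byte groups only
        IntBuffer view = ByteBuffer.wrap(data, 0, whole * 4)
                .order(ByteOrder.BIG_ENDIAN)     // LabVIEW flattens big-endian
                .asIntBuffer();
        int[] out = new int[whole];              // the unavoidable copy
        view.get(out);
        return out;
    }

    public static void main(String[] args) {
        byte[] bytes = {0, 0, 0, 1, 0, 0, 0, 2, 99}; // 9 bytes: last one dropped
        for (int v : bytesToInts(bytes)) System.out.println(v); // prints 1, 2
    }
}
```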
-
Futures - An alternative to synchronous messaging
Rolf Kalbermatter replied to Daklu's topic in Object-Oriented Programming
I only know Futures from Java, where they are an integral part of the java.util.concurrent package. The established way in Java for this is a blocking get() method on the Future, with an optional timeout; the Future also has a cancel() method. Coding that up without some potential race condition is not trivial, however, and Sun certainly needed a few tries before getting it really working.
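For illustration, the blocking get() with timeout and the cancel() method look like this in plain java.util.concurrent (the sleeping task is just a stand-in for slow work):

```java
import java.util.concurrent.*;

public class FutureGetDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> future = pool.submit(() -> {
            Thread.sleep(2_000);      // stand-in for slow work
            return "result";
        });
        try {
            // Blocking get() with optional timeout, as described above.
            System.out.println(future.get(500, TimeUnit.MILLISECONDS));
        } catch (TimeoutException e) {
            future.cancel(true);      // interrupt the worker if still running
            System.out.println("timed out, cancelled: " + future.isCancelled());
        } finally {
            pool.shutdown();
        }
    }
}
```
-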
I would almost bet my hat that that is also what the "Is NaN/Refnum/.." function does, which would make it a similarly fast operation as the typecast followed by a bitmask operation. And because of that I would prefer the explicit and clear use of the Is NaN primitive any time over the much less clear "Advanced Operation" hack, even if the latter would save a few CPU cycles.
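For reference, such a typecast-plus-bitmask check takes only a few lines; this Java sketch shows the idea, with Float.floatToRawIntBits playing the typecast role (whether NI's primitive is implemented exactly this way is of course a guess, not documented fact):

```java
public class NanMaskDemo {
    // NaN <=> exponent all ones AND mantissa non-zero. With the sign bit
    // masked off, that is simply: (bits & 0x7FFFFFFF) > 0x7F800000,
    // since 0x7F800000 itself is Infinity.
    static boolean isNaN(float f) {
        int bits = Float.floatToRawIntBits(f);   // the "typecast" step
        return (bits & 0x7FFFFFFF) > 0x7F800000; // the bitmask/compare step
    }

    public static void main(String[] args) {
        System.out.println(isNaN(0.0f / 0.0f));             // true
        System.out.println(isNaN(Float.POSITIVE_INFINITY)); // false
        System.out.println(isNaN(1.0f));                    // false
    }
}
```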
-
Maybe LLVM does that, but I doubt the original LabVIEW compiler did boolean logic reduction. Even then, the difference is likely very small in comparison to the time needed for the Variant case, for instance: two logic operations versus six does not amount to many nanoseconds on modern CPU architectures. And it is clear at a quick glance what asbo did wrong when copying the code: he should have ORed the two Is NaN? results, not one of them and the Is Equal? result.
-
Actually this is not so much something that an application does as something the CPU does. If the quiet bit is cleared (or set, for CPUs that use it as a signaling bit), the NaN can cause an FPU exception. An application can then install an exception handler to catch these for calculation results and turn them into quiet NaNs, which the FPU is supposed to pass through as quiet NaN in any further FPU operation. For LabVIEW it doesn't really make much difference, as LabVIEW simply implements NaN handling according to IEEE. Whether it uses the exception support of the CPU or not is irrelevant for the LabVIEW user, as we simply get the results and have no mechanism to get informed about exceptions. The NaN LabVIEW itself produces explicitly has all bits set, but it will treat any number whose exponent is all 1s and whose mantissa has at least one set bit as NaN.
-
Well, I have used this in the past for debugging communication drivers: applying it to the Read/Write functions to log all the strings sent and received, to see where things went bad at some point and to try to understand protocol handling issues. As such it is helpful but not exactly user friendly, and in such cases I have since usually added a direct debug logger that takes the interesting parameters and writes a string line to a text file on disk. It means writing specific code for each case, but it is a lot easier to parse and analyze later on than manually stepping through each call iteration. If there were a method to hook a log VI to a VI instance (or its class; I wonder if the Class Operator might work for that, and I'm not talking directly about LVOOP but about the App method App->Get/Set Class Operator), that might be more helpful. Such a logger VI would probably receive a data structure containing a copy of, or reference to, the actual connector pane data and would have a pre- and post-execution call. The Enable Database feature was discussed in some LabVIEW classes back in the old days (15 to 20 years ago) but has since been all but forgotten.
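A minimal sketch of such a line-per-call debug logger, here in Java with hypothetical names (CommLog, log()); the escaping keeps binary protocol bytes readable in the trace:

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.LocalTime;

// One timestamped line per Read/Write call, with non-printable bytes
// escaped so the trace stays easy to grep and analyze later.
public class CommLog implements AutoCloseable {
    private final PrintWriter out;

    public CommLog(Path file) throws IOException {
        out = new PrintWriter(Files.newBufferedWriter(file,
                StandardCharsets.UTF_8, StandardOpenOption.CREATE,
                StandardOpenOption.APPEND), true); // autoflush each line
    }

    // Call as log.log("READ", buffer) or log.log("WRITE", buffer).
    public void log(String direction, byte[] data) {
        StringBuilder sb = new StringBuilder();
        for (byte b : data) {
            int c = b & 0xFF;
            if (c >= 0x20 && c < 0x7F) sb.append((char) c);
            else sb.append(String.format("\\x%02X", c)); // escape binary bytes
        }
        out.printf("%s %-5s %s%n", LocalTime.now(), direction, sb);
    }

    @Override public void close() { out.close(); }
}
```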
-
Well, that doesn't say whether each of those is guaranteed to produce the canonical LabVIEW NaN, which is 0xFFFFFFFF. IEEE only specifies that any number with an exponent where all bits are set and a mantissa with at least one bit set equals NaN. The sign bit is irrelevant for NaN. All exponent bits set with a mantissa of 0 means Infinity, and there the sign bit of course has meaning. So any number with the bit pattern s111 1111 1qxx xxxx xxxx xxxx xxxx xxxx (for 32-bit float) and with at least one of the q or x bits set is NaN. The s bit is the sign bit, which is not relevant for NaN, and the q bit is the quiet bit, which indicates whether the NaN will cause an exception or not (for most processors a set bit means a silent, or quiet, NaN).
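That field layout can be made concrete with a small Java sketch that picks the bits apart (the quiet-bit convention follows the common set-bit-means-quiet interpretation mentioned above):

```java
public class Float32Fields {
    static String classify(int bits) {
        int exponent = (bits >>> 23) & 0xFF;    // 8 exponent bits
        int mantissa = bits & 0x7FFFFF;         // 23 mantissa bits
        boolean quiet = (bits & 0x400000) != 0; // top mantissa bit = q bit
        if (exponent == 0xFF && mantissa != 0)
            return quiet ? "quiet NaN" : "signaling NaN";
        if (exponent == 0xFF)
            return (bits < 0 ? "-" : "+") + "Infinity"; // sign matters here
        return "ordinary number";
    }

    public static void main(String[] args) {
        System.out.println(classify(0xFFFFFFFF)); // LabVIEW canonical NaN -> quiet NaN
        System.out.println(classify(0x7F800000)); // +Infinity
        System.out.println(classify(0x7F800001)); // signaling NaN
        System.out.println(classify(Float.floatToRawIntBits(1.5f))); // ordinary number
    }
}
```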
-
Actually this is not going to fly well. While LabVIEW does use a so-called canonical bit pattern to indicate NaN, IEEE defines NaN in such a way that an entire range of bit patterns results in a NaN value, and LabVIEW correctly recognizes this when looking at a floating point number. So depending on where your NaNs come from, they might each be NaN yet not contain the same bit pattern.
-
can PNG images have different format?
Rolf Kalbermatter replied to vivante's topic in Machine Vision and Imaging
Well, I'm pretty sure they use the libpng kernel for the PNG support in both cases. However, as you have certainly found out, the interface to libpng is not simple, and the possible formats are not trivial to map to a pixel-based image format like the one NI uses. In fact, some features of PNG are impossible to preserve with a single 2D pixmap and an optional alpha channel. Supporting memory streams requires quite a bit more work too: libpng can access files directly, but needs extra work to support memory streams instead. So it's very easy to take a few shortcuts in one case and end up with less support for the more exotic PNG formats than when using file-based input/output.
-
Troubleshooting Latency On The Serial Bus
Rolf Kalbermatter replied to mje's topic in LabVIEW General
Todd, if you had read those 921600 bps with one VISA Read per received character, it sure enough would have slowed you down. There is definitely overhead with VISA calls, as they go through several software layers of abstraction, so minimizing the number of VISA calls is a good idea. I have to say that 12 ms per VISA Read does sound a little excessive though; I'm sure I have seen less on computers that were a lot less powerful than today's average low-cost machine.
-
Considering that Heap Peek is a totally unsupported, officially non-existent feature in LabVIEW, your thinking is clearly VERY wrong. Most people, including many working within NI, don't know about it, and those who do will only admit it when forced with a gun to their head. Besides, as asbo has already hinted, explaining yourself is always a good idea if you hope to get help from others. These fora are based almost exclusively on voluntary effort, and if I have to start guessing what someone might mean, I (and most others) am very quick to dismiss the post and move on to something more interesting. There are simply too many questions in the world to spend time on every one of them.
-
Personally I would prefer it if this setting were also available in the Build Specification properties (where it would override a default setting in the Project properties) rather than being accessible on the diagram in any way. Currently I have to make different project files (or use the Pre/Post Build VIs to do some undocumented magic to change those symbols) if I want different builds depending on such a symbol. This shouldn't be necessary at all if the conditional-symbols idea were properly implemented for the project instead of being added as a sort of afterthought. I do understand that this can get messy to implement, with some settings inheriting from other places and overriding them in a clear and consistent manner, but that is what I expect from a modern project management tool. And I have refused to use some Microsoft Visual Studio versions because they had weird inheritance problems with project settings.
-
VI Modification Bitset - Bit interpretation
Rolf Kalbermatter replied to Bernd's topic in LabVIEW General
While I have some basic information about this from earlier LabVIEW versions (v5), I'm hesitant to publish it, since there are certainly differences between LabVIEW versions. I don't have the details, but I remember having to abandon some code in newer versions that relied on these bits having certain meanings.
-
How to Programmatically reset WLS Ni-9191
Rolf Kalbermatter replied to Ano Ano's topic in LabVIEW General
I don't have a lot of experience with networked DAQmx devices, but what I found was that my LabVIEW application could sometimes access a CompactDAQ chassis, sometimes only after I had accessed the device in MAX, and sometimes not at all. This was not an option for an application that needed to start up and access the hardware with no user interaction at all when the computer booted. Adding a little code that checks whether the DAQmx device is a networked device and, in that case, calls DAQmx Reserve Network Device.vi before initializing the DAQmx task solved that completely; my application now always starts up and accesses the hardware without having to go into MAX at all. I'm pretty sure this function call internally also performs a device reset, which is no problem for me and seems to be what you want too.
-
How to Programmatically reset WLS Ni-9191
Rolf Kalbermatter replied to Ano Ano's topic in LabVIEW General
Do you call the DAQmx Reserve Network Device.vi in your LabVIEW code?
-
No, VLC could not be aware of anything beyond the HWND boundaries at all, since everything inside a VI front panel is LabVIEW specific and not visible to anything that only knows about Windows internals. It would definitely be a case of a VI panel being used for nothing but an HWND container, and there is no sensible way of sharing that window with any LabVIEW controls whatsoever. It is even worse than that, as VLC cannot know whether you overlay that panel with something else in LabVIEW. On the other hand, VLC does not have to worry about blitting only within the window boundaries as long as it uses GDI or friends for it; Windows will make sure that any blitting only occurs within the window boundary no matter what. That changes of course if VLC blits directly into the graphics frame buffer, but as far as I know this is strongly discouraged by Windows and actually made very hard, since it requires hardware-specific code that can deal with the various graphics card drivers.
-
Rendering is obviously an issue, but ActiveX is not the solution here; it is at best a "bricolage", as the French say. The simplest way would involve copying the data into a LabVIEW indicator like the Picture Control. Not ideal in terms of performance, but at least doable on any LabVIEW platform without issues about overlapping and such. I would love to be able to create my own control like the IMAQ control, but alas that is not an option for us mere mortals, as it requires too much NI-internal knowledge about LabVIEW. The second best option would be to use a VI panel as a subpanel and reserve the entire VI as the render surface: extract the platform graphics port (HWND on Windows) and pass it to VLC to let it render its output directly into it. There will be issues with other elements overlapping that window, but ActiveX has those too, and the only way to get that right is to hook into the internal LabVIEW object messaging, which is not an option without serious access to at least some parts of the LabVIEW internals.
-
Writing ActiveX controls when there are other solutions is only for real masochists. Go the DLL path: it gives you a lot more control in the debugging process, avoids an entire level of obscure Windows 3.1-imposed limitations and a few more levels of intermediate bullshit, and, last but not least, if done with a little planning in mind, can be ported to every single platform that LabVIEW and VLC support without too much effort.
-
VI Modification Bitset - Bit interpretation
Rolf Kalbermatter replied to Bernd's topic in LabVIEW General
The meaning of this bitset is private to LabVIEW and undocumented. As such it is also highly susceptible to changes and completely incompatible reinterpretations between LabVIEW versions. Some of the more trivial modifications, like front panel or diagram changes, can be deduced quite easily, but there are many obscure modifications in LabVIEW that get recorded in this bitset that nobody without access to the LabVIEW source code could ever guess. Try looking in the VI Properties->Current Changes dialog and see what it mentions there. You might be able to deduce some of these bit flags from that, but quite a few bit flags are lumped together into more common modification reasons.
-
What is the Ch interpreter?
Rolf Kalbermatter replied to Phillip Brooks's topic in Calling External Code
Well, the approach chosen in that paper is rather cumbersome. It means you have to create a CIN or DLL for every operation you want to perform, with an external file whose name is compiled into that external code. Yes, it gives the flexibility to change the actual Ch code by changing the external file's contents, but it is highly coupled in a logical way and highly decoupled in a process way: in fact the total opposite of what you would want from a scripting solution. Using an interpreted C environment in a way that requires writing a VI that interfaces to compiled C code for every individual problem is quite involved. That may work if you have a specific problem to solve whose algorithm needs fine-tuning in the field, but it doesn't feel like a general-purpose approach to integrating scripting into LabVIEW, or into any programming environment, actually.
-
c series module 9205 erratic analog readings
Rolf Kalbermatter replied to KWaris's topic in Hardware
Well, a circuit always needs some reference. For isolated modules like this, that means either your sensor or the analog input needs a reference to some form of GND; this can be the GND of the power supply powering your cRIO. Without a ground reference the entire circuit floats, and static charge can eventually drive it to a level that exceeds the common-mode range of the input amplifier, resulting in saturation and corresponding misbehaviour of the amplifier. The advantage of isolated amplifiers is that they can measure signals referenced to a different potential than the one the measurement system itself uses (of course the difference in reference needs to stay below the isolation voltage or you get a breakdown). If the measurement signal is not referenced in any way, you need to provide a reference yourself. This may seem like buying an expensive isolation amplifier and then undoing the isolation with the common reference, but the isolation amplifier still protects your measurement device (and the operator sitting at the computer connected to it) from high-voltage surges such as those caused by lightning, provided the ground reference is strong enough to dissipate them.