Everything posted by Rolf Kalbermatter

  1. Actually this is not so much something an application does as something the CPU does. If the quiet bit is cleared (or set, for CPUs that use it as a signaling bit), such a NaN can cause an FPU exception. An application can then install an exception handler to catch these for calculation results and turn them into quiet NaNs, which the FPU is supposed to pass through as quiet NaN in any further FPU operation. For LabVIEW it doesn't really make much difference, as LabVIEW simply implements NaN handling according to IEEE. Whether it uses the exception support of the CPU or not is irrelevant for the LabVIEW user: we simply get the results and have no mechanism to be informed about exceptions. The canonical LabVIEW NaN explicitly has all bits set, but LabVIEW will treat any number whose exponent is all 1s and whose mantissa has at least one set bit as NaN.
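     To illustrate the quiet/signaling distinction in C (a sketch only: whether FE_INVALID is actually raised depends on the CPU and on compiler optimization settings; the bit patterns used are the 32-bit float ones):

         #include <stdio.h>
         #include <string.h>
         #include <fenv.h>

         int main(void)
         {
             /* exponent all 1s; quiet bit clear = signaling, set = quiet */
             unsigned int sig_bits = 0x7F800001, quiet_bits = 0x7FC00000;
             float s, q;
             memcpy(&s, &sig_bits, sizeof s);
             memcpy(&q, &quiet_bits, sizeof q);

             feclearexcept(FE_INVALID);
             volatile float r1 = s + 1.0f;  /* signaling NaN raises FE_INVALID */
             printf("signaling: FE_INVALID=%d NaN=%d\n",
                    fetestexcept(FE_INVALID) != 0, r1 != r1);

             feclearexcept(FE_INVALID);
             volatile float r2 = q + 1.0f;  /* quiet NaN passes through silently */
             printf("quiet:     FE_INVALID=%d NaN=%d\n",
                    fetestexcept(FE_INVALID) != 0, r2 != r2);
             return 0;
         }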
  2. Well, I have used this in the past for debugging communication drivers: putting it on the Read/Write functions to log all the strings sent and received, to see where things went bad and to understand protocol handling issues. As such it is helpful but not exactly user friendly, and in such cases I have since usually added a direct debug logger that takes the interesting parameters and writes a line to a text file on disk. That means writing specific code for each case, but the result is a lot easier to parse and analyze later than manually stepping through each call iteration. If there were a method to hook a log VI to a VI instance (or its class; I wonder if the Class Operator might work for that, and I'm not talking about LVOOP but the App method App->Get/Set Class Operator), that might be more helpful. Such a logger VI would probably receive a data structure containing a copy of or reference to the actual connector pane data, and would have a pre- and post-execution call. The Enable Database feature was discussed in some LabVIEW classes back in the old days (15 to 20 years ago) but has since been all but forgotten.
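     A C sketch of the kind of ad-hoc logger meant here (all names hypothetical): one timestamped, hex-escaped record per line, so the trace stays easy to scan and diff later.

         #include <stdio.h>
         #include <time.h>

         /* Append one line per transfer: timestamp, direction ("TX"/"RX"),
            then the payload with unprintable bytes hex-escaped. */
         static void log_io(const char *dir, const unsigned char *buf, size_t len)
         {
             FILE *fp = fopen("comm_trace.log", "a");
             if (!fp) return;
             time_t now = time(NULL);
             char stamp[16];
             strftime(stamp, sizeof stamp, "%H:%M:%S", localtime(&now));
             fprintf(fp, "%s %s ", stamp, dir);
             for (size_t i = 0; i < len; i++)
                 fprintf(fp, (buf[i] >= 32 && buf[i] < 127) ? "%c" : "<%02X>",
                         buf[i]);
             fputc('\n', fp);
             fclose(fp);
         }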
  3. Well, that doesn't say whether each of those is guaranteed to produce the LabVIEW canonical NaN pattern, which is 0xFFFFFFFF. IEEE only specifies that any number with an exponent where all bits are set and a mantissa with at least one bit set equals NaN. The sign bit is irrelevant for NaN. All exponent bits set with a mantissa of 0 means Infinity, and there the sign bit of course has a meaning. So any number with the bit pattern s111 1111 1qxx xxxx xxxx xxxx xxxx xxxx (for a 32-bit float) and at least one bit of q or x set is a NaN. The s bit is the sign bit, which is not relevant for NaN, and the q bit is the quiet bit, which indicates whether the NaN will cause an exception or not (for most processors a set bit means a silent or quiet NaN).
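     In C terms, the classification described above boils down to a few mask operations on the raw bits (a sketch for 32-bit floats):

         #include <stdint.h>
         #include <string.h>

         static const char *classify(float f)
         {
             uint32_t bits, exp, mant;
             memcpy(&bits, &f, sizeof bits);
             exp  = (bits >> 23) & 0xFF;    /* 8 exponent bits          */
             mant = bits & 0x007FFFFF;      /* 23 mantissa bits (q + x) */
             if (exp != 0xFF) return "regular number";
             if (mant == 0)   return (bits & 0x80000000) ? "-Inf" : "+Inf";
             return (mant & 0x00400000) ? "quiet NaN" : "signaling NaN";
         }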
  4. Actually this is not going to fly well. While LabVIEW does use a so-called canonical bit pattern to indicate NaN, IEEE defines NaN in such a way that an entire range of bit patterns results in a NaN value, and LabVIEW correctly recognizes all of them when looking at a floating point numeric. So depending on where your NaN comes from, you may get NaN each time but not the same bit pattern.
  5. Well, I'm pretty sure they use the libpng kernel for the PNG support in both cases. However, as you have certainly found out, the interface to libpng is not simple, and the possible formats are not trivial to map to a pixel based image format like the one NI uses. In fact some features of PNG are impossible to preserve in a single 2D pixmap with an optional alpha channel. Supporting memory streams requires quite a bit more work too: libpng can access files directly on its own but needs extra plumbing to operate on memory streams instead. So it's very easy to take a few shortcuts in one case and end up with less support for the more exotic PNG formats than when using file based input/output.
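     This is the extra plumbing meant above, sketched with libpng's callback hook png_set_read_fn() (the mem_stream struct and names are illustrative):

         #include <png.h>
         #include <string.h>

         typedef struct { const png_byte *data; size_t size, pos; } mem_stream;

         /* libpng calls this instead of fread() when reading from memory */
         static void mem_read(png_structp png_ptr, png_bytep out, png_size_t len)
         {
             mem_stream *ms = (mem_stream *)png_get_io_ptr(png_ptr);
             if (ms->pos + len > ms->size)
                 png_error(png_ptr, "read past end of in-memory PNG");
             memcpy(out, ms->data + ms->pos, len);
             ms->pos += len;
         }

         /* after png_create_read_struct()/png_create_info_struct():
            mem_stream ms = { buffer, buffer_size, 0 };
            png_set_read_fn(png_ptr, &ms, mem_read);              */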
  6. Todd, if you read those 921600 bps with one VISA Read per received character, it sure enough would have slowed you down. There is definitely overhead with VISA calls, as they go through several software layers of abstraction, so minimizing the number of VISA calls is definitely a good idea. I have to say that 12 ms per VISA Read does sound a little excessive though. I'm sure I have seen less on computers that were a lot less powerful than today's average low-cost computer.
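     The same point in VISA C API terms (a sketch; 'session' is assumed to be an open, configured session): read a whole block per viRead() call instead of one character per call, since the per-call overhead dominates at such rates.

         #include <visa.h>

         ViStatus read_block(ViSession session, unsigned char *buf,
                             ViUInt32 want, ViUInt32 *got)
         {
             *got = 0;
             while (*got < want) {
                 ViUInt32 n = 0;
                 ViStatus st = viRead(session, buf + *got, want - *got, &n);
                 *got += n;
                 if (st < VI_SUCCESS || n == 0)  /* error or nothing read */
                     return st;
             }
             return VI_SUCCESS;
         }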
  7. Considering that Heap Peek is a totally unsupported, officially non-existent feature in LabVIEW, your thinking is clearly VERY wrong. Most people, including many working within NI, don't know about it, and those who do will only admit it with a gun to their head. Besides, as asbo has already hinted at, explaining yourself is always a good idea if you hope to get help from others. These fora are based almost exclusively on voluntary effort, and if I have to start guessing what someone might mean, I and most others are very quick to dismiss the post and move on to something more interesting. There are simply too many questions in the world to spend time on every one.
  8. Personally I would prefer this setting to also be available in the Build Specification properties (overriding a default setting in the Project properties) rather than being able to access it on the diagram in any way. Currently I have to make different project files (or use the Pre/Post Build VIs to do some undocumented magic to change those symbols) if I want different builds depending on such a symbol. This shouldn't be necessary at all if the conditional symbols idea were properly implemented for the project instead of added as a sort of afterthought. I do understand that this can get messy to implement, with some settings inheriting from other places and overriding them in a clear and consistent manner, but that is what I expect from a modern project management tool. And I have refused to use some Microsoft Visual Studio versions because they had weird inheritance problems with project settings.
  9. While I have some basic information about this for earlier LabVIEW versions (v5), I'm hesitant to publish it since there are certainly differences between LabVIEW versions. I don't have the details, but I remember having to abandon some code in newer versions that relied on these bits having certain meanings.
  10. I don't have a lot of experience with the DAQmx networked devices, but what I found was that my LabVIEW application could sometimes access a CompactDAQ chassis, sometimes only after I had accessed the device in MAX, and sometimes not at all. This was not an option for an application that needed to start up and access the hardware with no user interaction at all when the computer boots. Adding a little code that checks whether the DAQmx device is a networked device and, in that case, calls DAQmx Reserve Network Device.vi before initializing the DAQmx task has solved that completely: my application now always starts up and accesses the hardware without having to go into MAX at all. I'm pretty sure this function call also does a device reset internally, which is no problem for me and seems to be what you want too.
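      The equivalent fix in the DAQmx C API looks roughly like this (device and channel names are placeholders; passing 1 overrides a stale reservation from another host):

          #include <NIDAQmx.h>

          int32 init_networked_daq(TaskHandle *task)
          {
              /* Reserve the networked chassis first, so no trip into MAX
                 is needed; this also resets the device. */
              int32 err = DAQmxReserveNetworkDevice("cDAQ9188-1", 1);
              if (err < 0) return err;
              err = DAQmxCreateTask("", task);
              if (err < 0) return err;
              return DAQmxCreateAIVoltageChan(*task, "cDAQ9188-1Mod1/ai0", "",
                                              DAQmx_Val_Cfg_Default, -10.0, 10.0,
                                              DAQmx_Val_Volts, NULL);
          }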
  11. Do you call the DAQmx Reserve Network Device.vi in your LabVIEW code?
  12. No, VLC could not be aware of anything beyond the HWND boundaries at all, since everything inside a VI front panel is LabVIEW specific and not visible to anything that only knows about Windows internals. It would definitely be a case of a VI panel being used for nothing but an HWND container, and there is no sensible way of sharing that with any LabVIEW controls whatsoever in the same window. It is even worse than that, as VLC cannot know if you overlay that panel with something else in LabVIEW. On the other hand VLC does not have to worry about blitting only within the window boundaries as long as it uses GDI or friends for it: Windows will make sure that any blitting only occurs within the window boundary, no matter what. That changes of course if VLC did direct blitting into graphics frame buffers, but as far as I know that is strongly discouraged by Windows and actually made very hard, since it requires hardware specific code able to deal with the various graphics card drivers.
  13. Rendering is obviously an issue, but ActiveX is not the solution here, as it is at best a "bricolage" as the French say. The simplest way would involve copying the data into a LabVIEW indicator like the Picture Control. Not ideal in terms of performance, but at least doable on any LabVIEW platform without issues about overlapping and such. I would love to be able to create my own control like the IMAQ control, but alas that is not an option for us mere mortals, as it requires too much NI-internal knowledge about LabVIEW. The second best approach would be to use a VI panel as a subpanel and reserve the entire VI as render surface: extract the platform graphics port (HWND on Windows) and pass it to VLC to let it render its output directly into it. There will be issues with other elements overlapping that window, but ActiveX has those too, and the only way to get that right is to hook into the internal LabVIEW object messaging, which is not an option without serious access to at least some parts of the LabVIEW internals.
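      The HWND route sketched against the libvlc 3.x C API (error handling trimmed; getting the panel's HWND out of LabVIEW is assumed to have happened already):

          #include <vlc/vlc.h>

          int play_into_panel(void *panel_hwnd, const char *media_path)
          {
              libvlc_instance_t *vlc = libvlc_new(0, NULL);
              if (!vlc) return -1;
              libvlc_media_t *media = libvlc_media_new_path(vlc, media_path);
              libvlc_media_player_t *player =
                  libvlc_media_player_new_from_media(media);
              libvlc_media_release(media);   /* the player keeps its own ref */
              /* render directly into the VI panel's window */
              libvlc_media_player_set_hwnd(player, panel_hwnd);
              return libvlc_media_player_play(player);
          }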
  14. Writing ActiveX controls when there are other solutions is only for real masochists. Go the DLL path: it gives you a lot more control in the debugging process, avoids an entire level of obscure Windows 3.1-imposed limitations and a few more levels of intermediate bullshit, and, last but not least, if done with a little planning in mind you can port it to every single platform that LabVIEW and VLC support without too much effort.
  15. The meaning of this bitset is private to LabVIEW and undocumented. As such it is also highly susceptible to changes and completely incompatible reinterpretations between LabVIEW versions. Some of the more trivial modifications, like front panel or diagram changes, can be deduced quite easily, but there are many obscure modifications in LabVIEW that get recorded in this bitset which nobody without access to the LabVIEW source code could ever guess. Try looking at the VI Properties->Current Changes dialog and see what it mentions there. You might be able to deduce some of these bit flags from that, but quite a few bit flags are lumped together into more common modification reasons.
  16. Well, the approach chosen in that paper is rather cumbersome. It means you have to create a CIN or DLL for every operation you want to perform, with an external filename compiled into this external code. Yes, it gives the flexibility to change the actual Ch code by changing the external file's contents, but it is highly coupled in a logical way and highly decoupled in a process way; in fact the total opposite of what you would want from a scripting solution. Using an interpreted C environment in a way that requires writing a VI that interfaces to compiled C code for every individual problem is quite an involved approach. That may work if you have one specific problem to solve whose algorithm needs fine tuning in the field, but it doesn't feel like a general purpose approach to integrating scripting into LabVIEW, or into any programming environment, actually.
  17. Well, a circuit always needs some reference somehow. For isolated modules like this it means that either your sensor or the analog input needs a reference to some form of GND. This can be the GND of the power supply that powers your cRIO. Without a ground reference the entire circuit is floating, and static charges can eventually drive it to a level that exceeds the common mode range of the input amplifier, resulting in saturation and corresponding misbehaviour of the amplifier. The advantage of isolated amplifiers is that they can measure signals referenced to a different potential than the one the measurement system uses (of course the difference between the references needs to stay below the isolation voltage or you get a breakdown). In case the measurement signal is not referenced in any way, you need to provide a reference yourself. This may seem like buying an expensive isolation amplifier and then undoing that isolation with the common reference, but the isolation amplifier still protects your measurement device (and the operator sitting at the computer connected to it) from high voltage surges such as those caused by lightning, provided the ground reference is strong enough to dissipate those surges.
  18. I guess it could be integrated into LabVIEW in a similar way to how Lua is through LuaVIEW. But as the current maintainer of LuaVIEW I don't see much merit in fragmenting my time even more with yet another scripting interface for LabVIEW. The wiki page certainly looks problematic, and the license doesn't make it the first choice for a scripting interface either.
  19. Why in the world do you want to use Windows messages to communicate between two VIs? This has only disadvantages, from being a platform specific solution, to making everything rather complicated, to being of low performance!
  20. Basically your Windows desktop is simply an application that Windows starts up after the user logs in (or is logged in automatically). You can change the registry entry for this to any program you like, including a LabVIEW app. This needs some careful planning ahead, because once you do it, you can mostly only do things in Windows that your application provides an interface for. So if you don't plan some way to start, for instance, the file manager, you may have locked yourself out of that account pretty effectively. One exception is Ctrl-Alt-Del, which still works, but with some Windows API magic that is quite easily remedied too. A computer tied down like this is pretty hard to get into other than through your shell replacement app, but again, watch out: that applies to you too, not just the operator noob.
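      The registry entry in question is the Winlogon "Shell" value; a hedged C sketch for the per-user variant (the application path is a placeholder, and you really want an escape hatch in place before trying this):

          #include <windows.h>
          #include <string.h>

          LONG install_shell(const char *app_path)
          {
              HKEY key;
              LONG rc = RegOpenKeyExA(HKEY_CURRENT_USER,
                  "Software\\Microsoft\\Windows NT\\CurrentVersion\\Winlogon",
                  0, KEY_SET_VALUE, &key);
              if (rc != ERROR_SUCCESS) return rc;
              /* replace Explorer as the shell for this account only */
              rc = RegSetValueExA(key, "Shell", 0, REG_SZ,
                                  (const BYTE *)app_path,
                                  (DWORD)strlen(app_path) + 1);
              RegCloseKey(key);
              return rc;
          }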
  21. It's the low level device name notation modern Windows systems use before any disk drives are mapped on top of it, so I doubt it is something LabVIEW does explicitly on its own. Somehow, somewhere, Windows seems to have mount points or the like that point to a hard-drive partition that does not currently (or no longer) exist on your system, and maybe LabVIEW tries to enumerate those disk drives or mount points at some point and triggers this error message. Try a registry search for the HD volume name; maybe it exists in there and may even point you further toward what could have caused this. Have you at some point used removable hard disks with NTFS formatting?
  22. Thanks for the clarification. In that case it seems like a good solution unless the directory rights silently get modified too. I should test that but am currently tied up with quite a bit of other things.
  23. Incidentally, these are exactly the error codes I mentioned in my reply that you can't just always honor or ignore; it depends on the situation in which they occur what you should do about them. Timeout usually means there simply hasn't been data, and you should retry, after a reasonable amount of time, whatever you tried to do that gave you the timeout. This can be a Read, a Connect or a Write operation. You should build some retry limit into it, as it usually makes little sense to retry endlessly: if after several minutes there is still no peer to connect to, there might be a bigger problem, like a disconnected network cable. Peer disconnected is another error that can happen because of network failure, and one you can handle totally transparently by closing your network connection and attempting to reconnect.

      Last but not least, you should of course consider the timeouts you use for the various functions. When you do a connect with a 120 second timeout, the connect will wait that long, likely preventing your application from quitting when you hit the quit button, until it gets the requested operation (a connection for a connect, or data for a read), encounters an error, or the timeout expires. This is probably the reason you believe you can only Ctrl-Alt-Del your application: the network functions simply sit in a timeout waiting for something.

      One thing that usually works to terminate most network functions in LabVIEW is to actually Close the network refnum they operate on. This is not really good programming for normal network refnums and might sometimes fail, but it is the perfect way to terminate the listener loop if you have a TCP Listener somewhere in your program. So wherever you handle your application close request, get a handle on those network refnums and close them; that "should" make any network operation waiting on such a refnum return with an error. Better would be to make your network communication use much smaller timeouts and handle the close request itself, by polling a global close state controlled by your application close handler, or, if you start doing real software design some day, by using a producer consumer framework throughout your application to handle those 10 loops correctly and in a fully controlled manner. A socket-level sketch of the small-timeout pattern follows below.
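      The small-timeout pattern, sketched with POSIX sockets (quit_requested stands in for whatever state your application close handler sets): wait in short slices so a quit request is honored within half a second instead of after a two minute timeout.

          #include <stdbool.h>
          #include <sys/select.h>
          #include <unistd.h>

          extern volatile bool quit_requested;  /* set by the close handler */

          int read_with_quit(int sock, char *buf, int len)
          {
              while (!quit_requested) {
                  fd_set rd;
                  struct timeval tv = { 0, 500000 };  /* 500 ms slice */
                  FD_ZERO(&rd);
                  FD_SET(sock, &rd);
                  int r = select(sock + 1, &rd, NULL, NULL, &tv);
                  if (r < 0)  return -1;              /* real error          */
                  if (r == 0) continue;               /* timeout: poll again */
                  return (int)read(sock, buf, len);   /* data or disconnect  */
              }
              return 0;  /* quit requested */
          }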
  24. What messages? What does your code look like? Is automatic error handling the only error handling you do in your code? Network communication is not something you can simply expect to always work. Your code needs to be able to handle all kinds of possible errors, such as timeouts, the peer disconnecting, etc., in order to operate reliably. Depending on the error and your application, you can sometimes ignore it (read timeouts, for instance), or you should close the connection and attempt to reconnect (client) or wait for another connection attempt (server).
  25. I haven't checked out how they did the silver controls, but it being graphics and all, I have a suspicion that it is either a video driver or maybe even specifically a GPU issue. Can you see if the different revision motherboards use a different chipset, or at least a different revision thereof? What about the video driver?