Leaderboard

Popular Content

Showing content with the highest reputation on 08/25/2009 in all areas

  1. Option 1: Create a queue, a notifier, and a rendezvous. The Sender Loop enqueues into the queue. All the receiver loops wait at the rendezvous. Receiver Loop Alpha is special: it dequeues from the queue and sends the message to the notifier, while all the other Receiver Loops wait on the notifier. Every receiver loop does its thing and then goes back around to waiting on the rendezvous.

     Option 2: Create N + 1 queues, where N is the number of receivers you want. The Sender enqueues into Queue Alpha. Receiver Loop Alpha dequeues from Queue Alpha and then enqueues into ALL of the other queues. The other receiver loops dequeue from their respective queues.

     Option 1 gives you synchronous processing of the messages (all receivers finish with the first message before any receiver starts on the second message). Option 2 gives you asynchronous processing (every loop gets through its messages as fast as it can, without regard to how far the other loops have gotten in their list of messages).
    2 points
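The original describes LabVIEW queue/notifier primitives; as a rough cross-language illustration, Option 2's fan-out can be sketched in Python with `queue.Queue` (the queue names, the `fan_out` helper, and the `None` shutdown sentinel are my own choices, not part of the original):

```python
import queue
import threading

def fan_out(alpha_q, receiver_qs):
    """Receiver Loop Alpha: dequeue from the sender's queue and
    re-enqueue each message into every other receiver's queue."""
    while True:
        msg = alpha_q.get()
        for q in receiver_qs:
            q.put(msg)
        if msg is None:  # sentinel: shut everything down
            break

# Demo: one sender, Receiver Alpha, and two additional receivers.
alpha_q = queue.Queue()
recv_qs = [queue.Queue(), queue.Queue()]
alpha = threading.Thread(target=fan_out, args=(alpha_q, recv_qs))
alpha.start()

for msg in ["first", "second", None]:  # the sender loop
    alpha_q.put(msg)
alpha.join()

# Each receiver drains its own queue at its own pace (asynchronous):
results = []
for q in recv_qs:
    items = []
    while not q.empty():
        items.append(q.get())
    results.append(items)
print(results)  # [['first', 'second', None], ['first', 'second', None]]
```

Because every receiver owns a private queue, a slow receiver never blocks the others, which is exactly the asynchronous behavior the post attributes to Option 2.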
  2. Play around with this. I found it on The Dark Side.
    1 point
  3. If you can install them in LabVIEW 64-bit and can open the VIs without getting a broken arrow, then yes, they have obviously been recompiled for 64-bit Windows. The LabVIEW Call Library Node can only access DLLs that are specifically compiled for the platform LabVIEW itself is running on. That is not a 100% guarantee that there aren't still 32-bit limits somewhere in the IMAQ Vision software, but the DLL itself is certainly 64-bit code. Rolf Kalbermatter
    1 point
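One way to check a DLL's target platform without trying to load it is to read the machine field of its PE/COFF header; a minimal sketch in Python (the `dll_machine` helper name is mine):

```python
import struct

# Machine values from the PE/COFF specification
IMAGE_FILE_MACHINE_I386  = 0x014C  # 32-bit x86
IMAGE_FILE_MACHINE_AMD64 = 0x8664  # 64-bit x64

def dll_machine(data: bytes) -> str:
    """Return the target architecture of a Windows DLL/EXE image.

    `data` is the raw file contents. This inspects the COFF machine
    field rather than loading the DLL, so it works from any process.
    """
    if data[:2] != b"MZ":
        raise ValueError("not a DOS/PE image")
    (pe_offset,) = struct.unpack_from("<I", data, 0x3C)
    if data[pe_offset:pe_offset + 4] != b"PE\0\0":
        raise ValueError("missing PE signature")
    (machine,) = struct.unpack_from("<H", data, pe_offset + 4)
    return {IMAGE_FILE_MACHINE_I386: "x86 (32-bit)",
            IMAGE_FILE_MACHINE_AMD64: "x64 (64-bit)"}.get(machine, hex(machine))
```

Usage would be `dll_machine(open("some.dll", "rb").read())`; a 64-bit LabVIEW can only call DLLs that report x64 here.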
  4. LabVIEW's Typecast is more complex than that. It is in essence a typecast like what you see in C, but with the extra twist of byte-swapping any multi-byte integer into Big Endian format on the byte-stream side. I think the problem here is that Unflatten does other things, like checking that the input string length is valid. The implementation of Unflatten is certainly a lot more complex, since it has to work with any data type, including highly complicated variable-sized types such as clusters containing variable-sized data, containing ...... Typecast, on the other hand, only works on flat data, which excludes any form of cluster containing variable-sized data.

     Possibly Flatten/Unflatten could be improved, since little-endian conversion on a little-endian machine should certainly not take longer than Typecast plus an additional byte swap, but the priority for such a performance boost might be rather low, since it would certainly make the implementation of Flatten/Unflatten even more complex and hence more prone to bugs. But thanks for showing me that the good old Typecast/Swapping still seems to be a better way than using Flatten/Unflatten with the desired endian setting.

     The reason for this is that LabVIEW originates from the Mac with its 68000 CPU, which was always a big-endian CPU. While the later PPCs in the PPC Macs had the option to use either big or little endian as the preferred format, Apple chose to use the same big-endian format that came from the 68k. When NI ported LabVIEW to Windows (and later to other architectures like Sparc and PA-RISC), they had to tackle a problem: to send binary data to a GPIB device or over the network, one had always used the Typecast or Flatten operator to convert it into a binary string, and it would have been very nice if data sent over the network, or written into a binary file by a LabVIEW program on the Mac, could be easily read by a LabVIEW program on Windows. This required the same byte order for flattened data, so the flattened format was specified to be always big endian, independent of the platform LabVIEW is running on.

     A C typecast would be difficult to do in LabVIEW. Trying to do it with a small piece of external code could be an option, but it is quite tricky: it's not enough to simply swap the handles; you also need to adjust the array length in the handle accordingly, so a different function would be required for each integer size. Rolf Kalbermatter
    1 point
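The always-big-endian flattening rule described above can be illustrated with Python's `struct` module (this is an analogy for the behavior, not LabVIEW's actual implementation):

```python
import struct

# LabVIEW's Flatten To String always emits big-endian ("network
# order") data regardless of the host CPU's byte order.  In struct
# notation, ">" forces big endian; "=" uses the host's native order,
# which is what a raw C-style typecast of the memory would produce.
value = 0x01020304

flattened = struct.pack(">I", value)  # platform-independent result
native    = struct.pack("=I", value)  # depends on the host CPU

print(flattened.hex())  # 01020304 on every platform
print(native.hex())     # 04030201 on little-endian hosts (e.g. x86)
```

This is why a file flattened on a big-endian 68k Mac could be unflattened unchanged on a little-endian Windows machine: only the native representation varies, never the flattened one.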
  5. I don't know if this suits your needs, but you could try out Scilab. It has some sort of LabVIEW interface (similar to script nodes) and it's a lot cheaper too!
    1 point
  6. (raises hand) Pick me! I know this one. As luck would have it, I already made a clock program when I was first learning how to use LabVIEW and its draw features. Here's a clock that works with the OS time or a user-specified time. http://brian-hoover....k%20Program.zip I do have a disclaimer: I made this code several years ago, and while it works, there are several coding advancements that I did not know about then and did not use. For one, I would now have created a state machine and an event structure instead of polling every 200 ms. In any case, hope this works for you; here's a screenshot. EDIT: I'm sorry, I didn't read the post closely enough. What I provided is not what you are looking for, but it may still be useful.
    1 point
  7. Lovely event nugget, Dark side
    1 point
  8. This issue is now resolved. Everyone can report posts now. Admin edit: this post was reported successfully by Ton Plomp with the following text: "This is a test report"
    1 point