Posts posted by asbo

  1. More fundamentally, you might want to add some code to profile intensive sections of the software. You don't include any information about its specific function, so I can't recommend anything to you off-hand. You should have intimate knowledge of what your process will be doing at the times when you're grabbing these figures, so analyzing those routines should tell you what some potential sources might be.

    I wouldn't really expect the change in CPU usage you're seeing between versions, but it could be that some nodes/VIs you depend on have changed, so reviewing your code (especially when you've jumped 3.5 major versions) would be prudent.

    One thing to do would be to switch back to source code and use the built-in profiler to see which VIs are racking up the most clock time (Dan already suggested this).
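
    If it helps to see the same idea in a text language, here's a rough sketch of the equivalent profiling pass using Python's cProfile; acquire_samples is just a made-up stand-in for whatever your process is doing while you grab those figures:

        import cProfile
        import pstats

        def acquire_samples():
            # Stand-in for the routine that's busy while the CPU usage spikes.
            return sum(i * i for i in range(1_000_000))

        profiler = cProfile.Profile()
        profiler.enable()
        acquire_samples()
        profiler.disable()

        # Sort by cumulative time to see which calls rack up the most clock time,
        # which is essentially what the VI profiler's table shows you.
        pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)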

  2. As long as you use exactly the same name to obtain the notifier, it's the same notifier regardless of which VI obtains it. Unless you pass the notifier refnum around, though, there's no way to obtain it in only one place. Every time you obtain the notifier, you should also release it; LabVIEW maintains an internal count of how many times it has been obtained/released, and only when that count reaches zero can those resources be freed. (There's a rough sketch of that bookkeeping below.)

    • Like 1
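
    The bookkeeping behind that obtain/release count is conceptually something like this. It's plain Python of my own, not how LabVIEW actually implements notifiers, and the names are made up:

        # name -> [notifier_object, reference_count]
        _notifiers = {}

        def obtain_notifier(name):
            """The same name returns the same underlying notifier; each call bumps the count."""
            entry = _notifiers.setdefault(name, [object(), 0])
            entry[1] += 1
            return entry[0]

        def release_notifier(name):
            """Every obtain needs a matching release; only at zero can the resources be freed."""
            entry = _notifiers[name]
            entry[1] -= 1
            if entry[1] == 0:
                del _notifiers[name]
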
  3. We've discussed that if the password protection becomes insufficient generally, we might change to shipping these as built DLLs, so the VIs won't even exist on disk. That may be the better thing to do so there isn't "just a password" standing between users and the diagrams.

    Whether it's worth the effort to change over, I can't say, but it would certainly be a more apt solution. I can imagine how frustrating it is to get support requests like that... it'd be like me swapping some hoses around in the engine bay, bringing the car back to the dealership, and saying, "Hey, your thing broke."

    • Like 1
  4. b) there's really nothing inside other than a call library node and locking such trivial diagrams actually makes things easier to work with.

    Can you expand on this? I'm not making the connection as to why it would be easier.

    For me, it's never about not trusting code I can't read - all of us do that all the time; it's practically unavoidable. It's more about knowing that there's something I could read and there's just one password between me and it.

  5. The problem with comparing your two memory scenarios is that we really have no idea how LabVIEW handles each one. It's easy to theorize about what it should be doing and convince ourselves of what it's probably doing, but that's not always what actually happens. Usually, it does make more sense to just test it out and see what happens.

    One consideration is that the memory might be held by whatever library is used for TDMS. I assume you're flushing/closing your file correctly, but the reason the memory persists after closing that specific VI is that it isn't held by a LabVIEW data structure, but by one from the TDMS library.

    As for your attachment, I've had very good luck in the past compressing TDMS files, but I don't remember whether this board allows files with a .zip extension. It might be more useful to see your test case anyway.
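
    By compressing I just mean zipping the file before you attach it; something along these lines, using Python's zipfile (the file names are obviously placeholders):

        import zipfile

        # TDMS files tend to contain a lot of repetitive binary data,
        # so DEFLATE usually shrinks them considerably.
        with zipfile.ZipFile("test_case.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
            zf.write("test_case.tdms")
            zf.write("test_case.tdms_index")  # include the index file too, if you have one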

  6. Remember that LV does its own memory management under the hood. In general, it is lazy about freeing large chunks of memory it has already allocated from the system. Unless the subVIs run simultaneously, I would expect the memory block to be re-used amongst them, unless there is a significant amount of data being passed out which would justify keeping the allocation. In that case, it just means you don't have enough memory for what you're trying to do (or, alternatively, that there may be a more efficient way to do it). There is a "Request Deallocation" node you can try, but I don't tend to put much stock in it. Said another way, I trust LV to handle its own memory.

    Based on your phrasing, though, this seems like a premature optimization - write a test case you think might cause out-of-memory issues and see if it actually happens.

  7. You are probably still the poster boy, since we had to use another buddy in order to find that the first buddy made a mistake. I'm just curious what your code originally looked like in order to get the results you did.

    It's in the opposite case of the diagram disable structure, so:

    post-13461-0-81584200-1339022129.png

    The answer is that I wired the "value" input's "Not a Number?" node to the NOR instead of the AND. Alas...

  8. What kind of host would you be using for this kind of module; cRIO, EtherCAT (NI-9144), Ethernet (NI-9148) or??? The reason I ask is to understand what kind of communication a Switch driver would have to use.

    In the past, we have been known to use cDAQ chassis of both the USB and Ethernet variety, and I expect that's how we would use these modules. Are you primarily targeting the cRIO platform?

  9. So any number resulting in bit pattern s111 1111 1qxx xxxx xxxx xxxx xxxx xxxx (for 32 bit float) and with at least one bit of x set can be NaN. The s bit is the sign bit which for NaN is not relevant and the q bit is the quiet bit, which should indicate if the NaN will cause an exception or not (for most processors a set bit will be a silent or quiet NaN).

    Just as trivia, LabVIEW does not differentiate quiet NaNs from signaling ones - I remember finding this out after parsing a dataset which did require the distinction.
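
    If you want to poke at the bit pattern yourself, here's a small Python sketch of my own that pulls apart a single-precision NaN along the lines of the pattern above (nothing LabVIEW-specific about it):

        import struct

        def f32_bits(x):
            """Return the raw 32-bit pattern of x stored as a single-precision float."""
            (u,) = struct.unpack(">I", struct.pack(">f", x))
            return u

        u = f32_bits(float("nan"))
        sign     = u >> 31             # the s bit; irrelevant for NaN
        exponent = (u >> 23) & 0xFF    # all ones (0xFF) for NaN and infinity
        quiet    = (u >> 22) & 0x1     # the q bit
        payload  = u & 0x3FFFFF        # the x bits
        print(f"{u:032b}  sign={sign} exp={exponent:#04x} quiet={quiet} payload={payload:#x}")

        # It's a NaN when the exponent field is all ones and quiet/payload aren't both zero;
        # quiet set with a zero payload is the usual "default" quiet NaN.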

  10. Oops, forgot the picture.

    post-107-0-12691300-1337889446_thumb.png

    This is for a virtual com port, though. It's in the settings because of that particular VCP's DLL.

    Dinking was definitely the idea.

    Ahh, I recognize that panel. I have an FTDI RS232-3V3 adapter and I was a little slack-jawed at how much stuff you can tweak, especially compared to my built-in one.

  11. What API calls do you think are relevant? Rather than messing with that, is it possible to put your devices on separate buses and use individual COM ports? That would give you some parallelism which may combat the constant time delay.

    The unfortunate fact is that even if you could bump up to 150k baud or higher, you're still only chipping away at less than 20% of your overall iteration duration (rough numbers below).
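
    Ballpark math, taking the ~4-5 ms of wire time and the ~30 ms iteration from the posts below as given; the 38400 baud in the sketch is my guess at your current rate, not something you've stated:

        # Approximate figures from the thread; treat these as ballpark only.
        wire_time_ms = 4.5                        # time the 17 bytes spend on the wire now
        iteration_ms = 30.0                       # observed time per iteration
        fast_wire_ms = 4.5 * 38_400 / 150_000     # the same bytes at 150k baud, assuming ~38400 now

        print(wire_time_ms / iteration_ms)                   # ~0.15 -> the transfer is only ~15% of each loop
        print((wire_time_ms - fast_wire_ms) / iteration_ms)  # ~0.11 -> best case, ~11% shorter iterations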

  12. Interesting idea. So I've set things up with a fixed outbound packet size of 13 bytes and a reply of 4 bytes. I can add a delay of up to 5 ms between send/receive primitives without noticing any change in the rate at which data frames arrive. This makes sense since it should take ~4-5 ms to move the 17 bytes. Delays longer than 5 ms add linearly to the observed time between data frames. To me this means my device is responding very quickly.

    Cool, that's kind of what I was expecting to see (though the 30 ms I mentioned obviously includes the write duration as well, oops). The obvious approach is to increase the devices' baud rate, but if it were that easy, I suspect you wouldn't be posting. ;)

    To follow up on what Todd said, some (most? all? few?) COM ports have hardware/buffer settings you can dink with. Check out the attached; maybe your COM port is maladjusted. (The byte-time arithmetic behind that ~4-5 ms figure is sketched below the attachment.)

    post-13461-0-34646500-1337884806_thumb.p
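
    For that ~4-5 ms figure, the arithmetic is just bits over baud. A quick sanity check, assuming 8N1 framing (10 bits per byte on the wire) and a 38400-ish baud rate, which is my guess rather than something stated in the thread:

        baud = 38_400
        bits_per_byte = 10                  # 1 start + 8 data + 1 stop (8N1)
        transaction_bytes = 13 + 4          # command out + reply back

        seconds = transaction_bytes * bits_per_byte / baud
        print(f"{seconds * 1e3:.1f} ms")    # ~4.4 ms, which lines up with what you measured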

  13. If you add a 30 ms delay between the write and the read and then time across the VISA read, you should be able to measure the latency of the read call itself, as all the data will already be waiting at the port.

    Alternatively, use the read node in a loop, reading only one byte per iteration. This should allow you to see how long it takes to read each byte.
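
    In a text language, the per-byte version might look like this; I'm using pyserial here as a stand-in for the VISA calls, and the port name, baud rate, command bytes, and reply length are all placeholders:

        import time
        import serial  # pyserial

        ser = serial.Serial("COM3", 38_400, timeout=1)   # placeholder port and baud rate
        ser.write(b"\x01" * 13)                          # placeholder 13-byte command

        timings = []
        for _ in range(4):                               # expecting a 4-byte reply
            t0 = time.perf_counter()
            byte = ser.read(1)                           # one byte per call, so each timing is per byte
            timings.append((byte, time.perf_counter() - t0))
        ser.close()

        for byte, dt in timings:
            print(byte, f"{dt * 1e3:.3f} ms")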
