Leaderboard

Popular Content

Showing content with the highest reputation on 01/06/2015 in all areas

  1. Hi guys, check out what we are doing with LabVIEW and Arduinos: http://www.tsxperts.com/arduino-compiler-for-labview/ We created an actual LabVIEW compiler for Arduino targets that allows Arduinos to be programmed in LabVIEW. Figured I'd share this with the community. Cheers, Filipe
    1 point
  2. I'm not sure I fully understand your setup, but if you have a .NET assembly compiled as a DLL, you should be able to call it using the normal LabVIEW .NET interface on the Connectivity palette. Check out https://decibel.ni.com/content/docs/DOC-9138 for a quick example.
    1 point
  3. It is. Each object is basically a pointer, either to the class default data or to the data of the specific instance (it's actually more than one pointer, but that's not really relevant), so the contiguous space required for the array should be N*pointer size. This information is documented in the LVOOP white paper.
    1 point
  4. I expect arrays of objects have a level of indirection. By this I mean I don't think contiguous address space is required to store each element, because the type (and size) of each element can't be pre-determined and can change. You would require contiguous space for the array of pointers/handles/references or however LabVIEW handles its indirection, but that should be trivial. So if you have enough memory to store those objects by themselves at the same time, I don't expect their being in an array to be any different (see the C sketch after these posts for the general idea). Also note that I said "expect": I don't know for certain that this is how LabVIEW operates, but I can't see it being done any other way with dynamic types.
    1 point
  5. Pardon the book, but let me try to clarify some concepts here.

     The question of how much memory was free on the machine running the test is irrelevant. All desktop operating systems use virtual memory, so each process can allocate up to its address-space limit regardless of the amount of physical RAM in the machine. The amount of physical RAM only affects the speed at which processes can allocate that memory. If RAM is available, the allocation happens fast. If RAM is not available, then some part of the RAM contents must be written to disk so that the RAM can be used for the new allocation. Since the disk is much slower than RAM, that makes the allocation take longer. The key is that this only affects speed, not how much allocation is required to hit the out-of-memory error.

     Just because the task manager still says LabVIEW is using a bunch of memory doesn't mean that LabVIEW didn't free your data when your VI stopped running. LabVIEW uses a suballocator for a lot of its memory. This means we allocate large blocks from the operating system, then hand those out in our code as smaller blocks. The tracking of those smaller blocks is not visible to the operating system. Even if we know that all those small blocks are free and available for reuse, the operating system still reports a number based on the large allocations. This is why, even though the task manager memory usage is high after the first run of the VI, the second run can still run about the same number of iterations without the task manager memory usage changing much. (A toy C sketch of this suballocator idea appears after these posts.)

     Since the amount of memory LabVIEW can allocate is based on its address space (not physical memory), why can't it always allocate up to the 4 GB address space of a 32-bit pointer? This is because Windows puts further limitations on the address space. Normally Windows keeps the top half of the address space for itself. This is partially to increase compatibility, because a lot of applications treat pointers as signed integers and the integer being negative causes problems. In addition, the EXE and any loaded DLLs use space in the address space. For LabVIEW this typically means that about 1.7 GB is all the address space we can hope to use. If a special boot option is turned on in Windows and the application sets a flag saying it can handle it (the /3GB option and the large-address-aware flag), Windows allows the process 3 GB of address space instead of only 2, so you can go a little higher. Running one of these applications on 64-bit Windows allows closer to the entire 4 GB address space, because Windows puts itself above that address. And then of course running 64-bit LabVIEW on a 64-bit OS gives way more address space. That is the scenario where physical RAM becomes a factor again, because the address space is so much larger than physical RAM that performance becomes the limiting factor rather than actually running out of memory.

     The last concept I'll mention is fragmentation. This relates to the issue of contiguous memory. You may have a lot of free address space, but if it is in a bunch of small pieces, then you are not going to be able to make any large allocations. The sample you showed is pretty much a worst case for fragmentation. As the queue gets more and more elements, we keep allocating larger and larger buffers, but between each of these allocations you are allocating a bunch of small arrays. This means that the address space used for the smaller queue buffers is mixed with the array allocations, and there aren't contiguous regions large enough to allocate the larger buffers. Also keep in mind that each time this happens we have to allocate the larger buffer while still holding the last buffer, so the data can be copied to the new allocation. This means that we run out of gaps in the address space large enough to hold the queue buffer well before we have actually allocated all the address space available to LabVIEW. (A C sketch of this growth-plus-small-allocations pattern also follows after these posts.)

     For your application, what this really means is that if you really expect to be able to let the queue get this big and recover, you need to change something. If you think you should be able to have a 200-million-element backlog and still recover, then you could allocate the queue to 200 million elements from the start. This avoids the dynamically growing allocations, greatly reducing fragmentation, and will almost certainly mean you can handle a bigger backlog. The downside is that this sets a hard limit on your backlog and could have adverse effects on the amount of address space available to other parts of your program. You could switch to 64-bit LabVIEW on 64-bit Windows, which will pretty much eliminate the address-space limits; however, this means that when you get really backed up you may start hitting virtual-memory slowdowns, so it is even harder to catch up. Or you can focus on reducing the reasons that cause you to create these large backlogs in the first place. Is it being caused by some synchronous operation that could be made asynchronous?
    1 point
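
A minimal C sketch of the indirection described in posts 3 and 4. It illustrates only the general array-of-pointers idea and is not LabVIEW's actual object layout; ObjectData, make_object, and the element sizes are made up for illustration. The only contiguous block the array itself needs is N pointer-sized slots, while each element's data is allocated separately and can differ in size.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative stand-in for per-instance object data; sizes vary per element. */
    typedef struct {
        size_t payload_bytes;
        char   payload[];          /* flexible array member */
    } ObjectData;

    static ObjectData *make_object(size_t payload_bytes)
    {
        ObjectData *o = malloc(sizeof *o + payload_bytes);
        if (o) {
            o->payload_bytes = payload_bytes;
            memset(o->payload, 0, payload_bytes);
        }
        return o;
    }

    int main(void)
    {
        size_t n = 1000000;

        /* The only contiguous allocation the array needs: n pointer-sized slots. */
        ObjectData **objects = malloc(n * sizeof *objects);
        if (!objects) return 1;

        /* Each element is allocated separately and can be a different size. */
        for (size_t i = 0; i < n; i++)
            objects[i] = make_object((i % 64) + 1);

        printf("contiguous array block: %zu bytes (%zu elements x %zu-byte pointers)\n",
               n * sizeof *objects, n, sizeof *objects);

        for (size_t i = 0; i < n; i++)
            free(objects[i]);
        free(objects);
        return 0;
    }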
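
A toy C sketch of the suballocator behaviour described in post 5, assuming a simple bump allocator is enough to show the idea (LabVIEW's real memory manager is certainly more elaborate). The process makes one large allocation that the OS sees, hands out small pieces of it internally, and can mark them all reusable again without the OS-visible number ever dropping, which is why the task manager stays high after the VI finishes.

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy suballocator: one big OS allocation, small blocks handed out from it. */
    typedef struct {
        char  *base;
        size_t capacity;
        size_t used;
    } SubAllocator;

    static int sub_init(SubAllocator *s, size_t capacity)
    {
        s->base = malloc(capacity);    /* the only allocation the OS ever sees */
        s->capacity = capacity;
        s->used = 0;
        return s->base != NULL;
    }

    static void *sub_alloc(SubAllocator *s, size_t bytes)
    {
        if (s->used + bytes > s->capacity) return NULL;
        void *p = s->base + s->used;
        s->used += bytes;
        return p;
    }

    /* "Freeing" only resets internal bookkeeping; the big block stays mapped,
       so the OS (and the task manager) still report it as in use. */
    static void sub_reset(SubAllocator *s) { s->used = 0; }

    int main(void)
    {
        SubAllocator s;
        if (!sub_init(&s, 64u * 1024 * 1024)) return 1;

        for (int i = 0; i < 1000; i++)
            sub_alloc(&s, 4096);
        printf("handed out %zu bytes from one %zu-byte OS block\n", s.used, s.capacity);

        sub_reset(&s);    /* all small blocks reusable, OS-visible usage unchanged */
        printf("after reset: %zu bytes in use internally\n", s.used);

        free(s.base);
        return 0;
    }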
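
A C sketch of the fragmentation pattern from post 5, again a simplification rather than a model of LabVIEW's allocator: a buffer that doubles in size while small allocations land between the growth steps (standing in for the queue buffer and the small arrays), versus reserving the full capacity once up front. The interleaved small blocks end up sitting between the retired buffer regions, so eventually no remaining gap is large enough for the next, bigger buffer; pre-allocating avoids the repeated grow-and-copy entirely.

    #include <stdio.h>
    #include <stdlib.h>

    enum { SMALL_PER_STEP = 1024, SMALL_BYTES = 512 };

    /* Grow the buffer geometrically while small blocks (never freed here,
       standing in for the small arrays held in the queue) are allocated in
       between.  Each growth step needs the old and the new buffer at the same
       time while the data is copied, and the interleaved small blocks fragment
       the address space around the retired buffers. */
    static double *grow_dynamically(size_t target_elems)
    {
        size_t cap = 1024;
        double *buf = malloc(cap * sizeof *buf);
        while (buf && cap < target_elems) {
            for (int i = 0; i < SMALL_PER_STEP; i++)
                (void)malloc(SMALL_BYTES);
            cap *= 2;
            double *bigger = realloc(buf, cap * sizeof *buf);
            if (!bigger) { free(buf); return NULL; }
            buf = bigger;
        }
        return buf;
    }

    /* Reserve the full capacity once, up front: no grow-and-copy steps, so the
       small allocations can no longer split the space the big buffer needs. */
    static double *preallocate(size_t target_elems)
    {
        return malloc(target_elems * sizeof(double));
    }

    int main(void)
    {
        size_t target = 10u * 1000 * 1000;   /* 10 million elements */

        double *grown = grow_dynamically(target);
        printf("dynamic growth:  %s\n",
               grown ? "succeeded" : "failed (no gap large enough / out of memory)");

        double *fixed = preallocate(target);
        printf("pre-allocation:  %s\n", fixed ? "succeeded" : "failed");

        free(grown);
        free(fixed);
        return 0;
    }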