Everything posted by szalusky

  1. Unfortunately 64-bit LabVIEW isn't an option in the near term (we have some NI motion dependencies that are not 64-bit compatible). In the future we'd like to break this part of the application out (perhaps into a proxy application, or move to an embedded solution), but in the meantime we are stuck with 32-bit LabVIEW. I've thought about allowing some dynamic creation of images in addition to the pool, so that during times of high load parts of the application are not blocked waiting for resources to be freed elsewhere. In that case, when the reference is "released" back to the manager it would be destroyed rather than returned to the pool. Even so, this should still cut down on the total number of images created and destroyed in the application, since under most circumstances the pooled resources will be re-used. We are working on putting more memory monitoring in place to better understand the circumstances under which the out-of-memory errors occur. So far, memory usage appears more or less stable (with plenty of total memory available) when the errors occur. Worse, once the first error occurs, all subsequent requests to allocate the necessary memory also generate out-of-memory errors; we have to restart the application to fix the problem.
  2. I have a relatively large application that does heavy IMAQ image processing. Many IMAQ buffers are created and destroyed dynamically at runtime based on need (i.e. different system components create their own temporary images for processing steps and destroy them when their work is finished). While this dynamic creation/destruction reduces the overall memory footprint at any given time, the constant re-sizing and re-creation appears to be causing memory fragmentation over time. When it is used heavily, we get IMAQ out-of-memory errors (at different places in the application) after the system has been running for some time. Total memory usage has not increased (and is well below 2 GB), so the errors are presumably due to the lack of large enough contiguous blocks. Are there any best practices or "standard" ways of managing lots of image references? Should they all be created initially? One idea is to allocate a pool of resources on startup (images that will not get resized) that are shared throughout the application, using a mechanism to "reserve" and "release" the resources in the pool. Is this a good approach? Thanks!
  3. Hello Jimmy, I have also experienced the issues you describe. So far the only work-around we have is to add extra elements to the cluster (as you say) to give the data structure a "unique" signature. This is fine in our interop assembly because it is relatively small, but the strategy would quickly become unmanageable for a larger interop assembly, since you would have to check for these kinds of collisions manually. Also, we have found no indication from the build that this data-structure "consolidation" has taken place. Be wary that this can result in especially *fun* behavior if you are unfortunate enough to use a custom data structure that matches the LabVIEW error cluster: we had one case where some data structures disappeared entirely because they were being interpreted as the error cluster.
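The reserve/release pool proposed in post 2 can be sketched in pseudocode. In LabVIEW this would typically be built from IMAQ references held in a queue (or similar blocking construct); the Python below only illustrates the logic, and `create_image` is a hypothetical stand-in for whatever allocates an IMAQ buffer, not an actual IMAQ call:

```python
import queue


class ImagePool:
    """Fixed pool of buffers allocated once at startup.

    Buffers are never resized or destroyed; callers reserve one,
    use it, and release it back for re-use, which avoids the
    constant create/destroy churn that fragments memory.
    """

    def __init__(self, size, create_image):
        self._free = queue.Queue()
        for i in range(size):
            # Allocate every buffer up front, before the system is under load.
            self._free.put(create_image(i))

    def reserve(self, timeout=None):
        # Blocks until another part of the application releases a buffer.
        return self._free.get(timeout=timeout)

    def release(self, image):
        # Return the buffer to the pool for re-use; nothing is destroyed.
        self._free.put(image)
```

A caller would do `img = pool.reserve()`, process into `img`, then `pool.release(img)` when finished, so the same buffers circulate for the life of the application.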
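Post 1 refines the idea with dynamic overflow: when the pool is exhausted, a temporary image is created instead of blocking the caller, and on release that temporary image is destroyed rather than pooled. A sketch of that behavior, with hypothetical `create_image`/`dispose_image` callbacks standing in for the real IMAQ create/dispose operations:

```python
import queue


class OverflowImagePool:
    """Fixed pool plus dynamic overflow under high load.

    Pooled buffers are re-used forever; overflow buffers exist only
    while reserved, so create/destroy churn happens only at peaks.
    """

    def __init__(self, size, create_image, dispose_image):
        self._create = create_image
        self._dispose = dispose_image
        self._pooled = set()          # identity of buffers that belong to the pool
        self._free = queue.Queue()
        for _ in range(size):
            img = create_image()
            self._pooled.add(img)
            self._free.put(img)

    def reserve(self):
        try:
            return self._free.get_nowait()   # re-use a pooled buffer
        except queue.Empty:
            return self._create()            # pool exhausted: temporary image

    def release(self, image):
        if image in self._pooled:
            self._free.put(image)            # pooled buffer: keep for re-use
        else:
            self._dispose(image)             # overflow buffer: destroy now
```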