Everything posted by GregR

  1. There are 2 types of ActiveX controls: windowed and windowless. Windowed controls create their own OS window and completely handle drawing inside of that window. Windowless controls get messaged by their containing window when it is their turn to draw. Guess which kind the media player is. In order to draw anything on top of it, we would need to create a partially transparent window on top of it. You could do something with a second VI that is set to be partially transparent and positioned over the media player.
  2. Not all DLLs contain .NET metadata, so not all of them can be loaded as .NET assemblies. I believe shell32 is one that can't. You would need to use LoadLibrary from the Win32 API to load it (see the sketch below). Also, you should just load it by name; the system will locate the appropriate copy.
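     A minimal C sketch of that Win32 sequence, purely for illustration (ShellAboutW is just an arbitrary shell32 export chosen for the example):

         #include <windows.h>
         #include <stdio.h>

         /* Signature of the shell32 export we look up for the demo. */
         typedef int (WINAPI *ShellAboutW_t)(HWND, LPCWSTR, LPCWSTR, HICON);

         int main(void)
         {
             /* Loading by name lets the system locate the appropriate copy. */
             HMODULE mod = LoadLibraryW(L"shell32.dll");
             if (!mod) {
                 printf("LoadLibrary failed: %lu\n", GetLastError());
                 return 1;
             }

             ShellAboutW_t shellAbout = (ShellAboutW_t)GetProcAddress(mod, "ShellAboutW");
             if (shellAbout)
                 shellAbout(NULL, L"Example", L"Loaded via LoadLibrary", NULL);

             FreeLibrary(mod);
             return 0;
         }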
  3. There are a lot of considerations when deciding which VIs to make reentrant. It's about finding a balance between maximum performance and minimum memory usage.

     Any VI that maintains state needs to be either non-reentrant or fully reentrant, depending on its requirements for that state. If there are any VIs that truly can't be called at the same time, those should stay non-reentrant. This could be things like configuration dialogs or file modification. Non-reentrant VIs are one of the easiest ways to serialize access to single-instance resources.

     Any VI that is part of a performance-critical code path should probably be made fully reentrant. This avoids synchronization points between multiple parallel instances of performance-critical code, and keeps non-performance-critical code from getting in the way of performance-critical code.

     Beyond that, you can start to favor non-reentrant or shared reentrant to reduce memory usage. As crossrulz said, VIs that always execute quickly can be considered for leaving as non-reentrant. Keep in mind that there is a difference between a VI that always executes quickly and one that typically executes quickly. Anything that does asynchronous communication (networking, queues, ...) should be considered slow, because it could take longer than expected. Making VIs that are called from a lot of places shared reentrant instead of fully reentrant will slightly increase execution time but can greatly reduce the number of instances required and thus memory usage.
  4. A remote panel, as the name implies, only shows the panel on the remote machine. The diagram runs on the server machine. This allows the diagram to still access hardware on the server machine, like DAQ devices. The same goes for the Input Device palette. If you want to capture keyboard input from the remote panel, you will need to use front panel events. Those are the only things that will be redirected from the remote UI back to the diagram. Check out the Key Down event on "This VI".
  5. In 2011 (not sure about 2010) the VI hierarchy window can help draw your dependency map for you. There is a "Group Libraries" button that will put all VIs in a library together. Cycles between your libraries then show up as backwards dashed references. You can even collapse each library to a single item. This does mean you have to load all your VIs into memory to get the full graph, but it works. I wanted this feature way back when we introduced libraries but it took a while to get it.
  6. This is not really a question about the LabVIEW ActiveX interface but rather about the language used to call the open reference. In LabVIEW, references are closed when the top-level VI stops running (in most cases). Even if you used the LabVIEW ActiveX interface to open a VI reference, that reference would be closed when the VI stops running. This is not because it is a VI reference, but because LabVIEW closes the ActiveX reference.

     In C or C++, references are only closed when the user explicitly closes them. TestStand is written in C++, so this is the case it is in. C# uses garbage collection, so a reference stays open as long as something still remembers it. That could coincide with a thread running, if the reference is stored on that thread's stack, but more frequently it is independent of threads. It usually comes down to a method returning (which allows its locals to be collected) or a static data structure whose value contains the reference.
  7. Typically a VI is reserved by being run as a top-level VI, by having a strict VI reference opened to it, or by having one of the first two done to one of its callers (recursively up the VI hierarchy). In the case of ActiveX, we don't really have a concept of strict references, so we explicitly expose reservation. However, reservation has nothing to do with keeping a VI in memory. TestStand does control VI lifetime through references. It opens the references from TestStand itself, so it is in control of when those references are closed.

     The issue you are having is related to the automatic cleanup of references when a top-level VI finishes running. Even if we did expose reservation of references through VI Server, your reference would still be cleaned up and the VI might be unloaded. There are a couple of options:
     • Change how your code operates so that the VI that opens the reference does not stop running. This might mean making a UI where the user can repeatedly press your button to do something rather than having the VI complete and the user hit run again. Or it could mean starting up a separate VI in the background that opens the reference and keeps running until you are really done.
     • If the VI you're loading is meant to stay running, then you can use the "Auto Dispose Ref" option on the Run method. This will cause the reference to stay open as long as the VI you called Run on is running. (The cleanup happens at the end of the VI you called Run on instead of the VI you called Open from.)
  8. EMFs are produced by setting up drawing to target an EMF and then calling our rendering code. Currently our rendering code is only accessible from the picture control itself. The result is, as you have found, that the only way to get an EMF from a picture string is to use a control. There is no purely programmatic way to invoke the code required.
  9. "allowmultipleinstances=true" in the ini file for the exe will do what you want. I still don't understand how the same OCX would allow parallelism in VB but perhaps that is not important if you're happy with this other solution.
  10. This isn't making sense to me. Each ActiveX object defines whether it uses the STA or MTA model, and LabVIEW is supposed to honor that. If it is STA, we make all calls from our UI thread. If it is MTA, then we allow the calls to happen in any thread. If your object is really MTA and the VI is not set to run in the UI thread, I would expect these calls to happen in parallel. Also, your references are named UserControl. All ActiveX controls are STA by definition, because all ActiveX UI is STA. If the object is STA, then you shouldn't have been able to run the code in parallel in VB either. Is it true that the ActiveX code is built as a control? Did you use the exact same methods in VB to see the parallel execution, or were you calling something similar that really uses MTA?
  11. When talking about arrays, it is important to distinguish between copy (noun) and copy (verb). Copy (noun) refers to a memory buffer containing data. Copy (verb) refers to the act of reading memory from one location and writing those values to another location. If you are running out of memory, then you need to focus on the number of buffers allocated. If you want code to run fast, then your main focus should be the number of times you read from one and write to another. In many cases having more buffers means more read/write operations, so reducing buffers tends to improve speed, but the relationship is indirect. My discussion below refers to the operation of reading from one location and writing to another.

     LabVIEW's memory manager handles resizes specifically. This means it can try to expand an allocation at its current location before resorting to allocating a new buffer. It also means that in the cases where a new buffer is required, it is the memory manager that copies the existing data to the new location and disposes the old buffer. So from an allocation standpoint, it doesn't matter whether a new element is being added to the beginning, middle or end of an array; the chance of it causing a copy of every existing element is the same.

     Once the allocation is done, we actually have enough space to add the new element. This is where the location matters. If you're adding to the beginning, we will copy every existing element to move it down. If you're adding to the end, we just have to set the new element. Going back to the original build array scenario: prepending an element with build array will copy the existing elements at least once and commonly twice. Appending an element with build array will either not copy or copy once. That makes appending always one less copy, and that qualifies as "much more efficient" to me. (A rough C analogy of the post-allocation step is sketched below.)

     When LabVIEW shrinks an array, we do things in the opposite order but the same principles apply. Since we won't have enough room for all the data after resizing, we must move the data we want to the front before resizing. When deleting from the beginning, this means copying everything else. When deleting from the end, this requires nothing. We then call the memory manager. The odds are greater that the memory manager will keep the same buffer when shrinking, but there are still times when it won't, so it must copy all the data to the new location.

     Regarding delete from array vs array subset: delete from array is more expensive. Because delete from array has to handle cases where you delete from the middle, it doesn't produce a subarray. Array subset and split array always produce subarrays. This can reduce the overall number of copies, or it might just mean that the copy happens at the next node and the net result is no different.
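     A rough C analogy of the post-allocation step described above (not LabVIEW's actual implementation); it assumes a plain C buffer that already has spare capacity for one more element:

         #include <string.h>

         /* Append: just write the new element after the existing ones.
            No existing element is touched. */
         void append_element(double *buf, size_t count, double value)
         {
             buf[count] = value;
         }

         /* Prepend: every existing element must first be moved down one slot,
            which is a copy (verb) of all 'count' elements. */
         void prepend_element(double *buf, size_t count, double value)
         {
             memmove(buf + 1, buf, count * sizeof(double));
             buf[0] = value;
         }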
  12. That would work for a graph. However, on a chart the X is calculated based on the amount of data. To reset it you have to reset the data. Writing an empty array to the "History Data" property will accomplish this. http://zone.ni.com/reference/en-XX/help/371361H-01/lvprop/wvfrmchart_hist_dat/
  13. LabVIEW (at least internally) refers to those as tip strips. The editor does not provide any way to disable them.
  14. When I say the queue buffer contains all the elements, that just means the top level of the data. For arrays that is just the handle. In your example I see it get close to 1.5 million elements. This means the queue buffer is only around 6MB. You actually seem to be off on the total memory calculation though. Each of your 128-element uInt64 arrays is about 1K. That means that 1.5 million of them is about 1.5GB. That puts you very close to the 1.7GB of usable address space and much higher than your estimated 150MB. I hadn't actually built the VI when I replied the first time, so my focus on fragmentation was based on the low 150MB number. This appears to be more about actual usage than fragmentation. If you want to see what happens when the data really is flat inside the queue buffer, try putting an array to cluster after your initialize array and set the cluster size to 128. This produces the same amount of data as the array, but it will be directly in the queue buffer. You will get a much smaller number of elements before stopping.

     The out of memory dialog is displayed by LabVIEW, not by the OS. The problem is that this dialog is triggered at a low level inside our memory manager, and at that point we don't know if the caller is going to correctly report the error or not. So we favor giving redundant notifications over possibly giving no notification. This does get in the way of programmatically handling out of memory errors, but that is often quite difficult anyway because anything you do in code might cause further allocation and we already know memory is limited.

     I guess I forget that a lot of people limit their virtual memory size. This does affect my earlier comments about the amount of usable address space available to each process. It puts a limit on total allocations across all processes, so the amount available to any one process is hard to predict.

     Vision was the first to be supported on 64-bit because it was seen as the most memory constrained. Images are just big and it is easy to need more memory than 32-bit LV allows. Beyond that it is just a matter of getting it prioritized. Personally, I'd like to see parity or even 64-bit taking the lead. As sales and marketing continue to hear the request and we see users using 64-bit OSs, we should get there.
  15. Pardon the book, but let me try to clarify some concepts here.

     The question of how much memory was free on the machine running the test is irrelevant. All desktop operating systems use virtual memory, so each process can allocate up to its address space limit regardless of the amount of physical RAM in the machine. The amount of physical RAM only affects the speed at which the processes can allocate that memory. If RAM is available, then allocation happens fast. If RAM is not available, then some part of the RAM content must be written to disk so that the RAM can be used for the new allocation. Since the disk is much slower than RAM, that makes the allocation take longer. The key is that this only affects speed, not how much allocation is required to hit the out of memory error.

     Just because the task manager still says LabVIEW is using a bunch of memory doesn't mean that LabVIEW didn't free your data when your VI stopped running. LabVIEW uses a suballocator for a lot of its memory. This means we allocate large blocks from the operating system, then hand those out in our code as smaller blocks. The tracking of those smaller blocks is not visible to the operating system. Even if we know that all those small blocks are free and available for reuse, the operating system still reports a number based on the large allocations. This is why, even though the task manager memory usage is high after the first run of the VI, the second run can still run about the same number of iterations without the task manager memory usage changing much.

     Since the amount of memory LabVIEW can allocate is based on its address space (not physical memory), why can't it always allocate up to the 4GB address space of a 32-bit pointer? This is because Windows puts further limitations on the address space. Normally Windows keeps the top half of the address space for itself. This is partially to increase compatibility, because a lot of applications treat pointers as signed integers and the integer being negative causes problems. In addition, the EXE and any DLLs loaded use space in the address space. For LabVIEW this typically means that about 1.7GB is all the address space we can hope to use. If you have a special option turned on in Windows and the application has a flag set to say it can handle it, Windows allows processes access to 3GB of address space instead of only 2, so you can go a little higher. Running one of these applications on 64-bit Windows allows closer to the entire 4GB address space, because Windows puts itself above that address. And then of course running 64-bit LabVIEW on a 64-bit OS gives way more address space. This is the scenario where physical RAM becomes a factor again, because the address space is so much larger than physical RAM and performance becomes the limiting factor rather than actually running out of memory.

     The last concept I'll mention is fragmentation. This relates to the issue of contiguous memory. You may have a lot of free address space, but if it is in a bunch of small pieces then you are not going to be able to make any large allocations. The sample you showed is pretty much a worst case for fragmentation. As the queue gets more and more elements, we keep allocating larger and larger buffers. But between each of these allocations you are allocating a bunch of small arrays. This means that the address space used for the smaller queue buffers is mixed with the array allocations, and there aren't contiguous regions large enough to allocate the larger buffers. Also keep in mind that each time this happens we have to allocate the larger buffer while still holding the last buffer so the data can be copied to the new allocation. This means that we run out of gaps in the address space large enough to hold the queue buffer well before we have actually allocated all the address space for LabVIEW. (A toy sketch of this allocation pattern is below.)

     For your application, what this really means is that if you really expect to be able to let the queue get this big and recover, you need to change something. If you think you should be able to have a 200 million element backlog and still recover, then you could allocate the queue to 200 million elements from the start. This avoids the dynamically growing allocations, greatly reducing fragmentation, and will almost certainly mean you can handle a bigger backlog. The downside is that this sets a hard limit on your backlog and could have adverse effects on the amount of address space available to other parts of your program. You could switch to 64-bit LabVIEW on 64-bit Windows. This will pretty much eliminate the address space limits. However, it means that when you get really backed up you may start hitting virtual memory slowdowns, so it is even harder to catch up. Or you can focus on reducing the reasons that cause you to create these large backlogs in the first place. Is it being caused by some synchronous operation that could be made asynchronous?
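     A toy C sketch of the allocation pattern being described, not LabVIEW's memory manager: a buffer that keeps doubling (like the growing queue) interleaved with many small allocations that stay alive (like the arrays). Each doubling needs a new, larger contiguous block while the old one still exists.

         #include <stdio.h>
         #include <stdlib.h>

         int main(void)
         {
             size_t cap = 1 << 10;              /* start with a 1KB "queue buffer" */
             char *queue = malloc(cap);

             for (int step = 0; step < 20 && queue; step++) {
                 /* Small, long-lived allocations land between the successive
                    queue buffers and keep those gaps from being reusable as
                    one large contiguous region. */
                 for (int i = 0; i < 1000; i++) {
                     void *element = malloc(1024);
                     (void)element;             /* intentionally kept alive for the demo */
                 }

                 /* Growing the queue needs the old and new buffers alive at the
                    same time so the data can be copied over. */
                 cap *= 2;
                 char *bigger = realloc(queue, cap);
                 if (!bigger) {
                     printf("out of memory growing to %zu bytes\n", cap);
                     break;
                 }
                 queue = bigger;
             }
             return 0;
         }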
  16. Static VI references load statically but allow you to call dynamically. For many cases this is acceptable. If dynamic loading is important but you still don't want to deal with paths or VI names on your diagrams, there is actually one more option that delivers dynamic loading with static calling. On subVI calls there is a "Call Setup..." menu item. This allows you to switch the call to "Reload for each call" or "Load and retain on first call". Either of these will avoid loading the VI initially, but still take care of managing the linkage to the subVI for you. Keep in mind that any call to a subVI can cause it to be loaded, so if you want to take advantage of this, or any other dynamic loading scheme, you must be sure that all your references use dynamic loading. The one case where we can't manage the subVI linkage for you is dynamic load with dynamic calling. That still requires your diagram to compute the path to the subVI, and it will not be known to the application builder or noticed at edit time if it is incorrect.
  17. Wrapping the register for event in a safe class is not an issue. You wrap it in a class and create a method that wraps the register for events node. The only issue with preventing access to the user event refnum is that the event handler frame has access to the refnum. Seems like the feature there should really be the ability to specify at event creation that the handler should not have access to the refnum. This avoids the Registration-Only user event and means your wrapper is free to expose whatever subset of user event functionality is appropriate for your use case (register-only, generate-only, register and generate). The DVR is a harder problem. The only way to effectively wrap that currently would be to make your safe class have a method that takes a strict VI ref and calls it inside the structure. This would be more acceptable if LabVIEW had some sort of closure/anonymous VI. Giving this the DVR syntax would require a class to define border node behavior for the inplace element structure.
  18. The typecast node will never use the input buffer for the output. In fact, not only will it copy the entire buffer, on Intel processors it will visit each element to put it into big-endian format and then visit each element again to put it back into little-endian. If an API requires users to typecast significant amounts of data, I would consider that a deficiency in the API that should be redesigned. For cases where the array elements are the same size, it would be possible to write a DLL that takes both types and swaps the array handles. Be sure to configure both parameters as pointer to array handle, then swap the handles (see the sketch below).
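     A hedged C sketch of such a swap function, treating the LabVIEW array handles as opaque (real code would use the definitions in extcode.h, and both arrays must have elements of the same size):

         /* Opaque stand-in for a LabVIEW array handle. */
         typedef void **LVArrayHandle;

         /* Configure both Call Library Function Node parameters so the DLL
            receives a pointer to each array handle, then swap the handles.
            No element data is copied or byte-swapped. */
         __declspec(dllexport) void SwapArrayHandles(LVArrayHandle *a, LVArrayHandle *b)
         {
             LVArrayHandle tmp = *a;
             *a = *b;
             *b = tmp;
         }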
  19. Before someone times this and realizes it is not true: although we have some internal support that could someday allow reverse string to be constant time, it currently is not. Reverse string will actually swap all the characters in the string. Reverse array, on the other hand, is a constant-time operation.
  20. There are other sweeping statements that are sometimes heard about property nodes that are not completely true (or sufficiently accurate). The main problem with all these statements is that it is largely not the node that defines the behavior but the reference type the node is being used on. If you look at the "Select Class..." context menu on the property node, its first level shows the major categories of classes.

     "Property nodes always run in the UI thread" -- false
     "Property nodes for controls always run in the UI thread" -- true

     Anything under the "VI Server" category will always run in the UI thread. This includes VI references, Application references and all panel/diagram object references. If you are making a lot of these calls, you might consider moving them to a subVI that can be set to run in the UI thread. Other categories, like VISA, will use any thread. ActiveX has its own rules for which objects can be accessed from which threads. (If you don't have to know about apartments, trust me, you don't want to.)

     "Any control reference causes the panel of the VI containing the control to be loaded" -- true
     "Any control reference will cause the panel of the VI containing the control to be included in built applications" -- false

     This is related to the earlier comment about the "Loads the front panel into memory" characteristic in the documentation. That means that when the operation runs it forces the panel to be loaded. This is different from things that cause the panel to be loaded immediately when the VI is loaded, which is different from things that cause the application builder to include the panel by default.

     LabVIEW cannot give you a reference to any part of a panel without loading the entire panel, but there are multiple ways of obtaining control references. Reading the Panel property from a VI reference allows you to get to control references and causes the panel to be dynamically loaded if it is not already in memory. If the panel is not available, then an error is produced at run time. This is different from getting a control reference from a control reference constant. Those are detected by LabVIEW, causing it to load the panel as part of loading the VI. This means the VI is never in memory without its panel. This is also the case for implicitly linked property nodes (ones tied statically to a control and with their reference input hidden).

     The application builder uses the presence of control reference constants or implicitly linked property nodes as a sign that the panel must be included. However, there are also other things that it looks at. Most of the Window Appearance settings for a VI being changed from the default will also tell the application builder to include the panel. The logic is that if you customized the look of the VI window, it is probably because you plan to show that window to the user. So instead of creating an implicit property node or control reference constant to get a panel included in a build, you could hide the panel's scroll bars.
  21. The solution is to use the shift register. The reason is that the language makes no connection between the left tunnel and the right tunnel. What if the reference goes through a node? We don't know whether the node would normally pass it through or not. To avoid ambiguity, we always produce the default-default value for non-indexing tunnels when the loop runs zero iterations.
  22. LabVIEW doesn't know how long any specific CLN will take to execute, so we assume that they will all take long enough to be worth scheduling in parallel with other code (clumping separately). This increases the chance of multiple CLNs not running in the same thread. However, there is a solution: subroutine VIs come to the rescue. Subroutines always generate a single clump and will execute all their code in a single thread. So as long as your socket call and the getlasterror are made inside the same subroutine, they will execute in the same thread (the pattern being wrapped is sketched below). This even covers the case where the getlasterror is encapsulated in another subroutine VI for reuse. There is one extreme corner case where other code could run between the two calls. It involves the subroutine starting cooperatively in a thread under a call to a LabVIEW-built DLL. The vast majority of applications don't even create this situation, and it is pretty unlikely to happen even in those that do.
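     For reference, a minimal C sketch of the Winsock pattern the two CLNs wrap; the error code is per-thread state, which is why the call and WSAGetLastError must run on the same thread:

         #include <winsock2.h>
         #include <stdio.h>

         #pragma comment(lib, "ws2_32.lib")

         int main(void)
         {
             WSADATA wsa;
             WSAStartup(MAKEWORD(2, 2), &wsa);

             SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
             if (s == INVALID_SOCKET) {
                 /* Must run on the same thread as the failing call. */
                 int err = WSAGetLastError();
                 printf("socket failed: %d\n", err);
             } else {
                 closesocket(s);
             }

             WSACleanup();
             return 0;
         }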