GregR (NI)
Everything posted by GregR

  1. You will only get that label when the UI thread is not able to handle OS messages. This can happen through direct routes like a long Call Library Function Node (CLN) call in the UI thread. That is definitely the first thing to check. It can also happen through indirect routes. If you are doing something that is very disk or memory intensive, it can cause delays that slow down what would normally be very fast operations in the UI thread. If you are paging in a huge buffer in another thread and the UI happens to need some piece of memory paged in, then the UI in theory could be blocked long enough to cause the OS to consider the process hung. With the speed of most modern machines, this is unlikely, but it is possible.
  2. Back to the original problem. Fix #1 is to hide the graph's scrollbar. Keep in mind you are only giving the graph data for the visible range of the X scale. This means that the X scrollbar built into the graph will be useless. It is designed to let the user scroll through all the data the graph has when it doesn't fit in the visible area, and you will never be in that situation. If you want to let the user scroll through all the available data, you will need to implement your own scrolling. Fix #2 is to turn off "Ignore Time Stamp" on the graph. This option means that no matter what the time stamp is on your data, the graph X scale will start at 0. This means the user will not be able to tell where they are in the data. It also means that when your scale is showing 5:00 to 10:00, your data is being shown at 0:00 to 5:00. With those changes your VI works pretty well, except that you probably need to produce slightly more data. Currently there seems to be a zero value being produced in the visible area just before the scale maximum. I also noticed that your decimation doesn't work well when the minimum is negative. The scales move but the data is produced as if the minimum were zero. Also, your decimation produces zero values when showing area beyond the end of the data. If you can't make it just produce fewer elements in these cases, producing NaNs is an option. The graph will not draw anything when it encounters NaN. As with the scrollbar, the graph palette option to fit to all data doesn't work. You might want to implement your own button to allow the user to zoom back out to all data. Another approach to some of these problems would be to add another plot to your graph that just has the first and last points of your waveform in it, with the correct delta to put them in the right spots. Set this plot to have no line and a simple point style.
This way the graph will still consider itself to have data across the entire range, but you only produce detailed data for the visible area. This doesn't solve the "ignore time stamp" problem, but it does allow the scrollbar and fit-to-all-data to work.
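The NaN suggestion above can be sketched outside LabVIEW. A minimal Python analogue of a decimation routine (the function name and signature are hypothetical) that pads with NaN wherever the visible range extends past the available data, so a plot simply draws nothing there:

```python
import math

def decimate_visible(data, t0, dt, x_min, x_max, n_out):
    """Return n_out (x, y) points covering [x_min, x_max], emitting NaN
    wherever the requested range falls outside the available data."""
    xs = [x_min + i * (x_max - x_min) / (n_out - 1) for i in range(n_out)]
    ys = []
    for x in xs:
        i = round((x - t0) / dt)                 # nearest sample index
        ys.append(data[i] if 0 <= i < len(data) else math.nan)
    return xs, ys

# 100 samples starting at t=0 with dt=1; ask for a range past the end of the data
data = [float(i) for i in range(100)]
xs, ys = decimate_visible(data, t0=0.0, dt=1.0, x_min=50, x_max=150, n_out=11)
print(ys[0], ys[-1])   # 50.0 nan
```

The key point is that out-of-range positions produce NaN rather than zero, so the graph shows a gap instead of a misleading flat line at 0.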
  3. WebUI Builder requires Silverlight. Remote panels require a browser plugin and a locally installed LV RTE. Neither of these technologies is available on Samsung smart TVs. Regardless of whether these meet your functionality requirements, that means neither is an option. These TVs are an HTML/JavaScript platform with limited Flash support, so those are the tools you have to choose from. WebSockets are definitely an option, as is building your VIs into RESTful web services using LV. If you are finding the WebUI Builder graphs to be primitive, you may run into similar issues with the HTML UI solutions available. You can most likely achieve the displays you want, but it may take more programming than you expect.
  4. Just to clarify for others that stumble across this discussion: you can open references to VIs built inside an EXE by path, but this is only possible from VIs running as part of that EXE, and the path will be different than during development. That path difference makes this error-prone and a bad idea, but it is possible. In general the VI path is the EXE path with the VI filename added as another path segment at the end. However, this has problems with class/library files that have the same filename. How LabVIEW resolves these conflicts also depends on the Advanced build setting "Use LabVIEW 8.x file layout". If this option is on, then LabVIEW will put the files in directories next to the EXE. If the option is off, then LabVIEW creates a virtual directory structure under the EXE. I won't try to fully explain this, but I will say that it is repeatable between builds, so you can figure out where the VI is being put and reference it from there. Most of the time there is a better approach, but this is an option if you can't find another answer.
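The general path convention described above (EXE path plus VI filename as an extra segment) can be illustrated with a small sketch; all paths and names here are hypothetical:

```python
from pathlib import Path

# During development the VI lives in the project tree:
dev_path = Path("C:/projects/MyApp/Process Data.vi")

# After the build, the VI path is (in the simple case) the EXE path
# with the VI filename appended as another path segment:
exe_path = Path("C:/builds/MyApp/MyApp.exe")
built_vi_path = exe_path / dev_path.name

print(built_vi_path.as_posix())   # C:/builds/MyApp/MyApp.exe/Process Data.vi
```

As the post notes, this simple rule breaks down for same-named files in classes/libraries, where the resolution depends on the "Use LabVIEW 8.x file layout" setting.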
  5. LabVIEW will preallocate the array at the max size and truncate as you suspected.
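A textual analogue of that strategy, as a rough sketch: allocate the output at the maximum possible size up front, fill it in place, and truncate to the count actually kept at the end.

```python
def conditional_collect(xs, keep):
    """Sketch of preallocate-at-max-then-truncate: the output buffer is
    allocated once at len(xs), filled in place, and sliced down at the end."""
    out = [None] * len(xs)     # preallocate at the maximum possible size
    n = 0
    for x in xs:
        if keep(x):
            out[n] = x         # fill in place, no per-element growth
            n += 1
    return out[:n]             # truncate to the number actually kept

print(conditional_collect(list(range(10)), lambda x: x % 2 == 0))   # [0, 2, 4, 6, 8]
```

This trades transient memory (the full-size buffer) for avoiding repeated reallocation as elements are appended.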
  6. Protecting the password is important, but the problem doesn't end there. Say my LabVIEW built application queries the OS to decide if the current user has some privilege at launch time. How should my code remember that fact? Do I put it in a LabVIEW global variable boolean called "IsAdmin"? Guess where my weak link is. Forget about attacking the password. If I can find the right byte, I can turn any user into an admin. Or even before that, what if I can attack the code that decodes the answer from the OS? Any application that runs on the user's machine and internally makes decisions about allowable operations is susceptible to in-memory attacks (through debuggers or code insertion). There are a few LabVIEW-specific things you can do to reduce this exposure. Subroutine priority - subroutines are less exposed through VI Server. Inline VIs - inlining of security-critical VIs means there is no longer a single bottleneck that the user can attack to gain access to multiple pieces of functionality; each piece must be attacked separately. Request Deallocation node - this node causes all temporary allocations for a VI to be disposed at the end of a subVI's execution. This does not necessarily overwrite the memory but could help. (I'm not sure what happens if you try to use this in a VI that is set to inline.) If we apply these to the issue of remembered state, then you might create inline VIs that know how to get and set into some obscured form of remembered state. Of course that just moves your weak link to be the algorithm used to obscure your state.
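As a rough illustration of "some obscured form of remembered state", here is a minimal sketch in Python (the class and its scheme are invented for illustration). As the post says, this only moves the weak link: anyone who can read the code can undo the masking.

```python
import os

class ObscuredFlag:
    """Remembered privilege state kept XOR-masked with a per-instance random
    byte, so the plain boolean never sits at one predictable byte in memory.
    Illustrative only: the masking algorithm itself is now the weak link."""

    def __init__(self, value: bool):
        self._mask = os.urandom(1)[0] | 1          # random odd mask, never 0
        self._store = (1 if value else 0) ^ self._mask

    def get(self) -> bool:
        # Unmask on demand; the plaintext boolean exists only transiently.
        return (self._store ^ self._mask) == 1

flag = ObscuredFlag(True)
print(flag.get())   # True
```

Flipping the stored byte no longer trivially flips the decision, since the attacker must also know the mask, but an attacker who can patch `get` itself still wins.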
  7. Personally I like the visual cues provided by beveled buttons and color, but that doesn’t seem to be the prevailing direction. The soon to be released Visual Studio not only removes button borders but also most color and any dividing lines between menus, toolbars and content. Then they put the menus in all caps. What do you think? Is this the direction LabVIEW should be moving?
  8. If the VI is preallocated reentrant, then you should be allocating and deallocating a clone each time even if the VI itself is staying in memory because of other references.
  9. It definitely could be an endian-ness problem, but your characterization of LabVIEW is not quite right. LabVIEW flattens to big endian but in memory (any typed data on a wire) it matches the endian-ness of the CPU. Since all our desktop platforms are now x86, they all run as little endian. So the problem would be that his data is big endian and LabVIEW is treating it as little endian. Don't mean to be pedantic but I don't want someone to come along later and convince themselves all LabVIEW data is big endian.
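The distinction above (flattened data is big endian, in-memory data matches the CPU) can be demonstrated with Python's `struct` module: the same four bytes read very differently depending on the assumed byte order.

```python
import struct

raw = b"\x00\x00\x00\x01"   # four bytes, e.g. received from an instrument

# Interpreted as big endian (LabVIEW's flattened format):
big = struct.unpack(">I", raw)[0]

# Interpreted as little endian (in-memory layout on x86):
little = struct.unpack("<I", raw)[0]

print(big, little)   # 1 16777216
```

If a sender flattens as big endian and the receiver unpacks as little endian (or vice versa), every multi-byte value is byte-swapped exactly like this.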
  10. But you defined your plugin interface in terms of an lvclass. You can't have a class in your interface unless both sides are going to agree on the class definition. If you really want both sides not to share any dependencies, then you can't have any dependencies in the interface between the sides. If your strict VI reference uses any class or typedef, then that definition must be shared. To operate the way you wanted, you can only use core data types in the connector pane of your plugins. If you have more than one plugin, then you need to make sure each plugin is built with a wrapping library or name prefixing to keep them from conflicting with each other. At that point, whether the top-level app does this or not is unimportant.
  11. You can't load a plugin that has a dependency that has the same name as one of the application's dependencies if the dependency is supposed to be different. In your case the dependency is a class that is passed from Caller.vi to Callee.vi so in fact it is critical that both sides do link to the same instance. This is the only way it will work. Whether that shared class is inside the EXE or not is a separate issue. It can be acceptable for this dependency to be inside the EXE and the dynamically loaded VI will work just fine. Your original project had 2 application builds: Caller and Namespaced Caller. Caller.exe works just fine for me (as long as I make sure Callee.vi is saved in the same version before running Caller.exe). The point of this exercise is not to avoid sharing but to figure out how to properly share. The suggestions I gave were all different ways of sharing those dependencies. They all have tradeoffs between robustness and development overhead. Take your pick. The real key is to make sure you understand which items are being shared. Any change to a shared item could cause you to have to rebuild both sides.
  12. There should be no problem running a 32-bit built application on 64-bit Windows. If you have to support 32-bit Windows, then it probably makes sense to only build as 32-bit rather than having to build everything twice.
  13. It took me a little while to understand that "namespaced" meant using the build option "Apply prefix to all contained items" in "Source File Settings" for dependencies in the build spec. Once I made that connection everything makes sense. This isn't really namespacing; it is changing the name of every dependency. So after the build, Caller.vi references a class named "namespace.Shared Class.lvclass". Since Callee.vi didn't go through the build, it references the class "Shared Class.lvclass". A VI that has a single input of type "Shared Class.lvclass" is not going to match a strict VI reference with a single input of type "namespace.Shared Class.lvclass", so the open of the reference fails. If you are going to rename the class during the build, then all plugins must be rebuilt against the renamed version. You can't really do that if the renamed version is only present inside the application. There are a few other options though. You could include your plugins in the application build as "Always Included" files. This doesn't create much of a plugin framework, but in some cases that is acceptable. You can make the application build put its dependencies outside the application: create a new destination for a subdirectory, then change the dependencies to build to there. Then in your plugin projects (Callee's project) reference these files instead of the original "Shared Class" source. When Callee is loaded, it will then agree on what all the dependencies are named. This is similar to the solution of creating the shared items as another build, but avoids actually having a separate build step. You can build the shared items as a separate distribution and have both "Caller" and "Callee" projects reference that build output. This doesn't have to be built as a PPL. It could just be a source distribution, but either way it requires a separate project and build, increasing maintenance. You can choose not to rename shared dependencies going into the application.
For items that will be referenced from plugins, you can add them to the project directly (instead of having them just show up in dependencies) and not have them be prefixed. This also makes it clear which items are valid to be referenced from plugins, which items might break plugins if you modify them, and it is easier to tell others writing plugins what things they can reference versus items they should not. There are many other variations on these themes. To decide which approach to take, I'd want to know what you are really trying to get out of using a plugin approach. Who is going to be building the plugins? Are the plugins supposed to be able to update independently of the application? Should the application be able to update without invalidating the plugins? What VIs/classes used in the application should be able to be referenced from the plugins? What were you trying to accomplish with the name prefixing option?
  14. Officially we would encourage you to use the VI Server APIs to do things like this. In some cases we even expose methods on the Application class that can return information about VIs without loading them. The main reason for this stance is that we reserve the right to change our file formats between versions. This is usually either to support new features or to improve performance, and it has happened many times to various degrees. The last substantial change was to compress several pieces of the VI because CPUs could decompress faster than the larger data could be read from disk. I realize not publishing the format is annoying, but it would also be annoying to publish changes to it every release. Try to think of it this way: the time it would have taken us to update the documentation is instead being applied to some awesome feature.
  15. There is another possible answer. When we build the EXE and all the DLLs that make up LabVIEW, we do generate symbol files. We don't ship them, but we hold on to them for debugging. Each executable remembers the absolute path where its symbol file was created on our build machines. In many cases our build machines are set up with multiple drive letters rather than just one huge C drive. When NIER encounters a crash, it uses an MS DLL to look for the symbol files to try to put more details in its log. One of the places this DLL looks is the path in the executable. I think this behavior is frequently the result of accesses to drive letters that are not mapped or are mapped to removable media. The exact behavior may depend on what the drive is. I know we have had cases where it would request you insert a disk into an optical drive. In the end there are no ill effects. This is just a matter of unfortunate error handling.
  16. LabVIEW's execution semantics create several kinds of differences from code in most textual languages. Our threading model, non-reentrant VI protection, data formats, value semantics and debugging all contribute to these differences. Trying to explain all of these would take a lot and still not necessarily help users debug issues. Slight correction: in versions up to 2011, we did put the dataspace pointer in a known register (although it's EBP, not EBX). Starting in the next release we allow LLVM to decide where to store this value, though, and it doesn't always put it in the same place. There is one piece of good news if what you're debugging is a crash. Starting in the next release, our NIER reporting will start including the name of the VI for any crash that happens with a VI in the call stack. NIER is off by default but can be turned on with the "NIER=true" config token. You can then look at the log files and at least see which VI caused the problem. This wouldn't help in a non-crashing case like a memory leak, and it relies on thread-local storage, so the information it uses can't be easily accessed from a native debugger. It's something though.
  17. This is just a guess, but if you are trying to figure out the width of the window border from your VI panel, you are in the wrong area. Controls.Border is a .NET control, just like a button or a slider. By calling its constructor you have created an instance of this control. The control's width defaults to NaN, as you discovered. For .NET controls this means that they size to their parent. Once the control has a parent, it will compute its width, which will then be available through the ActualWidth property. Your control has no parent, which means it can't compute its width and also means it will never be drawn in any window.
  18. This isn't much information, but you can fairly easily tell the difference between VI-generated code and LV internal code. For code that was loaded by the system from a DLL or EXE, the debugger knows which file it came from even without symbols. In Visual Studio that means the call stack window will show something like a DLL name and then some offset into that DLL. The VI code that LabVIEW generates is not built into a DLL or loaded by the system, so it has nothing to associate it with. For this code the call stack window will just show an address. This won't tell you which VI the code comes from, but it can tell you whether it is VI code. Once you decide you are in VI code, there is not much you can do. Typically one of the registers (EBX) has the pointer to the VI's "dataspace", but that data structure is undocumented and version specific.
  19. The most likely situation I can think of is that there are typedef updates happening during the load. The node knows it is looking for the name "ER". At some intermediate point in the load, the node sees a version of the cluster that has no field named "ER". It could decide to fall back to the index, which is now named "DN". After all the updates are complete, the cluster now has both the "ER" field and the "DN" field but it now thinks that "DN" is the right name so it stays there. This is mostly just conjecture, but it is along the lines of what I could imagine our implementation doing. Are all your VIs saved in a state that is consistent with all your typedefs?
  20. If your code is only going to run in the development system, then you could use scripting to dynamically generate a VI that sets the properties in the file being loaded. That would at least be interesting. If you have to resort to statically creating code for each property, the property node scripting interface can return a list of all the supported properties for its current class. You could use that to generate a VI for each property, or a case frame for each property in a VI for each control.
  21. In general, reentrant VIs are faster to instantiate and have equivalent performance in other ways. We instantiate template VIs by loading into another context, which means your entire VI hierarchy is loaded into another context as each instance is created. If you're using 2011, you can also consider using "async call by ref" instead of the Run method to start separate processes like this.
  22. The first question was facetious. The rest was serious. For non-selectable sub-objects (like terminals) we usually walk up its owners until we find an object that is selectable and highlight that object. The object you found may be one we don't do that on. If you can tell us what kind of object it is, we might be able to confirm that. It shouldn't be totally disconnected from the rest of your VI, because sanity checking actually detects these "orphaned" objects and removes them.
  23. What is this Heap Peek you speak of? If I had heard of such a thing, I would wonder what type of object you had selected. Some objects might not have a visual representation that could be highlighted. Some might not be visible. You may be able to look up the object's owners until you find something that can be highlighted.
  24. UDP does not guarantee ordering, non-duplication or even delivery of "chunks", so you would need to make sure each write contains enough information to completely identify which part of which file it is. That way you could reassemble them on the other side. Even then you may not get them all, so you probably want a way to ask for some part of the file to be resent. These are things that TCP would take care of for you. Do you really need to use UDP?
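A sketch of such a per-datagram header and the receive-side reassembly in Python (the header layout here is an assumption for illustration, not any standard): each chunk carries a file id, its index, and the total chunk count, so the receiver can tolerate reordering and duplication and report what is missing for resend requests.

```python
import struct

# Hypothetical header: file id (u16), chunk index (u32), total chunks (u32)
HEADER = struct.Struct("!HII")

def make_chunk(file_id, index, total, payload):
    """Prepend enough information to identify which part of which file this is."""
    return HEADER.pack(file_id, index, total) + payload

def reassemble(datagrams):
    """Rebuild file data from datagrams that may arrive out of order or
    duplicated. Returns (data, missing) so the receiver can request resends."""
    parts, total = {}, 0
    for dgram in datagrams:
        _fid, idx, total = HEADER.unpack_from(dgram)
        parts[idx] = dgram[HEADER.size:]          # duplicates just overwrite
    missing = [i for i in range(total) if i not in parts]
    data = b"".join(parts[i] for i in sorted(parts))
    return data, missing

# Chunks arrive out of order, one duplicated, one lost entirely
chunks = [make_chunk(7, i, 3, p) for i, p in enumerate([b"AA", b"BB", b"CC"])]
received = [chunks[2], chunks[0], chunks[0]]      # chunk 1 never arrived
data, missing = reassemble(received)
print(data, missing)   # b'AACC' [1]
```

A real implementation would only commit the file once `missing` is empty; this is exactly the sequencing, de-duplication, and retransmission bookkeeping TCP already provides.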
  25. Async programming does not necessarily imply multithreading but it can help. Async just says that this method is something that someone can call await on. Await says I want to make my thread available to run other code until I'm told to continue, then run the rest of my code. Viewed from the context of a single thread, this is really just cooperative multitasking. It is the same thing LabVIEW's scheduler does for the UI thread. The key to turning this from cooperative multitasking into multithreading is that await can be called on things from other threads. The easiest way to use await for multithreading is with Task. Tasks run in the thread pool, you can await any task and you don't consume your thread while in the await. When writing UI code in .NET, most of your work is done in event handlers. The user presses a button and you do something. If the thing you do is a calculation that takes 5 seconds and you do it directly in the handler, your UI is frozen for those 5 seconds. If you make that async, it still freezes your UI for 5 seconds because it is still doing all that calculation in the UI thread. So you want to do the calculation in the thread pool. However UI objects can only be modified from the UI thread, so you can't update your display at the end of the calculation from the thread pool. What you need to do is start the calculation in the thread pool from your handler, then schedule the update back to the UI thread when the thread pool finishes the calculation. Writing this code is made simpler with async and await. You simply mark your handler as async, start the calculation using the Task class, await the task, then update your UI. The closest analogue in LabVIEW is the event structure. If you need to do something lengthy and you want other events to be handled while it runs, you have to somehow make that execution happen outside the event frame. There are various approaches to that with the latest being the async call by ref. 
In some respects this still isn't as smooth as what MS has done with async/await, but it is a little different for a dataflow language. Are there other presentations that you think would work well for async event handling in LabVIEW?
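For comparison, the .NET pattern described above (offload the calculation to the thread pool, then resume on the UI thread to update the display) has a direct analogue in Python's asyncio; here the `ui` dict is just a stand-in for real UI state:

```python
import asyncio
import time

def long_calculation():
    """A blocking 'calculation' that must not run on the UI/event-loop thread."""
    time.sleep(0.1)
    return 42

async def on_button_click(ui):
    loop = asyncio.get_running_loop()
    # Run the blocking work in the thread pool; the await frees this thread
    # to keep handling other events, just like C#'s await on a Task.
    result = await loop.run_in_executor(None, long_calculation)
    # Execution resumes here on the event-loop thread, so it is safe
    # to touch UI state again.
    ui["display"] = result

ui = {}
asyncio.run(on_button_click(ui))
print(ui["display"])   # 42
```

The handler reads top to bottom like synchronous code, but the event loop stays responsive during the calculation, which is the same benefit async/await provides to .NET event handlers.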