Albert-Jan Brouwer

  1. They're not comparable. The idea of scripting is to simplify programming by making functionality available for calling from a relatively easy and flexible interpreted language (a scripting language). LuaVIEW allows Lua to call LabVIEW: it is for scripting LabVIEW code. LabPython allows LabVIEW to call Python, which is the reverse of scripting: calling something interpreted from compiled code. The C code making up the Lua language and most of its C-based interface to LabVIEW must reside in a CIN or DLL for it to run as part of the LabVIEW runtime. No way of getting around that other than a separate runtime, with all the resulting integration issues. Only NI could do better by statically linking a scripting language into the LabVIEW runtime. Indeed, I wonder why NI has not done so: LabVIEW has been rather stagnant, as a language (I'm not talking the libraries, drivers, or development environment here), so adding a scripting language would be a quick means of getting with the times.
  2. But everything seems to be going up of late. Most stocks, real estate, commodities, gold and silver. So, if everything is going up, maybe the dollar is going down? But no, that cannot be. The US government reports a moderate CPI figure. And they would never lie about that, now would they? They've been such conscientious purveyors of truth.
  3. Irritating behaviour indeed. But there is a somewhat laborious workaround: after making changes to the enum typedef, select the enum item with the widest text string in the strict typedef control, then select "apply changes". This will cause the array widths of the various connected diagram constants to scale to a width sufficient for the widest item. That way, no enum item text gets hidden. Note that this requires all VIs that reference the typedef for an array diagram constant to be loaded in memory when applying the changes. On applying the changes, these VIs will be marked as having been modified. You'll either have to save them individually, or select "save all" in order to persist these diagram modifications.
  4. What is up with this rash of newspeak of late? "Creative Action Request". Just call a bug a bug. Here's some more:
     Consent to an unconstitutional reduction in civil liberties == "Patriotic"
     Opposed to government policy == "Unamerican"
     Kidnap and torture == "Rendition"
  5. That'd be an even better solution: no rounding errors (not even small ones) can accumulate that way. Not that I know of. But really, layout management should be built into LabVIEW. Modern widget toolkits all handle that task for the programmer, and often allow selection of the layout management policy. See for example here.
  6. Though having such functionality can indeed be useful, I don't see why it would necessarily have to break the existing semantics. Why not add an additional comparison mode to the equals operator like we already have for arrays: you can select whether or not to compare aggregates. Preferably in a manner that is visually explicit. It is much better to clearly discriminate different types of equality than to leave the programmer guessing. If you think this is nitpicking, think again. Something as common as comparing two clusters will probably not do what you think it does, if the clusters contain floating point numbers. Consider what happens when the clusters are identical with the numbers set to NaN. For such situations I'd like to have the option to test for binary equivalence instead of numerical equivalence. For references to the same non-reentrant VI, I think the new behaviour is reasonable since there is no operational way of discriminating between two of those. However, for references to reentrant VIs this is not at all true. Reentrant VIs can maintain unique internal state (e.g. in shift registers) per reference so that a call by reference with the same inputs yields different results depending on which instance you called. Such usage is not at all pathological: it is one of the very few ways in which you can avoid code duplication when programming in G.
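The NaN pitfall is easy to demonstrate outside LabVIEW. A minimal Python sketch of numerical versus binary equality; the `flatten` helper is a made-up stand-in for flattening a cluster to its byte representation:

```python
import struct

# Two "clusters" with identical contents, including NaN.
a = (1.0, float("nan"))
b = (1.0, float("nan"))

# Numerical equality: IEEE 754 defines NaN != NaN, so the
# otherwise identical clusters compare as unequal.
numeric_equal = (a == b)                  # False

# Binary equivalence: compare the flattened byte patterns instead.
def flatten(cluster):
    return b"".join(struct.pack(">d", x) for x in cluster)

binary_equal = flatten(a) == flatten(b)   # True
```

This is exactly the distinction the post asks for: a separate, explicit mode for bit-pattern equality alongside the default numerical comparison.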
  7. Well, I suppose if it were new functionality, it would be up to NI to choose the behaviour. Even so, it would be a rather unusual choice: in all OO languages, different instances of the same class are not equal. But this is not new functionality: under previous versions of LabVIEW you could also compare VI references, and the behaviour was different from that under LabVIEW 8.0: instance equality instead of class equality. So the new behaviour breaks existing code, which is how I found out about it.
  8. Comparison of VI references has been broken going from LabVIEW 7.x to 8. See attachment. Download File:post-372-1130703059.vi
  9. Part of the problem here is that LabVIEW uses integer calculations when scaling. These round to the nearest pixel. I usually set a dominant control to "scale object with pane", and let the other controls move along with it. However, repeatedly resizing a front panel causes the controls and indicators to slowly creep in random-walk fashion as the rounding errors accumulate, particularly when resizing by small increments so that the nearest-pixel rounding error is large relative to the scaling movement. As you noticed, under LabVIEW 8 the scaling already happens while dragging the window corner or border, before releasing it. Thus, what used to be a single scaling event with typically a large size difference, and thus relatively small rounding errors, has been turned into many small scaling events, with lots of relatively large rounding errors. Thus the problem becomes visually detectable much sooner than before. Obviously, all this could have been avoided by using a (hidden/internal) floating-point representation for the control positions. That this has been plaguing LabVIEW for at least three releases, and has just been made much worse, boggles the mind.
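The creep is straightforward to reproduce. A small Python sketch (positions and panel widths are made up) of what a single shrink-and-restore of the panel does to integer pixel positions:

```python
def rescale(positions, old_width, new_width):
    # Integer scaling as described in the post:
    # round each position to the nearest pixel.
    return [round(p * new_width / old_width) for p in positions]

pos = [10, 15, 21, 33]
half = rescale(pos, 400, 200)    # [5, 8, 10, 16]
back = rescale(half, 200, 400)   # [10, 16, 20, 32]

# One shrink/grow round trip already moves three of the four
# controls; a hidden floating-point representation would have
# restored [10, 15, 21, 33] exactly.
```

With many small resize events, as in LabVIEW 8's live drag-scaling, these per-event errors accumulate into the visible random-walk creep.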
  10. Hey, as a physicist I find that rather offense. Yes, maybe there often are physics students that can hardly program and then start to use LabVIEW, with predictible results. But there is an opposite dynamic: phycisists quite frequently have extensive programming experience, e.g. with numerical codes, and end up being more or less obliged to use LabVIEW to automate some experiment, on account of LabVIEW's superior support for instrumentation. In other fields there isn't quite as much insentive for experienced programmers to switch to LabVIEW because G isn't that special as a language. The scheduling is a little odd, but the syntactic elements are nearly the same as for, say, C or Fortran.
  11. Garbage collection (GC) is more advanced than that: when a language runtime supports GC, it tracks whether or not objects remain referenced and frees the memory of no-longer-referenced objects at runtime. What LabVIEW does is merely resource tracking: put the allocated stuff in a list and free it by walking the list when the top-level VI stops. Unfortunately, stopping the VI that opened a reference does not necessarily imply that that reference is no longer in use: the reference may have been passed to a different still-running VI hierarchy. This means you have to be very careful when passing references around in your application. Make sure that such references are opened in an execution context that remains alive for the duration of use of those references. Ironically, this tracking feature was probably added with the intent to make the life of the programmer easier. It does, for simple applications with only a single running top-level VI. At the same time, it severely hampers LabVIEW's utility for large-scale programming. Of course, GC should have been used for tracking references to avoid the aforementioned problem. GC algorithms have been around for ages.
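The contrast is easy to illustrate in a language with real GC. A minimal Python sketch, where `FileRef` is a made-up stand-in for a refnum-backed resource:

```python
import weakref

class FileRef:
    """Made-up stand-in for a refnum-backed resource."""
    def __init__(self, name):
        self.name = name

# LabVIEW-style resource tracking: the top-level VI lists everything
# it opened and frees the lot when it stops, whether or not another
# hierarchy still holds one of the references.
opened = [FileRef("log"), FileRef("cfg")]
still_in_use = opened[0]   # handed to another, still-running hierarchy
opened.clear()             # top-level VI stops and walks its list

# Under GC the shared object survives, because it is still referenced.
# A weak reference reports when it becomes truly unreachable.
probe = weakref.ref(still_in_use)
alive_before = probe() is not None   # True: still referenced elsewhere
del still_in_use                     # last reference dropped
collected = probe() is None          # True in CPython (refcounting)
```

The point being: a GC-based runtime frees a reference when nothing can reach it any more, not when the hierarchy that happened to open it stops.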
  12. For an example of a VI that does that in a convenient manner see the LuaVIEW toolkit. When you extract the archive, the VI can be found at library/persistence/Libary Persist Position and Controls.vi. The way to use it is to call it with the "load" enum wired to the action input when you start your user interface VI, and call it with the "save" enum wired to the action input when you stop your user interface VI. Give all controls whose value should be persisted a description that starts with the string "(persistent)". No further programming required. The VI works by locating these controls via VI server and loading/saving their values from/to a file in the default data directory. Note that the position and size of the window is also persisted. Feel free to copy/customise the VI: it uses only a handful of subVIs and is a small peripheral part of the toolkit. I have seen variations on the same theme floating around, so you may wish to shop around. Albert-Jan
  13. You want four instances that run asynchronously? One option is to create a reentrant subVI and have four while loops, each containing a call to the reentrant VI. Because it is reentrant, each diagram reference will have its own dataspace and can be called at the same time as any other diagram reference. This is sort of like having four instances. Another option is to place a static VI reference to a reentrant VI, wire it to a VI property node that extracts the VI name, wire that name into "Open VI reference" with the "prepare for reentrant run" option mask bit set (mask value == 8) and call the run VI method on the resulting VI instance reference. To pass instantiation parameters you can call the "Set Control Value" method on the VI instance before running it. Neither of the above works when the front panel of the instance must be visible/usable. For that, you must instantiate a template (a VI with a .vit instead of .vi extension). Templates may not be loaded in memory on instantiation, so you will have to pass an appropriate path to the "Open VI Reference" call (and call "run VI" on the resulting reference).
  14. What you can do is take a VI refnum control that holds the connector pane layout, and save it as a strict typedef. Then, every time you open a VI reference to a VI with that layout you can wire a diagram constant that is linked to that typedef. When the pane changes, you will only need to update that one typedef. If the pane itself contains typedeffed inputs or outputs, some caution is in order: there is some weirdness to LabVIEW's handling of nested typedefs. I think it is because the changes are not propagated recursively. The best way I've found to avoid breakage is to hold all dependent VIs in memory and select "save all" a few times in succession (until nothing more is saved). Albert-Jan
  15. But that documentation is incomplete. The type descriptors of some types are not documented beyond the typecode. Even when you're not going to decode or process the type that can be a problem: to get at the name of a cluster element you need to know the precise length of the type data (which for some types is of variable length) since the name string is placed beyond it. Indeed, the fact that the names of cluster elements are placed there is not even documented. Though it is possible to skip beyond a type, there is no unambiguous way to work backwards from there to retrieve the preceding Pascal string of the cluster element name. Yes, I concede that it is possible to dynamically generate and parse flattened data. I was just trying to question the practicality for general purpose use. You have to look hard to find use cases where doing so makes sense. There is not much type-agnostic code that can be written in LabVIEW; it is a strictly typed language. And the number of use cases for type-driven code is limited. The only type-driven code I ever did in LabVIEW served to automatically convert a cluster into an SQL insert statement based on the cluster's type (that was based on type parsing code written by Rolf Kalbermatter). In retrospect, simply constructing some string formatting statements for the various record types would have saved a lot of time. Contrast that with dynamically typed languages. These bundle type information with data, so none of this packaging and unpackaging to pass variable-type data around. And a means of checking a type at run time is built in so writing type-driven code is trivial. Also, the composition of compound data does not change the type. For example, a Python list can hold elements of multiple types. This means that there is lots more scope for writing type-agnostic code. Albert-Jan
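The cluster-to-SQL example translates naturally into a dynamically typed language. A minimal Python sketch (table and field names are made up, and the quoting is deliberately naive, not injection-safe) of type-driven code where the runtime type check is trivial:

```python
def to_sql_insert(table, record):
    """Build an INSERT statement from a dict by inspecting the
    value types at run time -- the dynamic-language equivalent of
    converting a cluster based on its type descriptor."""
    def literal(v):
        if isinstance(v, str):
            return "'" + v.replace("'", "''") + "'"
        if isinstance(v, bool):          # check bool before int:
            return "1" if v else "0"     # bool is a subclass of int
        if isinstance(v, (int, float)):
            return str(v)
        raise TypeError(f"unsupported type: {type(v).__name__}")

    cols = ", ".join(record)
    vals = ", ".join(literal(v) for v in record.values())
    return f"INSERT INTO {table} ({cols}) VALUES ({vals});"

stmt = to_sql_insert("readings", {"sensor": "T1", "value": 21.5, "ok": True})
# -> INSERT INTO readings (sensor, value, ok) VALUES ('T1', 21.5, 1);
```

No flattened type descriptors to parse: the data carries its types, and `isinstance` does the dispatch.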