
Albert-Jan Brouwer

Members · 29 posts · Joined · Last visited: Never

Everything posted by Albert-Jan Brouwer

  1. They're not comparable. The idea of scripting is to simplify programming by making functionality available for calling from a relatively easy and flexible interpreted language (a scripting language). LuaVIEW allows Lua to call LabVIEW: it is for scripting LabVIEW code. LabPython allows LabVIEW to call Python, which is the reverse of scripting: calling something interpreted from compiled code. The C code making up the Lua language and most of its C-based interface to LabVIEW must reside in a CIN or DLL for it to run as part of the LabVIEW runtime. No way of getting around that other than a separate runtime, with all the resulting integration issues. Only NI could do better by statically linking a scripting language into the LabVIEW runtime. Indeed, I wonder why NI has not done so: LabVIEW has been rather stagnant, as a language (I'm not talking the libraries, drivers, or development environment here), so adding a scripting language would be a quick means of getting with the times.
  2. But everything seems to be going up of late. Most stocks, real estate, commodities, gold and silver. So, if everything is going up, maybe the dollar is going down? But no, that cannot be. The US government reports a moderate CPI figure. And they would never lie about that, now would they? They've been such conscientious purveyors of truth.
  3. Irritating behaviour indeed. But there is a somewhat laborious workaround: after making changes to the enum typedef, select the enum item with the widest text string in the strict typedef control, then select "apply changes". This will cause the array widths of the various connected diagram constants to scale to a width sufficient for the widest item. That way, no enum item text gets hidden. Note that this requires all VIs that reference the typedef for an array diagram constant to be loaded in memory when applying the changes. On applying the changes, these VIs will be marked as having been modified. You'll either have to save them individually, or select "save all" in order to persist these diagram modifications.
  4. What is up with this rash of newspeak of late? "Creative Action Request". Just call a bug a bug. Here's some more: Consent to an unconstitutional reduction in civil liberties == "Patriotic" Opposed to government policy == "Unamerican" Kidnap and torture == "Rendition"
  5. That'd be an even better solution: no rounding errors (not even small ones) can accumulate that way. Not that I know of. But really, layout management should be built into LabVIEW. Modern widget toolkits all handle that task for the programmer, and often allow selection of the layout management policy.
  6. Though having such functionality can indeed be useful, I don't see why it would necessarily have to break the existing semantics. Why not add an additional comparison mode to the equals operator like we already have for arrays: you can select whether or not to compare aggregates. Preferably in a manner that is visually explicit. It is much better to clearly discriminate different types of equality than to leave the programmer guessing. If you think this is nit picking, think again. Something as common as comparing two clusters will probably not do what you think it does, if the clusters contain floating point numbers. Consider what happens when the clusters are identical with the numbers set to NaN. For such situations I'd like to have the option to test for binary equivalence instead of numerical equivalence. For references to the same non-reentrant VI, I think the new behaviour is reasonable since there is no operational way of discriminating between two of those. However, for references to reentrant VIs this is not at all true. Reentrant VIs can maintain unique internal state (e.g. in shift registers) per reference so that a call by reference with the same inputs yields different results depending on which instance you called. Such usage is not at all pathological: it is one of the very few ways in which you can avoid code duplication when programming in G.
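The NaN pitfall is easy to demonstrate in a textual language. A minimal Python sketch, with tuples standing in for clusters, contrasting numerical equality with the binary-equivalence test proposed above:

```python
import math
import struct

# Two "clusters" (tuples) with identical contents, including NaN.
a = (1.0, float("nan"))
b = (1.0, float("nan"))

# Numerical equality: NaN != NaN per IEEE 754, so the clusters compare
# unequal even though their contents are bitwise identical.
numeric_equal = all(x == y for x, y in zip(a, b))

# Binary equivalence: flatten both clusters to bytes and compare the bytes.
def flatten(cluster):
    return b"".join(struct.pack(">d", x) for x in cluster)

binary_equal = flatten(a) == flatten(b)

print(numeric_equal, binary_equal)  # False True
```

This is exactly the case where an option to test for binary rather than numerical equivalence would give the intuitively expected answer.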
  7. Well, I suppose if it were new functionality, it would be up to NI to choose the behaviour. Even so, it would be a rather unusual choice: in all OO languages, different instances of the same class are not equal. But this is not new functionality, under previous versions of LabVIEW you could also compare VI references, and the behaviour was different than under LabVIEW 8.0: instance equality instead of class equality. So the new behaviour breaks existing code, which is how I found out about it.
  8. Comparison of VI references has been broken going from LabVIEW 7.x to 8. See attachment. Download File:post-372-1130703059.vi
  9. Part of the problem here is that LabVIEW uses integer calculations when scaling. These round to the nearest pixel. I usually set a dominant control to "scale object with pane", and let the other controls move along with it. However, repeatedly resizing a front panel causes the controls and indicators to slowly creep in random-walk fashion as the rounding errors accumulate, particularly when resizing by small increments so that the nearest-pixel rounding error is large relative to the scaling movement. As you noticed, under LabVIEW 8 the scaling already happens while dragging the window corner or border, before releasing it. Thus, what used to be a single scaling event with typically a large size difference, and thus relatively small rounding errors, has been turned into many small scaling events, with lots of relatively large rounding errors. Thus the problem becomes visually detectable much sooner than before. Obviously, all this could have been avoided by using a (hidden/internal) floating-point representation for the control positions. That this has been plaguing LabVIEW for at least three releases, and has just been made much worse, boggles the mind.
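The effect of many small scaling events versus one large event can be simulated. A Python sketch with made-up numbers: a control at x=340 on a 1000-pixel panel is scaled down to 500 pixels, once in a single step and once one pixel at a time, rounding to the nearest pixel after every step:

```python
# One large resize causes one rounding error; many small resizes cause many.

def one_step(pos):
    # Single scaling event: 1000 px -> 500 px.
    return round(pos * 500 / 1000)

def many_steps(pos):
    # Five hundred one-pixel scaling events: 1000 -> 999 -> ... -> 500.
    # Each step rounds to the nearest pixel, as integer positions force.
    for w in range(999, 499, -1):
        pos = round(pos * w / (w + 1))
    return pos

def many_steps_float(pos):
    # Hidden floating-point position: no per-step rounding, so the
    # intermediate steps telescope away and the path does not matter.
    for w in range(999, 499, -1):
        pos = pos * w / (w + 1)
    return pos

print(one_step(340))                 # 170: proportional position
print(many_steps(340))               # far off: sub-half-pixel steps round away
print(round(many_steps_float(340)))  # 170: rounding only once, at display time
```

With integer positions, each one-pixel shrink moves the control by only ~0.34 pixels, which rounds back to zero movement, so the control barely scales at all; the float representation lands on the same pixel regardless of how the resize was subdivided.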
  10. Hey, as a physicist I find that rather offensive. Yes, there are often physics students who can hardly program and then start to use LabVIEW, with predictable results. But there is an opposite dynamic: physicists quite frequently have extensive programming experience, e.g. with numerical codes, and end up being more or less obliged to use LabVIEW to automate some experiment, on account of LabVIEW's superior support for instrumentation. In other fields there isn't quite as much incentive for experienced programmers to switch to LabVIEW because G isn't that special as a language. The scheduling is a little odd, but the syntactic elements are nearly the same as for, say, C or Fortran.
  11. Garbage collection (GC) is more advanced than that: when a language runtime supports GC, it tracks whether or not objects remain referenced and frees the memory of no-longer-referenced objects at runtime. What LabVIEW does is merely resource tracking: put the allocated stuff in a list and free it by walking the list when the top-level VI stops. Unfortunately, stopping the VI that opened a reference does not necessarily imply that that reference is no longer in use: the reference may have been passed to a different still-running VI hierarchy. This means that you have to be very careful when passing references around in your application. Make sure that such references are opened in an execution context that remains alive for the duration of use of those references. Ironically, this tracking feature was probably added with the intent to make the life of the programmer easier. It does, for simple applications with only a single running top-level VI. At the same time, it severely hampers LabVIEW's utility for large-scale programming. Of course, GC should have been used for tracking references to avoid the aforementioned problem. GC algorithms have been around for ages.
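A toy Python model of the resource-tracking scheme described above (the names are illustrative, not LabVIEW API): everything an owner opened is freed when that owner stops, regardless of whether another hierarchy still holds the reference:

```python
# Resource tracking, LabVIEW-style: references are freed by walking the
# owner's list when the owner stops, not when they become unreferenced.

class Registry:
    def __init__(self):
        self.open_refs = {}

    def open_ref(self, owner, name):
        # Record the reference against the top-level VI that opened it.
        self.open_refs.setdefault(owner, []).append(name)
        return name

    def stop(self, owner):
        # Free everything the owner opened, whether or not some other
        # still-running hierarchy is using it.
        return self.open_refs.pop(owner, [])

reg = Registry()
queue_ref = reg.open_ref("top_level_vi_A", "queue_1")
# ... queue_ref is handed to a still-running hierarchy B ...
freed = reg.stop("top_level_vi_A")
print("queue_1" in freed)  # True: B's copy of the reference is now dead
```

A garbage collector would instead keep "queue_1" alive for as long as any part of the program still referenced it, which is exactly the behaviour the post argues for.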
  12. For an example of a VI that does that in a convenient manner see the LuaVIEW toolkit. When you extract the archive, the VI can be found at library/persistence/Libary Persist Position and Controls.vi. The way to use it is to call it with the "load" enum wired to the action input when you start your user interface VI, and call it with the "save" enum wired to the action input when you stop your user interface VI. Give all controls whose value should be persisted a description that starts with the string "(persistent)". No further programming required. The VI works by locating these controls via VI server and loading/saving their values from/to a file in the default data directory. Note that the position and size of the window is also persisted. Feel free to copy/customise the VI: it uses only a handful of subVIs and is a small, peripheral part of the toolkit. I have seen variations on the same theme floating around, so you may wish to shop around. Albert-Jan
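The scheme can be sketched in Python, with hypothetical control records and an in-memory JSON file standing in for the VI-server lookup and the datafile:

```python
import json
import io

# Persist the values of all "controls" whose description starts with
# "(persistent)", mirroring the load-at-start / save-at-stop scheme.

controls = [
    {"name": "Gain",   "description": "(persistent) amplifier gain", "value": 2.5},
    {"name": "Status", "description": "transient readout",           "value": "idle"},
]

def save(controls, fp):
    data = {c["name"]: c["value"] for c in controls
            if c["description"].startswith("(persistent)")}
    json.dump(data, fp)

def load(controls, fp):
    data = json.load(fp)
    for c in controls:
        if c["name"] in data:
            c["value"] = data[c["name"]]

buf = io.StringIO()
save(controls, buf)             # at "stop": persist the marked controls
controls[0]["value"] = 0.0      # simulate a fresh session's default value
buf.seek(0)
load(controls, buf)             # at "start": restore the persisted values
print(controls[0]["value"])     # 2.5
print(controls[1]["value"])     # idle (not marked, so not persisted)
```

The point of the description-string convention is that no per-control code is needed: marking a control is enough.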
  13. You want four instances that run asynchronously? One option is to create a reentrant subVI and have four while loops, each containing a call to the reentrant VI. Because it is reentrant, each diagram reference will have its own dataspace and can be called at the same time as any other diagram reference. This is sort of like having four instances. Another option is to place a static VI reference to a reentrant VI, wire it to a VI property node that extracts the VI name, wire that name into "Open VI reference" with the "prepare for reentrant run" option mask bit set (mask value == 8) and call the run VI method on the resulting VI instance reference. To pass instantiation parameters you can call the "Set Control Value" method on the VI instance before running it. Neither of the above works when the front panel of the instance must be visible/usable. For that, you must instantiate a template (a VI with a .vit instead of .vi extension). Templates may not be loaded in memory on instantiation, so you will have to pass an appropriate path to the "Open VI Reference" call (and call "run VI" on the resulting reference).
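A rough Python analogue of the reentrancy idea, with a thread per instance and a private dict standing in for each instance's dataspace:

```python
import threading

# Each "call site" of a reentrant subVI gets its own dataspace. Here
# that is modelled as one dict per instance; the four instances run in
# parallel without sharing state.

def instance(dataspace, iterations):
    for _ in range(iterations):
        dataspace["count"] += 1   # private per-instance state

dataspaces = [{"count": 0} for _ in range(4)]
threads = [threading.Thread(target=instance, args=(d, 1000))
           for d in dataspaces]
for t in threads:
    t.start()
for t in threads:
    t.join()
print([d["count"] for d in dataspaces])  # [1000, 1000, 1000, 1000]
```

Because no state is shared, each instance can run concurrently with the others, which is the property the reentrant-VI options above provide in G.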
  14. What you can do is take a VI refnum control that holds the connector pane layout, and save it as a strict typedef. Then, every time you open a VI reference to a VI with that layout you can wire a diagram constant that is linked to that typedef. When the pane changes, you will only need to update that one typedef. If the pane itself contains typedeffed inputs or outputs, some caution is in order: there is some weirdness to LabVIEW's handling of nested typedefs. I think it is because the changes are not propagated recursively. The best way I've found to avoid breakage is to hold all dependent VIs in memory and selecting "save all" a few times in succession (until nothing more is saved). Albert-Jan
  15. But that documentation is incomplete. The type descriptors of some types are not documented beyond the typecode. Even when you're not going to decode or process the type that can be a problem: to get at the name of a cluster element you need to know the precise length of the type data (which for some types is of variable length) since the name string is placed beyond it. Indeed, the fact that the names of cluster elements are placed there is not even documented. Though it is possible to skip beyond a type, there is no unambiguous way to work backwards from there to retrieve the preceding Pascal string of the cluster element name. Yes, I concede that it is possible to dynamically generate and parse flattened data. I was just trying to question the practicality for general purpose use. You have to look hard to find use cases where doing so makes sense. There is not much type-agnostic code that can be written in LabVIEW, it is a statically typed language. And the number of use cases for type-driven code is limited. The only type-driven code I ever did in LabVIEW served to automatically convert a cluster into an SQL insert statement based on the cluster's type (that was based on type parsing code written by Rolf Kalbermatter). In retrospect, simply constructing some string formatting statements for the various record types would have saved a lot of time. Contrast that with dynamically typed languages. These bundle type information with data, so none of this packaging and unpackaging to pass variable type data around. And a means of checking a type at run time is built in so writing type-driven code is trivial. Also, the composition of compound data does not change the type. For example, a Python list can hold elements of multiple types. This means that there is lots more scope for writing type-agnostic code. Albert-Jan
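The cluster-to-SQL anecdote shows why type-driven code is trivial in a dynamically typed language. A hypothetical Python sketch (quoting is deliberately simplified; real code should use parameterized queries):

```python
# Build an SQL insert from a record by inspecting value types at run
# time. No flattened strings or type descriptor parsing required: the
# values carry their types with them.

def to_insert(table, record):
    def literal(v):
        if isinstance(v, str):
            return "'" + v.replace("'", "''") + "'"
        if isinstance(v, bool):       # checked before treating as a number
            return "TRUE" if v else "FALSE"
        return str(v)
    cols = ", ".join(record)
    vals = ", ".join(literal(v) for v in record.values())
    return f"INSERT INTO {table} ({cols}) VALUES ({vals});"

row = {"name": "DMM-1", "channel": 3, "active": True}
print(to_insert("instruments", row))
# INSERT INTO instruments (name, channel, active) VALUES ('DMM-1', 3, TRUE);
```

The whole type-driven dispatch is a couple of `isinstance` checks, versus recursively parsing an incompletely documented type descriptor array in G.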
  16. Indeed, but in both cases there is a specific type associated with the data. In the first case it is the wire that carries the type. In the second case it is the control. I agree that the diagram code that writes to the queue need not necessarily concern itself with the type. But setting the value of the data, which is what is usually meant by "production", always requires the type. Well..., OK, not always. There is a rather sick way to write polymorphic production code: programmatically generate both an I16 type definition array and a corresponding flattened string. Unfortunately, the format of the type definition array is complex and incompletely documented. Recursively parsing the type definition array and flattened data for polymorphic consumption code is much harder still. Not impossible though: LuaVIEW does so when converting arbitrary compound LabVIEW data into Lua tables. Albert-Jan
  17. Yes, no. Though LabVIEW is (as yet) unbeaten at talking to instrumentation, there are a number of issues surrounding the management of instruments that can benefit from scripting. In brief, the suggestion is to do low-level operations from a LabVIEW-implemented module VI, and high-level actions from scripts. Standard scripting strategy, really. So what issues are these? 1. Test sequencing Typically, during a test or experiment, you want to step the physical parameters through a series of states. The dataflow nature of LabVIEW necessitates extra work to sequence steps and control their execution. Instead, test sequencing is a good fit for scripts since script execution is step-by-step and is provided with control over execution (pause, resume, stop). Modules allow you to expose LabVIEW-implemented instrument actions to test-sequence scripts. 2. Synchronous operations When performing a step in a test sequence, the corresponding instrumentation operation often has to be known to have completed before the next step can commence. A synchronous interface as provided by callable module functions makes this easy and explicit. It is difficult to create a modular instrumentation architecture with LabVIEW that exposes a synchronous interface for test sequencing. For this reason, LabVIEW-based instrumentation architectures typically rely on message queues, which are asynchronous. An asynchronous architecture has a potential performance advantage but is much harder to reason about, is difficult to debug, and is not conducive to error handling. 3. Error handling Talking to instrumentation is error prone. The exception mechanism of Lua allows you to disentangle the error handling. Error messages include line numbers, so it's easy to figure out at what step of a test sequence specified via a script things went wrong. 4. Modularity With instrumentation there is particular need for software modularity. 
When you change or add instruments, you don't want to rearchitect your application. When connecting multiple instruments of the same type, you want to instantiate from a single implementation. When using different instruments with similar functionality, you want to isolate them behind an abstract interface so that you do not need to change your test scripts. It is useful to be able to dynamically load and reload the instrumentation software. Modules make all that easy. 5. Configuration Doing the configuration from a script saves the bother of creating a graphical user interface: a text editor can be used instead. Most instrument configuration settings change infrequently if at all. For those, the time invested in creating a graphical configuration interface cannot be recouped. Of course, anything that needs very frequent adjustment should be given a LabVIEW GUI. Since configuration scripts are little programs, they allow for conditional configuration. E.g. configure the system differently depending on whether it is running in test or operational mode, or as a built application instead of under the development system. Batch operations are easy too, e.g. iterate over a table to configure multiple channels. Also, with configuration scripts there is no ambiguity as to when the changes are effected: this happens when the script is next run. And scripts are pretty compact so you can log them for future reference. In summary, the point of using a module is to wrap the LabVIEW instrumentation in a manageable format that exposes high-level functionality to the Lua side for scripting. Note that this means that the finicky details of handling the instrument should remain hidden and be dealt with solely on the LabVIEW side. Also, it may be sensible to have one module wrap multiple instruments when the operations of those instruments are, from a high-level point of view, inextricably linked. 
For example, if you have a digital multimeter fronted by a switch, a DMM reading will always be preceded by switching to the proper channel. Albert-Jan
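The DMM-behind-a-switch example can be transliterated into a Python sketch (the instrument classes are hypothetical stand-ins): the module exports one high-level operation and keeps the switch-then-read sequencing hidden inside:

```python
# A module wrapping two inextricably linked instruments: a multimeter
# fronted by a switch. Callers see one high-level operation; the finicky
# channel selection stays hidden on the implementation side.

class Switch:
    def __init__(self):
        self.channel = None
    def select(self, channel):
        self.channel = channel

class DMM:
    def read(self):
        return 1.234              # placeholder measurement

class DMMModule:
    """Wraps the DMM and its front-end switch as one module."""
    def __init__(self):
        self._switch = Switch()
        self._dmm = DMM()

    # The exported, high-level function: a reading on a given channel.
    def read_channel(self, channel):
        self._switch.select(channel)   # always precedes the reading
        return self._dmm.read()

module = DMMModule()
print(module.read_channel(5))  # 1.234
```

A test-sequence script then only ever calls `read_channel`, which is the synchronous, high-level interface argued for in points 1 and 2 above.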
  18. Using variants or flattened strings you can make the transport type-agnostic, but, as has been pointed out, you always need the type definition on the production and consumption end of your queue: G is a statically typed language. One thing that is easy to overlook is that you can go a little further using variants: you cannot change the elements of a cluster at run time, but you can add/remove named attributes to/from variants. See the variant palette. The attribute lookup algorithm scales better than O(n) with the number of attributes. Likely it is O(log(n)) though the scaling is so weak that it might even be O(1), with the degradation being due to cache misses. So it is suitable for dictionary-like purposes as well. Albert-Jan
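A Python analogue of variant attributes: a value bundled with a set of named attributes, here backed by a hash table with amortized O(1) lookup:

```python
# A variant-like wrapper: the value travels together with named
# attributes that can be added and removed at run time, unlike the
# fixed elements of a cluster.

class Variant:
    def __init__(self, value):
        self.value = value
        self.attributes = {}          # name -> attribute data

v = Variant([1.0, 2.0, 3.0])
v.attributes["units"] = "volts"       # add attributes at run time
v.attributes["channel"] = 7
print(v.attributes.get("units"))      # volts
print(v.attributes.get("missing"))    # None
```

This is the dictionary-like usage referred to above: the attribute set can grow and shrink per value, with lookup cost nearly independent of the number of attributes.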
  19. Jeff, since you're looking into a flexible instrumentation control architecture, and are considering Lua for scripting, let me point out LuaVIEW modules. They work well for wrapping instrumentation drivers such that the corresponding instruments can be configured, managed, and operated from Lua. The implementation is simply a LabVIEW VI with a bunch of cases that correspond to the various instrument functions to be exposed. The functions of the instrument that must be callable from test or control scripts can be exported. Management of the instrument (configuration, initialization, periodic actions, cleanup) can be done from the script that opens the module: all other functions of the module are available to it. Modules are dynamically loaded and, with a little care, can even be reinitialized at runtime without breaking ongoing calls to export functions. When you have multiple instruments of the same type, use a template VI so that you can open multiple module instances. On opening, a configuration table can be passed that maps to some arbitrary LabVIEW cluster. To track the configuration, have the script that opens and configures the module log itself. A task script can retrieve itself using the task.script() function. Albert-Jan
  20. I've used Eclipse for a small amount of Java work, and it sure has an amazingly broad scope, considering that it is free. Project management, refactoring tools, version control integration, unit testing, and lots and lots of optional plugins. And the C/C++ tooling has come along as well. I'm afraid so. At best we could write some kind of TCP/IP server that runs under the LabVIEW runtime and does things like open/close/pop-up/load VIs/run VIs and is messaged from an Eclipse plug-in. But the level of integration would be shallow, probably hardly worth it. The problem is that LabVIEW thoroughly mixes up language, tools, and libraries. That's mostly by necessity. Being graphical, a lot of infrastructure is required just to work with the language (G: while, for, case, sequence, etc.). NI however would be able to do much better. Ideally, they'd target Java bytecode with the G compiler. That way G code could run inside the same Java runtime as Eclipse. And it would make G much more portable. A set of LabVIEW widgets that uses SWT would be highly desirable as well. That maybe requires a bit of explanation. In the old days, Java only had AWT (the Abstract Window Toolkit), which was anything but advanced. In response to the need for building more advanced GUIs, Sun architected Swing. Unfortunately, Swing is way over-architected and does not map onto platform-native widgets, which makes the GUIs look weird. As do LabVIEW GUIs, really. The Eclipse developers instead implemented SWT (the Standard Widget Toolkit), whose primitive widgets map onto platform-native widgets. That's why Eclipse looks slick and is responsive (once it has loaded). Of course, there is a lot of politics and strategy working away in the background. Sun is violently opposed to SWT, and IBM is a little loath to push it too publicly because that might be considered trying to fragment the Java platform. 
What doesn't help either is the very name "Eclipse": it can be considered a dig at Sun, though the Eclipse people deny that that was the intent (yeah, right). Why has IBM given away arguably the most advanced development environment (much more advanced than the LabVIEW IDE anyway)? My guess is that they consider it pouring fertilizer onto the open source and particularly Java community, with the intent of harvesting some of the resulting crop. Also, IBM is trying to work its way up the food chain, and is intent on commoditizing the ingredients that go into their services and advanced products. Particularly those supplied by Microsoft (Eclipse is the main competitor to Visual Studio) and Intel, the two companies whose virtual monopolies were unintentionally spawned by IBM fumbling the ball back in the early days of the PC. For instance, IBM recently sold their PC division. It is a low margin business. It will also allow them to work more closely with AMD and maybe even fab some AMD processors without AMD's customers having to fear that by using AMD instead of Intel processors they will subsidize a competitor. Interesting times.
  21. Though that specific example is not included, there is luaview/examples/Do a script.vi which does some simple calls of LabVIEW-side functions, including dialog.one_button(). The two-button dialog binding is in fact also included. See luaview/functions/pop-up dialogs/dialog.two_button_lua.vi. Your experience does however suggest the need for adding some step-by-step binding instructions to the "Getting Started" section of the manual. There is no built-in activation/licensing mechanism so we (CIT) do not have a means of actively tracking its use. As to its adoption, or lack thereof, it seems there is no barrier for people suitably primed: two of the early adopters I know of had created a LabVIEW-implemented scripting language before deciding to switch to Lua. It is my impression though that most of the LabVIEW community is not (yet) familiar with scripting. If there are objections, it is easy enough to set up a mailing list.
  22. The examples were meant to briefly demonstrate LuaVIEW-specific features. They're pretty useless as an introduction to the language. The book ( http://www.lua.org/pil/ ) on the other hand is great for that. The next LuaVIEW version will likely include a somewhat more extensive example that shows how to set up a server-side application glued together with Lua scripts. There is none as yet. There is however a Lua mailing list. From the LuaVIEW FAQ: The "LabVIEW scripting feature", which is still under development by NI, exposes the object embedding hierarchy of the LabVIEW development system, diagrams in particular, as property and invoke nodes. This allows LabVIEW to wire LabVIEW: the "scripting feature" is concerned with code generation, not scripting.
  23. Calling LabVIEW is the main problem with integrating a scripting language into LabVIEW. Since scripts tend to be used on the "outside" instead of the "inside" of an application there is little use for a scripting language that cannot call LabVIEW. LabVIEW is rather odd as compared to other languages. VI calls are not made by passing parameters, results, and a return address on the C stack. Consequently the standard calling mechanisms do not work as-is and some kind of adapter is required. This is why Lua was chosen as the scripting language for LabVIEW: it has sufficient hooks to allow its C calling to be captured and adapted. This allows for a mechanism that enables calls of LabVIEW code while running inside the LabVIEW process. Don't hold your breath for Python to be similarly co-opted. Its embedding and extension C API is rather more complex. For some further details of this issue see this page: http://www.citengineering.com/luaview/calling_labview.html
  24. An update of cubic spline fitting code previously released for LabVIEW 4.0 and 5.1. It has been cleaned up slightly and is compiled for LabVIEW >= 7.0. A spline fit is particularly useful for obtaining a representation of noisy x,y data without a model function. The resultant spline is an analytical object that can be interpolated at arbitrary positions. Download File:post-372-1103116554.zip
  25. Yes. Particularly when the VI that threw the error has a large diagram, e.g. a state machine with many states or a driver with many action cases: with only a call chain you'll still be searching around. In such a case one should really append the state or action enum to the source string of any errors that occur. Ideally errors should explain to the user what went wrong, where the error occurred, and what the consequences of the error were (e.g. the halting of subsystems). Where possible, suggesting a means of resolving the error is also useful. Of course, all that is a lot of work. But then, time invested in error handling tends to pay for itself later in terms of easier testing and deployment, and reduced support costs. You might add: 4. Code limits the error effect scope by aborting only those subsystems dependent on the operation that failed. Unfortunately, going from 2 to 3 or from 3 to 4 involves restructuring your program, which makes the gradual extension of the error handling costly. When starting on a large application it is probably cheaper to opt for full-featured handling right away. I downloaded the archive but it won't unpack. Might the file be corrupted?
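The suggestion to append the state or action enum to the error source can be sketched in Python (the error is represented as a dict; names are hypothetical):

```python
# Tag an error's source string with the VI name and state-machine state
# so the failing case can be found without searching a large diagram.

def tag_error(error, vi_name, state):
    if error is not None:
        error = dict(error)   # leave the caller's copy untouched
        error["source"] = f'{vi_name}, state "{state}": {error["source"]}'
    return error

err = {"code": 7, "source": "Open File: file not found"}
print(tag_error(err, "Logger.vi", "open log")["source"])
# Logger.vi, state "open log": Open File: file not found
```

Passing `None` (no error) through unchanged mirrors the usual G idiom of wiring the error cluster straight through when its status is clear.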