Everything posted by Mads
-
TDMS is extremely flexible. You can dump anything you want into the files, at any time, without worrying about how to find it again. The downside in my case is the speed you get when you need to read the data. Even defragmented (a must if you need to write in small and varying segments, otherwise the performance gets really crappy), a custom binary format will be much, much faster to read.
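The read-speed advantage of a custom format is easy to picture with a fixed-record layout. Here is a minimal Python sketch (a hypothetical record layout, not anything from TDMS itself): because every record has the same size, a reader can seek straight to record i with no index lookups or fragmented-segment reassembly.

```python
import io
import struct

# Hypothetical layout: one f8 timestamp followed by four f8 channel values.
# Record i starts at exactly i * RECORD.size bytes into the file.
RECORD = struct.Struct("<5d")  # little-endian, 5 doubles per record

def write_records(stream, records):
    for t, values in records:
        stream.write(RECORD.pack(t, *values))

def read_record(stream, index):
    # Direct seek: O(1) cost regardless of how large the file is.
    stream.seek(index * RECORD.size)
    fields = RECORD.unpack(stream.read(RECORD.size))
    return fields[0], list(fields[1:])

f = io.BytesIO()  # stands in for a file on disk
write_records(f, [(0.0, [1, 2, 3, 4]), (0.1, [5, 6, 7, 8])])
t, vals = read_record(f, 1)
```

A container format that allows arbitrary segments written at arbitrary times has to pay for that flexibility at read time; a fixed layout trades the flexibility away for the direct seek above.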
-
I see your point and agree that it would be better to have the file somewhere you can write to it. On the other hand: how often do you need to change LabVIEW-specific settings? The user can still do it manually if required, and/or you could edit the access rights so that the program has access to write to it. You can also store many of these in another configuration file and override the settings of the .ini file in your own code as soon as it executes. That is the case for the VI server port you mention, for example (it can be set programmatically using the VI Server Port property).
4 replies · Tagged with: preference file, ini file (and 1 more)
-
Do you really need to both specify a custom LabVIEW-type .ini file for each run *and* pass custom parameters to the application via the command line feature? Let me assume that the reason for this request is just that you have not explored all the options yet, and give you some pointers (if you know all this already and it is not applicable, just ignore the "instructional tone" of the following text... ;-) )

The built application will always read the INI file, and the LabVIEW Run-Time Engine will automatically use the LabVIEW keys it recognises in the section that has the same name as the executable. If you want to use the same file for additional parameters you can, but then you will have to write code that reads the file and handles your custom keys yourself. Such use of the ini file is a nice solution for parameters that should be somewhat configurable, but which do not need to be changed frequently. For parameters you want to change often you should instead make a user interface in the application that reads and writes to separate configuration files. Those files should be stored in the directories the OS dedicates to application data (on Windows this would be the AppData or ProgramData folders).

The command line argument feature is a different way "in", with its own use case (I guess NI did not expect you to combine LabVIEW-specific arguments there AND custom arguments, so they turn the former off if you activate the interface for the latter). There you can specify parameters more dynamically at the call of the application instead, which is more practical for a limited set of parameters that you may (or may not) want to define at startup. Most people do not want to have to specify parameters in this fashion, but it can be a good way to allow other applications or an administrator to individualize each session.
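The split described above can be sketched in ordinary Python terms: the run-time engine consumes the keys it recognises in the section named after the executable, while your own code reads any extra custom keys from the same file. The key names below are hypothetical, just to show the mechanism.

```python
import configparser

# A hypothetical MyApp.ini: the first two keys are the kind the LabVIEW
# Run-Time Engine would recognise and consume; PollIntervalMs is a custom
# key that only application code knows about.
ini_text = """
[MyApp]
server.tcp.enabled=True
HideRootWindow=True
PollIntervalMs=500
"""

cfg = configparser.ConfigParser()
cfg.read_string(ini_text)

# Application code handles its own custom key; the LabVIEW-recognised
# keys in the same section are simply ignored here.
poll_ms = cfg.getint("MyApp", "PollIntervalMs")
```

The point is that one file can serve both consumers, but nothing reads your custom keys for free: that part is code you write yourself.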
-
Programmatic Saves of LabVIEW Plots and Charts
Mads replied to DMC Engineering's topic in LabVIEW General
There is another interesting option here. The end goal of the user is often not to save the graph as an image, but to get the image into a document. Instead of forcing the user to save the image as a file first, you can add a "Copy As Image" option. The key to getting this functionality is to generate the image and load it into a picture control in the background, and then use the picture control's Export Image method with the target set to Clipboard. The attached VI does the job for you once you have generated the picture. Image to Clipboard.vi -
Regarding ID: 3004519 - Use custom decimal sign for floats: I hope the automatic solution is chosen. Otherwise you will have to know up front which decimal sign has been used, and that is not the case when configuration files are shared between computers with different decimal signs (useful in client/server applications, for example). I can of course write code that opens the file and checks prior to calling the OpenG function, but that is not very smooth. Doing it automatically might not work perfectly for every case (let's say someone has specified the decimal sign to be something other than period or comma; after all, you are free to do so, under Windows at least), but then those users can analyze and specify the decimal sign themselves. The automatic solution will still be able to offer a better solution for 95% of the cases...
-
Clean old data into Multicolumn List control
Mads replied to spaghetti_developer's topic in LabVIEW General
Writing an empty array to the item names property will clear it; are you sure you have indeed done so when you tested it (perhaps you had other code that refilled it with old data from a shift register just afterwards)? In general this should be an easier task in LabVIEW. I've suggested it on the idea exchange here. I've also made an RCF plug-in that will do it for you. -
Thanks for the tips, guys. I had already looked a bit at the MikroTik solutions...but we'll see what we'll end up doing. For now I've set up the second port to automatically get reconfigured based on the IP of the primary port. That too took some effort (NI's System Configuration API came to the rescue) and it's not a good replacement for DHCP...but it helps.
-
We have embedded a cFP-2220 into an instrument that should have two network interfaces; however, both of them *have* to use dynamic IP addresses, and this is not supported by any of NI's controllers. The secondary port is always static. Adding a second controller just to get two Ethernet interfaces capable of running DHCP is not a good solution.

I'm thinking that perhaps we could find a router (?) on a tiny board that could act as a DHCP client on one end, and NAT the traffic to the secondary port of the controller, set with a fixed IP... (Or we could have only the primary, or both ports, on the controller connected to this router card.) The important thing is that from the outside it should look like a device with two different NICs, running DHCP on two different networks...but on the inside it could be just one interface (or two if necessary, but then it has to be static IPs on that side anyway). Does anyone know of a candidate for such a solution, or have other suggestions on how to solve such a challenge? The "router" should be small (smaller than the contents of a PAC) and energy efficient (more so than a PAC, which typically uses 3.5-4.5 W).

PS. In reality the devices should only use dynamic addresses when there is a DHCP server available; they need a special(!) feature that makes them fall back to the previously received address if the DHCP server is unreachable...but that is a secondary issue. We have been able to meet that requirement for the primary port (see discussion here), but with the second port/an external router that will become an issue again...
-
That did the trick, thanks François
54 replies · Tagged with: alignement, dialog (and 3 more)
-
I have installed UI Tools 1.1.0.9 under LV2011 and seem to have all the other packages that it depends on installed, but the state machine VIs still seem to be missing. When I start the Control Generator it starts to look in <userlib>:\_LAVACR\UI Tools\_lava_lib_ui_tools.llb for Add State(s) to Queue_jki_lib_state_machine_lava_lib_ui_tools.vi, but fails to find it...not surprisingly, since that llb does not exist. I have JKI's state machine (2.0.0-1), but these seem to be renamed copies of some of the VIs in that package. I could manually replace the missing VIs with VIs from the JKI State Machine package, but then I would need to do this again if there is an update to UI Tools, so it would be better if I figured out why the problem occurs. Any ideas?
-
"The front panel window must include a National Instruments copyright notice." That must be a joke. How many of you do that? I could always find a place for it in an about window (although I do not see what NI should be doing there in our app, regardless of the fact that it was developed in LV), but on the front panel of one of our commercial apps? No way.
-
Is the portal view gone, or am I just unable to find it? Is it meant to be the "View Unread Content" page? Having one page to go to for an overview of the latest activity (whether read or not) across the whole site was very practical.
-
I'm working on an extension to the Modbus RTU protocol. The goal is to make a TCP/IP to Modbus RTU gateway that will allow us to use an existing network client (or at least as much of it as possible) as a user interface for an instrument that in this case will only be reachable by serial communication via an acoustic modem (i.e. low bandwidth + high latency). The instrument runs LabVIEW RT on an embedded FieldPoint controller.
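The core translation step such a gateway performs can be sketched in Python. A Modbus TCP ADU carries a 7-byte MBAP header and no checksum, while an RTU ADU is slave address + PDU + CRC-16; the sketch below (serial framing, inter-character timeouts and response handling omitted) converts one to the other.

```python
def crc16_modbus(data: bytes) -> bytes:
    # Standard Modbus CRC-16: init 0xFFFF, reflected polynomial 0xA001.
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc.to_bytes(2, "little")  # CRC is transmitted low byte first

def tcp_to_rtu(tcp_adu: bytes) -> bytes:
    # MBAP header is bytes 0..6: transaction id, protocol id, length, unit id.
    unit_id = tcp_adu[6:7]  # unit identifier maps to the RTU slave address
    pdu = tcp_adu[7:]       # function code + data, identical in both framings
    frame = unit_id + pdu
    return frame + crc16_modbus(frame)

# Read Holding Registers request: unit 1, start address 0, quantity 2.
tcp_frame = bytes.fromhex("000100000006010300000002")
rtu_frame = tcp_to_rtu(tcp_frame)
```

The reverse direction works the same way: verify and strip the CRC, then prepend a rebuilt MBAP header using the transaction id remembered from the request. Over a high-latency acoustic link, the main extra work is relaxing the RTU silent-interval timing rules, which assume a fast, continuous serial line.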
-
Does anyone know if/when/where the 2011 Platform DVDs will be downloadable? Creating Volume License Installations gets very messy if we have to base it on anything other than the Platform DVDs...
-
Most of the software I make has different levels of access. I also need to allow the customer to turn off all file IO and print options. Logins should time out after a while in case someone has forgotten to log off. One of the easiest ways to achieve this is to include a key in the tag of all menu items, buttons, etc. with restrictions, and then either automatically grey out and disable these depending on the access of the current user (you can write a generic VI that scans the GUI for these when a window is opened), and/or filter any events if the key in the tag/name requires elevation.

How easy this is to implement in your code depends on how the GUI is handled. I typically put the GUI handling in separate VIs if it is of a certain size and complexity, and this makes it easy to apply access-rights checks on incoming events.

A user configuration panel is nice to have, unless you get those by other means. I have one that allows me to give users local and/or remote access to the system. The built-in server then checks the user list if it gets a session request from a remote client. The same GUI offers the possibility to turn off user checks locally, in case the server is not running anything else and is sufficiently protected by Windows.
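The tag-key scheme above can be sketched language-neutrally. The tag convention here (`"Name#lvlN"`) and the level names are hypothetical; the idea is just that one generic pass over the GUI decides what to grey out for the current user.

```python
# Hypothetical access levels for the sketch.
ACCESS = {"viewer": 1, "operator": 2, "admin": 3}

def parse_required_level(tag: str) -> int:
    # Assumed convention: a restricted control's tag ends in "#lvlN",
    # e.g. "SaveReport#lvl2". Untagged items are available to everyone.
    if "#lvl" in tag:
        return int(tag.rsplit("#lvl", 1)[1])
    return 1

def apply_access(control_tags, user_level: int) -> dict:
    # Mimics the generic scan: returns tag -> enabled, so the caller can
    # grey out / disable everything mapped to False. The same check can be
    # reused to filter incoming events from restricted controls.
    return {tag: user_level >= parse_required_level(tag) for tag in control_tags}

gui = ["Start#lvl1", "SaveReport#lvl2", "EditUsers#lvl3"]
enabled = apply_access(gui, ACCESS["operator"])
```

Running the same check in the event handler, not just at window-open time, is what closes the gap for controls that are enabled but whose action still requires elevation.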
-
I did not include the scope in the test. To use the variant attribute to get the sort and search, we need a string key, not a cluster. This should be easy enough to fix though: either we could add the scope number to the name, or flatten the cluster. I have not tested that yet, though. Now all you need is a script that updates all the polymorphic VIs :-) There is no hit on the constant-name lookup, as a check for that precedes the search in the list of refnums.
-
As before in 1.1, when the linear search was used to find the register refnum. The linear search is just a bit slower in absolute terms, but with 10000 registers this slowness has extra impact due to the combination of having a large list to search and having to do it so many times... In the test I changed the register name on every call, so it is a worst-case scenario. The very first run takes a bit longer than the quoted times because of the array build, though (that part could perhaps be sped up as well by expanding the list in chunks instead of on every call, but that's a minor issue).
-
Yes, the way polymorphic VIs work in LabVIEW, I can definitely understand why you do not want to update the VIRegister library again. In most cases the current implementation will be just fine. Thanks again for the work. I did run a quick test to see what kind of performance I could get if needed, though. To simplify the change I skipped the scope part and just used the register name as the reference. Updating 10 000 booleans with the same node used to take 1.5 seconds; now it runs in 39 ms.
-
Thanks for the update. One first comment: calling the same node with varying register names works now, but it is very slow. If you use a variant to store and look up the index of the requested queue (instead of using the standard search function), the use of the cache will be much quicker.
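In ordinary programming terms, the suggestion above is to replace a linear search through the list of register names with a hash lookup (which is what a variant-attribute cache amounts to in LabVIEW, since variant attributes are stored sorted and looked up by key). A Python analogue of the two strategies:

```python
# 10 000 register names, as in the test described in the posts.
names = [f"register_{i}" for i in range(10_000)]

def find_linear(name):
    # Linear search: O(n) per lookup; with a different name on every call
    # this dominates the cost of a 10 000-register update.
    return names.index(name)

# Hash-based cache, built once: O(1) average per lookup thereafter.
cache = {name: i for i, name in enumerate(names)}

def find_cached(name):
    return cache[name]

idx_a = find_linear("register_9999")   # scans ~10 000 entries
idx_b = find_cached("register_9999")   # one hash probe
```

Both return the same index; the difference is only in how many comparisons it takes to get there, which is exactly the 1.5 s vs 39 ms gap reported for the 10 000-register test.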
-
You do not need to open the block diagram; just run the Lock.State method on the VI. You can run the Get method first if you want, and if it is locked you can run the Set method with unlock as the new state and provide the password you have... If the password is wrong you will get an error; if it is correct you can lock it again if you want and go to the next VI.
-
Describing architectures can be quite a task. I'm sure the designers behind the CVT, for example, could mention some intended use cases for you, but OK. Let me first give you an overview, and perhaps I can describe something more concrete later if necessary:

I typically make different types of monitoring systems that consist of a server that runs 24/7 on an oil rig somewhere and interfaces with the local control system, and remote clients that connect to that server to monitor the situation or do analysis and reports. The server (on a PC or PAC) is configured to poll data from a number of sensors through different types of communication interfaces; it does a number of calculations on the data (new calculations can be added during run-time), logs the trends and system alarms/events, and offers remote configuration, current values, and historic trends to a client counterpart, or via web pages through a web service. A bit like a SCADA/DCS, but more specialised.

Most of the applications came in their first version in the late 90's. Back then a main part of the server was its Modbus interface to the main control system. All inputs and outputs of the system would have a place in the Modbus registers that were read or written to by one or multiple control systems. The existence of such a central and globally addressable memory space made it easy to use it to share data not just with external units, but internally as well. However, this would mean that everyone needed to know the Modbus address of the data they needed, or be able to use a middle-man that would know that address if given a name... Using Modbus registers for something like this is not exactly ideal, as there are data type restrictions, scaling is involved, etc., but these are obstacles we could live with. I have since moved away from using the Modbus registers this way, as I do not want to couple things like that, but the idea of having a centrally accessible register of data lives on...
If a new calculation or device comes online, it typically needs access to data which may come from any part of the system. If every part of the system had an interface of its own which allowed others to poll data from it when needed (and ideally this would be a generic interface, so that it would be supported by all the others), then that might be a way to go, but they do not (some could be redesigned, but it is not very practical to impose such a requirement on all components). And even so, the consumer should not need to know anything about who produced the measurement, just the tag (and data type) of the measurement. The only thing that can really be guaranteed is that the current data will be available in memory in a "database", and historic values will be available from disk in a log file/database.

So let's say, for example, that the customer asks us to use data from sensor x, two values coming in via OPC, and a value from the Modbus link to system Y, and produce a calculated value from those variables that is accessible by all other parts of the system. We do this by providing a middle-man which can take in a list of tags and output the data we need, and the calculator writes the results back to the same middle-man by providing a tag and the value. This way the calculator (or other consumer/producer) only needs to be configured with a set of tags and a formula. If only current values are required (most of the time this is the case), the middle-man is an equivalent of the Current-Value-Table, which is what I'm proposing that NI develop to be as efficient as it can be, because implementing it with the tools we have available now results in a number of performance issues (as discussed here related to the VIRegister and on the CVT site). I've also described this as a type of global that you can create when needed and access by name, but that description might confuse some.
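The middle-man described above boils down to a very small interface. A minimal Python sketch (class and tag names are made up for illustration): producers write values under a tag, consumers read by tag, and neither side knows about the other.

```python
import threading

class CurrentValueTable:
    """A tiny tag -> current-value store; types are unchecked here."""

    def __init__(self):
        self._lock = threading.Lock()  # producers and consumers run concurrently
        self._values = {}

    def write(self, tag, value):
        with self._lock:
            self._values[tag] = value

    def read(self, tag, default=None):
        with self._lock:
            return self._values.get(tag, default)

cvt = CurrentValueTable()
cvt.write("PT-101.pressure", 12.7)            # e.g. written by the Modbus poller
calculated = cvt.read("PT-101.pressure") * 2  # read by a calculation task
cvt.write("CALC-01.out", calculated)          # result visible to everyone else
```

The whole architectural argument is that this lookup-by-tag sits in the hot path of every producer and consumer, which is why its per-access cost (hash lookup vs linear search, locking overhead) matters so much.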
-
I think Steen sums it all up pretty nicely. My own two cents: if you cannot look up globals programmatically by their name (basically a simple internal database), then the only value added by such a library seems to be that you can add some flow control to the global (error terminals, for example) and can hard-code things partly textually by typing the name instead of using your mouse. The latter two are both nice features which should be part of the standard globals in LV, but they do not address the need for scalability.
-
I see it was only added as a feature request, so I guess that's why it has not been included: http://forums.openg.org/index.php?showtopic=1117 Version 4 does not include the manual solution with a format specifier input on the read section or ini cluster either. However, that would not be a good solution anyway, because then you would need to open the file and check what decimal sign is used in it prior to running the read * cluster function.
-
When I saw this release and the statement that all reported bugs had been fixed (including one described as a fix for custom decimal signs), I thought that the bug that prevents it from correctly interpreting floating point arrays in configuration files created on another machine with a different decimal sign had been solved as well, but that problem is still there, right? I had hoped to be able to ditch my custom version of Read Key (Variant) that automatically detects what the decimal sign is in the file and adjusts the format specifier accordingly.
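The detect-and-adjust workaround mentioned above can be sketched generically. Assumptions in this sketch: elements are semicolon-separated (the actual OpenG array format may differ), and only period and comma are considered as decimal signs, as in the posts above.

```python
def detect_decimal_sign(text: str) -> str:
    # Heuristic: if commas appear but periods do not, assume the file was
    # written with a comma decimal sign. Exotic custom signs are not handled,
    # matching the 95%-of-cases argument made earlier.
    if "," in text and "." not in text:
        return ","
    return "."

def read_float_array(raw: str) -> list:
    # Hypothetical element separator: semicolon.
    sign = detect_decimal_sign(raw)
    items = raw.split(";")
    if sign == ",":
        items = [s.replace(",", ".") for s in items]
    return [float(s) for s in items]

vals = read_float_array("1,5;2,25;-3,0")  # file written with a comma decimal sign
```

Note the heuristic breaks down if the comma is also the element separator; detecting the sign before choosing the format specifier only works when the two roles are distinguishable, which is exactly why doing this inside the library, once, is preferable to every caller reimplementing it.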
-
One thing I noticed is that you cannot use one write node to write to multiple registers without risking that the previous registers are lost. As long as no read is open on the given register, the write is the only place the queue ref is open; therefore the queue is destroyed when the write node runs a release as it switches to another register. The release node does not have the force flag set, but the queue is still destroyed because no other reference to the queue (necessarily) exists. If this is the case, then I think it would be more useful if the destruction had to be done explicitly.

The test scenario that made me see this behaviour (which I did not expect, but perhaps it is intentional?) was that I planned to do a double write to 10 000 registers and then a double read of the same registers, and time the 4 operations. The reads turned out not to return an error, but they would return all but one register with default values (i.e. a DBL would return with the value 0 instead of the value I had previously written to it). Now, I only ran through this quickly, so I may have overlooked something and gotten it all wrong... However, perhaps you can correct me before I come to that conclusion myself :-)
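The lifetime behaviour described above is a reference-counting effect, and it can be reproduced in a few lines of Python (a made-up registry, not the VIRegister implementation): when the writer is the only holder of a named register and releases it to move on to the next name, the stored value dies with the last reference.

```python
class Registry:
    """Named registers kept alive by reference count, like named queues."""

    def __init__(self):
        self._store = {}  # name -> [refcount, value]

    def obtain(self, name):
        entry = self._store.setdefault(name, [0, None])
        entry[0] += 1
        return name

    def write(self, name, value):
        self._store[name][1] = value

    def read(self, name):
        return self._store[name][1]

    def release(self, name):
        entry = self._store[name]
        entry[0] -= 1
        if entry[0] == 0:
            del self._store[name]  # last reference gone: the value is lost

reg = Registry()
reg.obtain("a")
reg.write("a", 42)
reg.release("a")               # no reader holds "a", so it is destroyed
survived = "a" in reg._store   # False: the 42 is gone

reg.obtain("b"); reg.obtain("b")  # this time a reader also holds "b"
reg.write("b", 7)
reg.release("b")               # writer moves on; the reader's ref keeps it alive
kept = reg.read("b")
```

This is why all-but-one register came back with default values in the 10 000-register test: only the last register written still had a live reference when the reads started. Making destruction explicit, as suggested above, decouples the value's lifetime from whoever happens to hold a reference at the moment.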