jdunham
Members
  • Posts: 625
  • Days Won: 6

Everything posted by jdunham

  1. QUOTE (Aristos Queue @ Apr 12 2009, 06:40 PM) Well I must not have been clear enough. I don't have any problem with unrevealed features. My point was that if someone considered that a serious problem, then any modern combination of silicon, OS, Compiler (which created LabVIEW.exe) and LV Runtime or any other app would be off-limits. I don't think you could ever predict all possible CPU execution paths in anything running on Windows.
  2. QUOTE (scls19fr @ Apr 5 2009, 09:52 AM) Well if that's your only problem, a true circular buffer is not the best answer. Why not just store the data in a flat file (one row per measurement), and make a new file every day (please use file names like Weather YYYY-MM-DD so they sort in order) and then delete every file more than a month old. I'm not sure I understood totally, but it sounded like you wanted to store three numbers once a minute. Together with a timestamp, that's about 50 bytes per minute or 26MB per year, give or take a few MB. Why purge the old records at all? Are you recording a lot more data that you didn't mention? Even recording every second, you should have no problem finding a big enough hard disk.
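     A minimal sketch of that daily-file scheme in text form (Python standing in for the block diagram); the folder name, the "Weather" file prefix, and the 30-day retention below are assumptions, not part of the post:

        # Sketch of the daily flat-file logging idea: one file per day, named so
        # an alphabetical sort is a date sort, with old files purged.
        from datetime import datetime, timedelta
        from pathlib import Path

        LOG_DIR = Path("weather_logs")     # assumed folder name
        RETENTION_DAYS = 30                # assumed retention period

        def log_measurement(temp, humidity, pressure):
            LOG_DIR.mkdir(exist_ok=True)
            now = datetime.now()
            path = LOG_DIR / f"Weather {now:%Y-%m-%d}.txt"
            with path.open("a") as f:
                f.write(f"{now.isoformat()}\t{temp}\t{humidity}\t{pressure}\n")

        def purge_old_files():
            cutoff = datetime.now() - timedelta(days=RETENTION_DAYS)
            for f in LOG_DIR.glob("Weather *.txt"):
                # The date is embedded in the file name, so parsing it is enough.
                stamp = datetime.strptime(f.stem.replace("Weather ", ""), "%Y-%m-%d")
                if stamp < cutoff:
                    f.unlink()
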
  3. QUOTE (bsvingen @ Apr 11 2009, 02:08 AM) Well my point still stands. What is the difference between LabVIEW having undocumented features versus running LabVIEW on an OS with undocumented features? In reality, it's just a bunch of ones and zeros running through a von Neumann machine. Since we are well past the days where any person or team can make a useful and affordable computing device with no outside help, there is probably no way to eliminate the possibility of undocumented features. The rest of us understand that the presence or absence of undocumented features is not a useful metric of quality in a computing system, because no one has any way to count the undocumented features in the silicon, the OS, the drivers, LabVIEW, etc., any of which could and maybe do have an impact on reliability. Since you can't even know how many there are, and since plenty of reliability problems (crashes) stem from documented features, it's surely pointless to worry about undocumented features.
  4. QUOTE (shoneill @ Apr 10 2009, 12:26 PM) Oh yeah, thanks!
  5. QUOTE (Mark Yedinak @ Apr 10 2009, 11:15 AM) Of course LabVIEW does sort on more than the first element of the cluster, but yes, it would be nice if you could choose the element order rather than having to rebundle your cluster.
  6. Until a few days ago, the main portal page (http://forums.lavag.org/home.html) would resize properly to fit my current browser window. Now I have to scroll to the right to see the list of latest posts. Did the site get upgraded or otherwise broken? I tried two different browsers (Firefox & Chrome) and it behaves the same.
  7. QUOTE (Ic3Knight @ Apr 10 2009, 07:26 AM) Seems like you could set 0xC0 as the termination character, and then every other message would be empty, and you would ignore them.
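     In text form, the "ignore the empty ones" part looks roughly like this (a Python sketch of the byte handling only; the serial read itself, with 0xC0 configured as the termination character, is left out):

        # Split a received byte stream on the 0xC0 delimiter and drop the
        # empty frames that appear between consecutive delimiters.
        def split_frames(raw: bytes):
            frames = []
            for frame in raw.split(b"\xc0"):
                if frame:            # every other "message" is empty; skip it
                    frames.append(frame)
            return frames

        # Example: two framed messages, 0xC0 on both ends of each.
        data = b"\xc0hello\xc0\xc0world\xc0"
        assert split_frames(data) == [b"hello", b"world"]
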
  8. QUOTE (Justin Goeres @ Apr 10 2009, 05:03 AM) I totally agree with Justin. What if the compiler used by NI to build LabVIEW itself contained an undocumented feature? What if the operating system contained an undocumented feature, or else the OS used to run the compiler that built LabVIEW? The compiler is just a set of bits which generates another set of bits (LabVIEW.exe) which generates another set of bits (your app) which is controlling the action. Just because their creation is separated in time doesn't mean it's not just one system. Testing is the only way to validate behavior, not any proof about a closed set of features.
  9. QUOTE (asbo @ Apr 10 2009, 03:58 AM) Actually we do that all the time. You take the array you want to sort and bundle it with the index terminal inside the FOR loop. Then you sort this new array of clusters, and put it back into a FOR loop and unbundle the new clusters. Then you have a sorted array of indexes which you can use against your other data. It's still easier to make an array of clusters and sort that, but if you already have the arrays, then use this technique. - Jason http://lavag.org/old_files/monthly_04_2009/post-1764-1239376717.png
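     The same bundle/sort/unbundle trick, written out as a Python sketch rather than a diagram (the data values are made up):

        # Pair each value with its original index, sort the pairs, then peel
        # the indexes back off to get a sort order for any parallel arrays.
        values = [30, 10, 20]
        pairs = [(v, i) for i, v in enumerate(values)]   # bundle value + index
        pairs.sort()                                     # 1D sort on the pairs
        sorted_indexes = [i for _, i in pairs]           # unbundle the indexes
        # sorted_indexes == [1, 2, 0]
        other = ["c", "a", "b"]
        reordered = [other[i] for i in sorted_indexes]   # ["a", "b", "c"]
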
  10. QUOTE (ejensen @ Apr 9 2009, 06:02 AM) No, there's not.
  11. QUOTE (postformac @ Apr 9 2009, 10:33 AM) Aaaagh! On the same palette as "Decimal String to Number", you will find "Fract/Exp String to Number". I'm not sure why these names can't be a bit better, but I guess they go way back.
  12. If you made an EXE, it needs the LabVIEW run-time engine to run on another computer. You can bundle the run-time with the installer during the build process, or else leave it out and download it from the NI website for each installation. Even Microsoft languages need a run-time, but those are usually bundled into Windows itself.
  13. QUOTE (Mark Yedinak @ Apr 8 2009, 08:45 AM) We coerce to and from enums all the time, and it works fine -- IF your enum and the integer have the same numeric size (# bits). Of course enums default to 16-bit and integers default to 32-bit, so you almost always have to fix one. You don't necessarily have to change the data type, but you should always use the numeric conversion functions. I don't mind the occasional coercion dot, except when using enums it's a sign that something is going wrong. Why would you cast the enum to a U16 and then still expect to see the strings? If you need the strings, leave it as an enum, or convert it to a variant. The OpenG functions are very useful, but also know that these work great. http://lavag.org/old_files/monthly_04_2009/post-1764-1239215198.png
  14. QUOTE (Scooter_X @ Apr 8 2009, 06:54 AM) No, the sorting is easy! As the others mentioned 1D Sort Array works great on clusters. Be sure to read the help about it, and post back if you have difficulty.
  15. QUOTE (jcz @ Apr 7 2009, 07:41 AM) If you use a separate queue for each object, then you don't need to poll the queue status.

      QUOTE (jcz @ Apr 7 2009, 07:41 AM) I still haven't worked out how to lock a subprogram from receiving any new telegrams (e.g. during initialisation). If you use a separate queue for each object, each subprogram (object) will not process subsequent queue messages while any other message is being processed. This includes the initialization message. By using separate queues, each subprogram can be initializing itself in parallel.

      QUOTE (jcz @ Apr 7 2009, 07:41 AM) And finally at the moment the telegrams are one-directional only - the only response from any subprogram is the acknowledgement that it finished what it was asked to do. So that for me the telegram is only a message related to some sort of action (e.g. grab image from camera), and the result of this action (i.e. image) is then stored in a functional global which is accessible from the main program. If you include a notifier in your message cluster, that gives a mechanism by which the subprogram can return an acknowledgment, along with any data. The caller creates a new unnamed notifier, bundles it into the message, sends the message, and waits on the notifier. The subprogram checks the notifier refnum: if it's valid, it sends the response, and if it's invalid, it just ignores it (not all queued commands need a response).

      QUOTE (jcz @ Apr 7 2009, 07:41 AM) The main idea I had was to create a bunch of main programs that would pretend to be real hardware devices. So then I could simply implement some sort of protocol, define commands and talk to each sub-program as if it were a real device. For instance to grab an image from a camera, I could then add a command to a queue (in the extreme version an SCPI-like command) :camera:image:snap, and only the camera interface would run this command, returning ACK. If you use a separate queue for each object, and a separate enum of commands specific to that object (subprogram), then your subprograms are all totally reusable. In fact I would wrap each bit of queue-sending code into a VI, and then those VIs are the public methods of your subprogram. In case I was too subtle, I think it's a mistake to use the same queue for all of your hardware components. Each object should be its own independent queued state machine which can accept commands and do stuff. Otherwise I think you are on the right track. Jason
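      Here is a rough text-mode sketch of that pattern (Python stands in for the queued state machine; the CameraWorker name, the command strings, and the reply queue playing the role of the notifier are all assumptions):

        # One command queue per "device" subprogram, with an optional
        # per-message reply queue standing in for the unnamed notifier.
        import queue
        import threading

        class CameraWorker:
            def __init__(self):
                self.commands = queue.Queue()        # this object's private queue
                threading.Thread(target=self._run, daemon=True).start()

            def _run(self):
                while True:
                    cmd, payload, reply = self.commands.get()   # blocks; no polling
                    if cmd == "init":
                        result = "initialized"
                    elif cmd == "snap":
                        result = "<image data>"
                    elif cmd == "quit":
                        break
                    else:
                        result = None
                    if reply is not None:            # only answer if a "notifier"
                        reply.put(result)            # was bundled into the message

            # Public "methods" wrapping the queue-sending code, as suggested above.
            def snap(self):
                reply = queue.Queue(maxsize=1)
                self.commands.put(("snap", None, reply))
                return reply.get(timeout=5.0)        # wait for the acknowledgment

            def quit(self):
                self.commands.put(("quit", None, None))  # fire-and-forget command

        cam = CameraWorker()
        print(cam.snap())
        cam.quit()
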
  16. You need the Application Builder, which costs about $1000 extra, or the "Professional Development" version, which has the App Builder bundled in. If you search on "labview application build" in the manual, you should find instructions on how to do the actual build.
  17. People have been asking to change the wire appearance since before you were born. If you are using a built-in type which is not a cluster, then changing the wire appearance would just obfuscate your code. If you are using clusters and want a custom wire, why not just make it an lvclass? (Well, because it's harder to see your values on subVI panels, but hopefully NI is working on that.)
  18. QUOTE (rolfk @ Apr 2 2009, 11:04 PM) I can confirm Mark's observations. I made a very simple LVOOP tree for another purpose, and for my testing, I loaded my folder structure into it from my enormous LabVIEW project with embedded SVN folders. The tree composed itself in about a second, but to test recursing through the tree and to view the result, I wrote code to put each element into a tree control. That part takes about 5 minutes to run. Defer Panel Updates didn't help noticeably. I really think dynamically populating the tree is the way to go; I assume that most file manager shells do something similar. It would be a good XControl.
  19. What about an interface that doesn't try to populate the entire tree in advance. You could trap the item open event, filter it, fix the tree item which is about to open, and then open it programmatically. Most of the time, your users probably won't ever look at most of the thousands of items in your tree, so there's no real need to stuff them into the tree control.
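     A very rough sketch of that lazy-fill idea, with a Python dictionary standing in for the tree control and an assumed "item about to open" hook (none of these names come from LabVIEW):

        # Children are fetched only when an item is about to open, not when
        # the tree is first built.
        from pathlib import Path

        # item tag -> list of child tags already inserted, or None if unexpanded
        tree = {"C:/": None}

        def on_item_about_to_open(tag):
            if tree.get(tag) is not None:
                return                                # already filled in earlier
            p = Path(tag)
            children = [str(c) for c in p.iterdir()] if p.is_dir() else []
            tree[tag] = children                      # stand-in for "Add Item" calls
            for child in children:
                tree.setdefault(child, None)          # mark children as unexpanded
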
  20. This sounds like a relatively hard problem given the minimal tools you have for this in LabVIEW. If it were me, I would leverage the OS file dialog box. Unfortunately that generally lets you browse any folder on the computer. If it's windows, you could map your folder to an unused hard drive letter and then start the filedialog box at the top of that, and then strenuously object if the user picks a file on some other drive.
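     The "strenuously object" check might look something like this in text form (Python sketch; the W: mapped drive and the function name are assumptions):

        # Reject any selection that falls outside the folder mapped to the
        # spare drive letter.
        from pathlib import Path

        ALLOWED_ROOT = Path("W:/")        # assumed mapped drive letter

        def is_allowed(picked_path: str) -> bool:
            try:
                # relative_to() raises ValueError when the pick is outside the root
                Path(picked_path).resolve().relative_to(ALLOWED_ROOT.resolve())
                return True
            except ValueError:
                return False
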
  21. QUOTE (Kubo @ Mar 31 2009, 12:20 PM) If your serial port is selected with a VISA control, then you need to deploy the VISA Runtime in order for the control and the VISA functions to work. You don't need MAX.
  22. Jim's right, you want virtual server hosting, which you can google. Here's one for $99/mo: http://myhosting.com/Virtual-Server-Hosting/ If you want to run LabVIEW for Linux, you have a lot more options http://ask.slashdot.org/article.pl?sid=07/04/05/0214232
  23. QUOTE (flarn2006 @ Mar 29 2009, 05:40 PM) Ah, but the difficulty lies in the software. All hardware in a computer is controlled by hardware drivers, and just like any program, LabVIEW needs to invoke the drivers to get the juice from whatever hardware is connected. Usually the hardware manufacturer creates the driver. The driver will expose an API for other programs to access its features. If the author doesn't think anyone needs the feature, he or she (it's probably a team) won't waste money adding the API function to access it. If the authors think that exposing the feature would leak the company's engineering secrets, then they definitely won't add it to the API. So without knowing too much about what hardware you have or what you're trying to do, it seems really unlikely. The RF signal itself is unlikely to be digitized into a stream of bits like the other analog signals LabVIEW users typically digitize. Usually the hardware demodulates the RF signal into something which can be represented by a digital stream at a lower rate. In other words, the radio-frequency voltage waveform goes through lots of non-digital (not computer accessible) hardware processing before it is reduced to the WiFi bitstream. You can read more about software-defined radio (http://en.wikipedia.org/wiki/Software-defined_radio) if you are interested in the possibilities, but it's pretty unlikely that one of these is hiding inside your WiFi card.
  24. QUOTE (Mark Yedinak @ Mar 24 2009, 07:34 AM) If you get a slow enough machine running, you can see the recompiles as you work. The Run Arrow flashes to a glyph of 1's and 0's while compiling. You can also force recompiling by holding down the ctrl key while pressing run, but it's still too fast on my unexceptional laptop to see the glyph (or maybe they got rid of it). You can also do ctrl-shift-Run to recompile the entire hierarchy, but I still don't see the glyph, even though my mouse turns into an hourglass for a short while.
  25. QUOTE (Antoine Châlons @ Mar 26 2009, 02:05 AM) Rather than computing the week number, you could just start with January 1 of that year, and in a while loop, keep adding 24 hours and checking the week number until it matches.
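     In text form, that brute-force search reads roughly like this (Python sketch; isocalendar() supplies the ISO week number):

        # Start at January 1 and step one day at a time until the ISO week
        # number (and ISO year) match the requested week.
        from datetime import date, timedelta

        def first_day_of_week(year: int, week: int) -> date:
            d = date(year, 1, 1)
            while d.isocalendar()[1] != week or d.isocalendar()[0] != year:
                d += timedelta(days=1)
            return d

        print(first_day_of_week(2009, 14))   # -> 2009-03-30 (a Monday)
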