ChrisClark (Members, 22 posts)

Everything posted by ChrisClark

  1. Ok, I'm lame: my file-write subVI was writing two files in parallel and I forgot to set it to reentrant. FYI, I was able to get at least 80 MBps with one loop, 10 DAQ Reads, and 1 or 2 file writes in the same loop, binary or TDMS. We replaced the NI drive with a WD Scorpio Black, 7200 rpm. Forgot about the benchmark VIs, thanks. cc
  2. Hi, I need to stream 40 MBps to disk (.tdms) on an NI PXIe-8133, quad-core i7 1.73 GHz, from 10 DIO cards. I understand that 40 MBps is roughly the max of what the SATA drive will do; I'm getting at least 30 MBps. Is the 40 MBps limit assumption good? I think I should have a dedicated drive for the data instead of streaming to the controller drive. RAID is surely overkill. Has anyone connected an external drive to an NI controller with the ExpressCard slot? Any other ideas? I think my producer/consumer code is ok. cc
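Since LabVIEW diagrams can't be pasted here, a rough Python sketch can sanity-check the drive-limit assumption before blaming the producer/consumer code. It times a plain sequential write and reports MB/s; the sizes are arbitrary, and file-system caching will inflate small runs, so use a total well above RAM for a truer number:

```python
import os
import tempfile
import time

def measure_write_throughput(total_mb=64, chunk_mb=4):
    """Write total_mb of zeros in chunk_mb chunks and return MB/s.

    Rough sequential-write benchmark only; real streaming rates also
    depend on caching and concurrent DAQ load.
    """
    chunk = b"\x00" * (chunk_mb * 1024 * 1024)
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(total_mb // chunk_mb):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # force data to disk, not just the OS cache
        elapsed = time.perf_counter() - start
        return total_mb / elapsed
    finally:
        os.remove(path)

print(f"{measure_write_throughput():.1f} MB/s sequential write")
```

If this reports well above 40 MBps on the target drive, the bottleneck is more likely in the application than in SATA.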
  3. Hi, You could go through the examples in the examples folder; there is a manual called Using LabVIEW with TestStand. Definitely call tech support multiple times a day. The best way is to take the TestStand I and II classes. You might be able to just purchase the course materials, workbooks, and exercises from NI. TestStand is really a unique (and awesome) product, and it has a higher barrier to getting started than LabVIEW. I think I spent 40 hours just getting a GPIB instrument working when I first started on my own. IMHO it's best to get help, either through customer education or the AEs. cc
  4. Used part of my PayPal balance, woo. I figured out how to bill someone with an Amex card through PayPal, but haven't yet gotten to getting the $ back out. Another 30 or 40 LAVA BBQs should do it. cc
  5. Hi, I can't seem to hold down the Ctrl key and drag to either expand/scoot the diagram or to make a copy by dragging. Does anyone know the Fusion settings to allow this? Bonus: in Boot Camp, Shift-Ctrl-A does not align. (BTW I'm going to try out the LV Mac eval.) Thanks cc
  6. I want to restrict access to a web page that displays live data from a LV web service. Should I use authenticated HTTP requests as described in tutorial 7749, LabVIEW Web Services Security? If so, do I need to use scripting in the web browser client to create the HTTP request? The tutorial just describes a client app, not a browser. Or is there some other way to do this, like just logging in to a service with an ID and a password on the first request? Or would it be easier to achieve this goal by using Remote Panels? Thanks. cc
  7. Hi, I've inherited a .dll that analyzes a 2D array containing 4 waveforms, 32 KB to 80 KB of data. The main LabVIEW VI is continuously streaming the 4 DAQ channels, analyzing, and displaying, everything executing in parallel. At higher sample rates the main VI becomes starved, and processes running in parallel to the .dll slow way down, up to 30 seconds behind. I've attached a graphic of the Task Manager that shows an asymmetry in the cores of a Core 2 Duo while the VI is overloaded by the .dll. I always see this asymmetry in the Task Manager when the main VI has big latencies. I've seen this exact behaviour before in a different VI, different project, when LabVIEW math subVIs were coded serially instead of in parallel. Once the VIs were rewired to run in parallel, everything ran smoothly with balanced cores. My challenge now is to convince someone else to refactor their .dll, and they think the best approach is to optimize the single-threaded .dll code to make it run faster. Do I have all my options listed below? What is my best argument to convince all the stakeholders to go with a solution that balances the analysis CPU load across cores? (And is this really the best direction to take?) Thanks, cc Options: 1. port the .dll to LabVIEW; 2. refactor the .dll to be multithreaded and run on multiple cores in a balanced way; 3. mess around with subVI priority for the subVI containing the offending .dll; 4. refactor the .dll to work faster but still only run in one thread on one core.
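To illustrate option 2 in language-neutral terms, here is a Python sketch of fanning the four-channel analysis out to a worker pool. The `analyze` body is a hypothetical stand-in for the DLL's math; with a real C DLL, the threads only land on separate cores if the DLL releases the GIL (or you use processes instead):

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(waveform):
    """Hypothetical stand-in for the DLL's per-channel analysis (mean power)."""
    return sum(x * x for x in waveform) / len(waveform)

def analyze_all(waveforms):
    # One task per channel: the pool spreads the four independent
    # analyses across cores instead of serializing them on one.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(analyze, waveforms))

waves = [[float(i % 7) for i in range(1000)] for _ in range(4)]
print(analyze_all(waves))
```

The argument to stakeholders is the shape of this code: since the four channels are independent, parallelizing gives a speedup bounded by the number of cores, while single-thread optimization (option 4) fights for cycles on one core that the streaming and display loops also need.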
  8. A while ago I heard that the GPIB primitives now are wrappers around VISA functions anyway. You could check out ni.com/idnet for instrument driver-y development info and style. cc
  9. I've done something like this in the past. This is a sort of pattern where the execution sweeps over the nested setup variables and does the same test over and over: REPEAT -> set up all vars, then run test. If you are doing this all in LabVIEW only, I prefer to separate the execution from the "generate the correct commands in the correct sequence" part. So you already figured that out. You can generate the correct commands and then put them in a cluster array instead of dropping them all on your state machine queue. You then have a big stack, and your state machine can just pop the next step and execute it. The state machine is flat and not nested, so you can pause, resume, jump to the end, etc. Each element on the stack could be a cluster of VI name, VISA resource, and a variant of parameters. Then the state machine would use VI Server to load each VI and execute it, or have a case for each VI. You could use an LV2 functional global or LVOOP class to encapsulate your stack with the following methods: load stack, pop element, reset to start, peek, check for end, etc. So you would run an initialization with the nested loops, or equivalent, to generate the stack and load it into your object. You mentioned this will take up a lot of memory. For several nested variables you could generate a deep stack, but a cluster array of 10,000 elements and 10 MB of memory doesn't seem too big to me. Clearly, though, you may cross a threshold of too much. In that case you could load only the input information ("for every voltage, for every temperature, for every channel, for every data rate, take power measurement") into the object and use a linked list, or some other data structure with a pointer to where you are in the sequence. Then instead of popping the next element from a big stack, you would generate the next element one at a time and never have the whole stack in memory. This takes more thought than the nested-loops initialization taking up lots of memory.
Either way you just have one class or LV2 global that inits and pops the next test, all encapsulated. cc
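The stack-encapsulation idea above, sketched in Python rather than as an LV2 global or LVOOP class (the VI name and VISA resource are placeholders):

```python
from dataclasses import dataclass, field
from itertools import product

@dataclass
class Step:
    vi_name: str   # which measurement VI to run
    resource: str  # VISA resource for the instrument
    params: dict   # setup values for this run

@dataclass
class TestSequence:
    """Encapsulated step stack: load once up front, pop until empty."""
    steps: list = field(default_factory=list)
    index: int = 0

    def load(self, voltages, temps, channels):
        # The nested sweep happens once here; execution stays flat.
        self.steps = [
            Step("Measure Power.vi", "GPIB0::12::INSTR",
                 {"V": v, "T": t, "ch": ch})
            for v, t, ch in product(voltages, temps, channels)
        ]
        self.index = 0

    def pop(self):
        step = self.steps[self.index]
        self.index += 1
        return step

    def peek(self):
        return self.steps[self.index]

    def done(self):
        return self.index >= len(self.steps)

    def reset(self):
        self.index = 0
```

With the stack loaded, the flat state machine is just `while not seq.done(): run(seq.pop())`, and pause/resume/jump fall out of manipulating `index`. The memory-saving variant mentioned above would replace the precomputed list with a generator that yields each `Step` on demand.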
  10. If you need SNMPv3, and the agent you are talking to has authentication and/or encryption turned on, it will take you some weeks of work to write that code. Using Net-SNMP with System Exec.vi seems like your best free option, though I have not tried this personally. SNMP Toolkit for LabVIEW at snmptoolkit.com costs $995.00 but is a native LabVIEW Toolkit with one .dll for MIB compilation and a second .dll for SNMPv3 encryption and has been used successfully for many v3 projects.
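A hedged sketch of the Net-SNMP-through-System Exec route: build the `snmpget` command line, then parse the reply text it prints. The flags assume an authNoPriv/SHA agent, and the host, user, and passphrase are placeholders; adjust `-l`/`-a` (and add `-x`/`-X` for privacy) to match your agent:

```python
import shlex

def snmpget_command(host, oid, user, auth_pass):
    """Build a Net-SNMP v3 snmpget command line, e.g. for System Exec.vi."""
    return (f"snmpget -v3 -l authNoPriv -a SHA -u {user} "
            f"-A {shlex.quote(auth_pass)} {host} {oid}")

def parse_snmp_value(line):
    """Split a reply like 'SNMPv2-MIB::sysName.0 = STRING: lab-pc'
    into (oid, type, value)."""
    oid_part, _, value_part = line.partition(" = ")
    kind, _, value = value_part.partition(": ")
    return oid_part, kind, value.strip()

print(snmpget_command("192.168.1.10", "SNMPv2-MIB::sysName.0",
                      "labuser", "pass1234"))
print(parse_snmp_value("SNMPv2-MIB::sysName.0 = STRING: lab-pc"))
```

In LabVIEW, the first string would go to System Exec.vi's command-line input and the second function's job would be done with string primitives on its standard-output terminal.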
  11. JG, thanks for the link. I !@#$%^ searched but did not find anything. > See here as this was asked before in the OOP forum My app has an asynch part and a deterministic part, so I don't need deterministic classes. I've used the original Endevo LabVIEW 6.1 toolkit before; I might just go with that. Thanks again. cc
  12. I'd like to have a few classes on an embedded app. Is anyone using Endevo GOOP or anything else on LVRT? cc
  13. some "marketshare" numbers http://www.macrumors.com/2008/07/02/mac-br...tinues-to-rise/
  14. jeez, seeing how this thread is going I guess I would look pretty square if I said I forgot to mention we've hooked up USB Zebra and BradySoft printers to the MacBook with LabVIEW for Windoze and they worked as well. I did send the Steve pic to my wife who always hassles me for wearing that exact same outfit, but with Mizunos and Asics. Thanks! cc
  15. Herez what USB things we hooked to a MacBook (not pro) running XP under bootcamp: Atmel AVRISPmkII in-system programmer Ember USBLink programmer some custom zigbee radio on a USB stick, maybe SiLabs (?) and an Ember Insight adapter via ethernet Everything worked first time with LabVIEW using SystemExec and VISA and is deployed overseas. This was the test I have been waiting for - what happens with resources. I figure I can go with Bootcamp in those cases there is some weirdness with Fusion. I have a friend that says ctrl drag copy does not map in Fusion. For development I don't usually connect my laptop to that many resources, VISA and TCP goes a long way. I am now waiting for the next rev of MacBook Pros to come out and after that I will never buy a PC for myself again. I hope the next rev of 15" will have a higher resolution, the 17" seems like a lot to lug. Whoa, I assumed I could run XP (Fusion or Bootcamp) with two monitors, better check that. I'm watching the Macrumors.com buyers guide for next rev news. And it seems like XP should be good for the next couple of years.
  16. I need an installer that will remove an existing .exe installation (one .exe and one .ini file) and then install an exe with a different name in a differently named C:\Program Files\AppX directory, and copy the original .ini file to the new dir. (I'm supposed to do it in LV 7.1, but 8.2 would be ok.) If I build an installer with the same product code as the original and make the user go through the two-pass uninstall/install procedure, everything works except the original .ini file is deleted before I have a chance to "run executable after installation" to copy over the file. In this scenario I need a "run executable BEFORE installation." Do I need to get a professional installer package, or is there a way to do this with the LabVIEW installers? I see there's a package builder from OpenG; not sure how it works.
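One workaround, assuming a small helper launched before the uninstall step (the LabVIEW installer itself has no pre-install hook, so a wrapper batch file or launcher would have to call it): stash the .ini outside the install directory first, then restore it with "run executable after installation". The file names here are hypothetical:

```python
import shutil
from pathlib import Path

def backup_ini(src_ini, backup_path):
    """Run BEFORE uninstall: copy the .ini somewhere the uninstaller
    won't touch (e.g. C:\\Temp)."""
    backup_path = Path(backup_path)
    backup_path.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src_ini, backup_path)

def restore_ini(backup_path, new_dir, name="settings.ini"):
    """Run as 'run executable after installation': copy the saved
    .ini into the newly installed app's directory."""
    new_dir = Path(new_dir)
    new_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(backup_path, new_dir / name)
```

For example, `backup_ini(r"C:\Program Files\OldApp\settings.ini", r"C:\Temp\settings.ini.bak")` before the uninstall, then `restore_ini(r"C:\Temp\settings.ini.bak", r"C:\Program Files\AppX")` afterwards.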
  17. I have a LabVIEW 7.0 TCP server exe that monitors a device and responds to TCP messages from remote clients, one client at a time. There's been one report of a LabVIEW 7.1 TCP client intermittently getting a 63 error from the server after several hours; the TCP client is sending a status request every 4 seconds. I am familiar with the 63 error and its cause. But in this case, what component in the communication chain is causing the error? The way I understand it, the LabVIEW TCP session looks like: LabVIEW TCP (server exe) <-> Windows IP stack <-> network <-> remote Windows IP stack <-> LabVIEW TCP (client). If the client sends in a packet and gets an error 63 response, is that caused by Windows, or the LabVIEW application, or network conditions, or any one? (I think the LabVIEW client code should be designed to trap the 63 error and re-transmit the request packet.)
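The trap-and-retransmit idea in the last sentence, sketched as a plain Python socket client (the framing, timeout, and retry count are assumptions, not part of the original protocol):

```python
import socket

def request_with_retry(host, port, payload, retries=3, timeout=5.0):
    """Send a request; on a dropped/refused connection (the rough
    analogue of LabVIEW error 63), reconnect and re-transmit."""
    last_err = None
    for _ in range(retries):
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.sendall(payload)
                return s.recv(4096)
        except (ConnectionResetError, ConnectionRefusedError, socket.timeout) as e:
            last_err = e  # peer closed or never answered; try a fresh connection
    raise last_err
```

Opening a fresh connection per attempt sidesteps the question of which layer killed the old one: whether Windows, the network, or the server app caused it, the client recovers the same way.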
  18. Intermittently, when I click on the broken arrow to find syntax errors I get an empty list! Sometimes the empty list contains broken-arrow items from other VIs I have open. Holy mother of God, I think it was LabVIEW 2 or 3 or something when I last had to hunt down my mistakes by myself; it is PAINFUL. I have LV 8.0.1; first I mass compiled with Jim Kring's mass compiler, then again with the LV mass compiler. I have a new Latitude 610, and I load very little software other than NI and the Microsoft viruses. If I'm the only one, yow. cc PS LV 8.0.1 still crashing at least once per day; I was at an NI Developer Day last week and it crashed up front on the NI guy.
  19. I think the shared variable would be most useful for messaging if it had the same behaviour as a LabVIEW queue. When will it be updated to return an empty indicator/error (or sleep like a queue) and an overflow error? Has anyone heard anything about these features?
  20. What's the latest for messaging between VIs on different machines? Can I use message queues across machines, or would I have to use TCP or something else? For example, drop a message on a queue on machine 1; that queue is serviced by a TCP loop and transmitted to machine 2, where it is received and then dropped on a queue there. I'd probably have a queued producer/consumer setup on both ends. I know I've read something about this but can't find it now. Related question: I've been following the form of D. Elizalde's "Simple TCP/IP Messaging Protocol" for LVRT messaging; how has the Shared Variable changed this approach for anyone? cc
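The queue -> TCP loop -> queue bridge described above, sketched in Python. The one-JSON-object-per-line framing is an assumption (any length-prefixed or delimited framing works); each side keeps its ordinary local queue and only the bridge threads touch TCP:

```python
import json
import queue
import socket

def bridge_sender(out_q, host, port):
    """Machine 1's TCP loop: service the local queue, send each
    message as one JSON line."""
    with socket.create_connection((host, port)) as s:
        while True:
            msg = out_q.get()
            if msg is None:  # sentinel shuts the bridge down
                break
            s.sendall((json.dumps(msg) + "\n").encode())

def bridge_receiver(in_q, listen_sock):
    """Machine 2's TCP loop: accept one connection, drop each
    received line onto the local queue."""
    conn, _ = listen_sock.accept()
    buf = b""
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            buf += data
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                in_q.put(json.loads(line))
```

Producers and consumers on both machines see only `out_q` and `in_q`, so the rest of the queued producer/consumer code doesn't know the queue crosses the network.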
  21. I'm working on a large LabVIEW toolkit that will need several families of icons, somewhat math-related. Is there a book, link, software package that would be helpful not so much in creating the icons, but designing pictures that relate to the VI functions? I'm aware of the icon section on IDNET.
  22. NI style guidelines recommend the diagram fit on one screen. Mine usually drift off to the left and right a bit, so mine are ~1.5 screens wide, but only one high. Lately I'm on 1400x1050 and 1280x1024 monitors. I personally loathe large diagrams (2-4 screens wide and > 2 tall). For me it is significantly harder to understand someone else's code when it is a large diagram.