Everything posted by bbean

  1. QUOTE (Aristos Queue @ Oct 24 2008, 12:01 AM) whores, sluts, West Virginians. Sorry in advance for offending.
  2. QUOTE (Neville D @ Aug 6 2008, 06:50 PM) Interesting. Thanks for the tip. I wonder what the command line uses. I guess I could check it out with Wireshark. I'm getting a little over 300 ms, but that's much better than before and acceptable for a user interface. QUOTE (Phillip Brooks @ Aug 7 2008, 08:21 AM) I just went through a similar issue with using FTP to post test results to an off-site server. I found that Active = FALSE (or Passive) was indeed much faster from the client for LIST and PUT, but the FTP server admin asked me to use Active for security and port-range issues. I included a boolean in my higher-level code (and ultimately my INI file) to set this based on the conditions of the deployed environment. Active FTP vs. Passive FTP, a Definitive Explanation Here is an example of what I use. I wanted to be able to test the ability to connect to the FTP server independent of LabVIEW, so I created a wrapper for the function "FTP Put Multiple Files and Buffers.vi" that parses an RFC-1738 FTP URL. I can check for access to the server using the exact same string from IE or Firefox. If it doesn't work from those, then I might have a firewall or proxy problem. Download File:post-949-1218111686.vi (LV 7.0) Interesting. I shouldn't have any security issues because it's just a local machine going to the cRIO. I'll check out your example. Thanks for the link.
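The same idea — parse an RFC-1738 FTP URL into connection parameters and expose an Active/Passive boolean — can be sketched in Python with the standard `ftplib` and `urllib.parse` modules. This is an illustration of the approach, not the posted VI; the URL and file names are hypothetical.

```python
from urllib.parse import urlparse
import ftplib

def parse_ftp_url(url):
    """Split an RFC-1738 FTP URL into connection parameters."""
    p = urlparse(url)
    if p.scheme != "ftp":
        raise ValueError("not an FTP URL: %r" % url)
    return {
        "host": p.hostname,
        "port": p.port or 21,
        "user": p.username or "anonymous",
        "password": p.password or "",
        "path": p.path or "/",
    }

def ftp_put(url, local_file, passive=True):
    """Upload one file; the passive flag mirrors the Active/Passive
    boolean described above (read from an INI file in the original)."""
    cfg = parse_ftp_url(url)
    with ftplib.FTP() as ftp:
        ftp.connect(cfg["host"], cfg["port"])
        ftp.login(cfg["user"], cfg["password"])
        ftp.set_pasv(passive)  # False = active mode
        with open(local_file, "rb") as f:
            ftp.storbinary("STOR " + cfg["path"], f)
```

Because the URL string is the same one a browser accepts, you can sanity-check connectivity from IE/Firefox first, exactly as the post suggests.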
  3. Hi all, is there a way to get a quick directory listing from an FTP path? I tried the FTP VIs in the Internet Toolkit and it took around 2-3 seconds to return a directory. That seems a little slow. Using the Windows command line, I can speed it up a bit (see attached LV8.5 example), but that is kind of a cheesy, non-native LabVIEW way to do it. Was wondering if there was a way with DataSocket or something else I'm missing. B
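For comparison, a quick directory listing is a few lines with Python's `ftplib`, using NLST (bare names) rather than LIST. The cRIO address here is hypothetical; some servers return full paths from NLST, so a small helper strips them to bare names.

```python
import ftplib

def ftp_list(host, path="/", user="anonymous", password="", timeout=5.0):
    """Return the entries in an FTP directory using NLST.
    NLST returns bare names, which is cheaper than a full LIST."""
    with ftplib.FTP(host, timeout=timeout) as ftp:
        ftp.login(user, password)
        return ftp.nlst(path)

def basenames(listing):
    """Some servers return full paths from NLST; keep only the names."""
    return [entry.rsplit("/", 1)[-1] for entry in listing]

if __name__ == "__main__":
    # hypothetical cRIO target address
    print(basenames(ftp_list("192.168.0.10")))
```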
  4. QUOTE (Aristos Queue @ Jun 27 2008, 09:30 PM) I don't have the cRIO target yet so I can't test these things out. Is there any benefit to the False case over the True case in the Write operation of the attached attempt at using variant attributes to store arrays?
  5. I found the design pattern for a LV RT Local Machine on the website that looks promising (although it's a b@#$h to get set up because they don't provide all the links / dependencies on one page). Part of the design pattern uses a Current Value Table (CVT) that facilitates storing, tagging and passing different types of data points between the RT target and the host. Essentially you set up all your variables for the application with another NI tool called the Tag Configuration Editor (TCE) and save them to a .tcf file. The RT target and host read this file to set everything up. The attached CVT Double shows how the data is stored for single-point double tags. However, the TCE currently doesn't look like it has support for arrays of each data type. Under the hood in the LabVIEW code the Tag Configuration cluster has a flag for array and an array size integer, but tagged arrays aren't implemented anywhere else. So... I was going to create a current value table for arrays of data. I would like to keep the CVT for arrays generic so that I can store all tags that have an array data type in one shift register (even if they are different-sized arrays). I thought of using a shift register with an array of variants. Each variant in the array would hold an array of, say, doubles, as shown in the attached Method 1. The thing I'm worried about is memory allocation in RT on the "read", since the variant-to-array conversion won't be able to use Replace Array Subset on a preallocated array. So the only thing I can think of is to use Method 3 and build a specific LV2-style shift register VI for EACH tag that has an array data type. That does not seem very flexible, though. I would prefer to use Method 2. Any suggestions?
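The allocation concern can be illustrated in Python: a CVT for array tags where every buffer is allocated once at init (sizes would come from the .tcf file) and both write and read replace elements in place, so nothing is allocated on the hot path. This is a sketch of the concept, not NI's CVT code; tag names and sizes are made up.

```python
import threading

class ArrayCVT:
    """Current Value Table for array tags. Each tag gets a fixed-size
    buffer allocated once at init; writes and reads replace elements
    in place, so no allocation happens after setup."""
    def __init__(self, tags):
        # tags: {"name": size} -- sizes would come from the .tcf file
        self._lock = threading.Lock()
        self._data = {name: [0.0] * size for name, size in tags.items()}

    def write(self, name, values):
        buf = self._data[name]
        if len(values) != len(buf):
            raise ValueError("size mismatch for tag %r" % name)
        with self._lock:
            for i, v in enumerate(values):  # in-place replace
                buf[i] = v

    def read(self, name, out):
        """Copy into a caller-supplied, preallocated buffer."""
        buf = self._data[name]
        with self._lock:
            for i, v in enumerate(buf):
                out[i] = v
        return out
```

The key point matches Method 2's worry: the generic (variant-like) container is fine as long as the read side can land in a preallocated buffer instead of building a fresh array.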
  6. QUOTE (Shaiq Bashir @ Jun 19 2008, 12:32 PM) Why don't you just parse the string to get the two numbers out?
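The suggested parse is a one-liner with a regular expression; a rough sketch (the instrument's reply format is assumed):

```python
import re

def parse_two_numbers(text):
    """Pull the first two numbers out of a reply string."""
    nums = re.findall(r"[-+]?\d+(?:\.\d+)?", text)
    if len(nums) < 2:
        raise ValueError("expected two numbers in %r" % text)
    return float(nums[0]), float(nums[1])
```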
  7. QUOTE (Yen @ Jun 19 2008, 02:38 PM) Thanks for the link. I guess that's what I was wondering: are strings just like arrays in terms of replacing vs. building in LV RT? In other words, can you preallocate them to a fixed length and then use Replace Substring like you said, to avoid a memory allocation every time you update? QUOTE (neB @ Jun 19 2008, 09:03 AM) The built-in web-page support is still a "risk factor" in my apps. My customers that have implemented very simple pages for monitoring have reported success. The very complicated GUIs should not be assumed to work without effort or resources. The methods mentioned by Neville (maybe not shared variables; I don't think they are ready for prime time yet) are approaches that I would turn to if I wanted to be comfortable with the success, since I can control all of those factors. So include some experimenting with your intended GUI early in your development. You don't want to have to re-write your app after you find that opening the web page kills your app. Just trying to help, Ben Thanks for the tips. I'm also looking at this reference app from the NI website for architecture ideas: http://zone.ni.com/devzone/cda/epd/p/id/5336#0requirements
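The preallocate-then-replace idea translates directly: allocate a fixed-length buffer once, then overwrite slices of it in place, the way Replace Substring works on a preallocated LabVIEW string. A minimal Python sketch using `bytearray` (Python `str` is immutable, so the mutable byte buffer stands in for the preallocated string):

```python
class FixedStringBuffer:
    """Fixed-length text buffer: allocated once, updated in place with
    same-length slice assignment, so updates never reallocate."""
    def __init__(self, size):
        self._buf = bytearray(b" " * size)

    def replace_at(self, offset, text):
        data = text.encode("ascii")
        end = offset + len(data)
        if end > len(self._buf):
            raise ValueError("write past end of preallocated buffer")
        self._buf[offset:end] = data  # same length in, same length out

    def value(self):
        return self._buf.decode("ascii")
```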
  8. QUOTE (Neville D @ Jun 18 2008, 06:26 PM) I guess I'm lazy and would like to avoid writing a client LV app if we could use a web browser. Don't know if I'm trying to fit a round peg into a square hole, but wanted to get feedback from others. Wouldn't I have some of the same issues with memory/buffer allocations using a shared variable string? I realize they would help with "keeping it real time" because they have a lower priority. thanks
  9. QUOTE (neB @ Jun 18 2008, 04:04 PM) Thanks Ben. Good to know about your experience with strings. I'm using cRIO. Not sure if that makes a difference one way or another. What has your experience been with Remote Front Panels (RFPs)? I guess I'm trying to avoid making a custom client application to communicate with the cRIO app. I'd like to keep the architecture simple if possible. And since RFPs handle all the dirty work of connection management and display, I thought it would be easiest to use them. But the event log (and data log) presents a memory / keeping-it-real-time issue. I guess another option would be to do a LV client app with an RFP on the front panel for the user to change setpoints / monitor current analog values, plus additional charts, graphs, and strings to display RT-unfriendly data. These indicators would then be populated by reading the log / data files on the cRIO filesystem via FTP.
  10. I have to provide a list of events that have occurred during execution of my run-time application to the user. The user will access the RT app via Remote Front Panels. Each event is described by a timestamp and a string, and events don't occur frequently (about once a minute). For example:
      10:30:04 - TEMP ALARM - ZONE 1 - TEMPC = 30
      10:31:04 - POSITION MOVE - 33 mm
      etc.
      I'm trying to determine the best method to store and display the data in LV RT. Here are the options I've thought of (datatype / storage method / display method):
      1) String / FIFO string buffer of 100 lines in an LV2 global / string indicator on the FP, limited to 100 lines (I know there really isn't a front panel in LV RT).
      2) Cluster of timestamp & string / 100-element queue / array of that datatype on the front panel.
      3) String / event file / string indicator on the FP, limited to 100 lines.
      If I use option 1 (a string indicator), I'm worried about it sucking up CPU resources and memory to limit the size of the string to 100 lines. For option 3, I thought I would read the last 100 lines into the string indicator on the FP every time a new event occurred (it has to be written to the file anyway). Any suggestions?
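Option 1's fixed-depth FIFO can be sketched with a bounded deque: once the buffer is full, each append silently drops the oldest entry, so trimming to 100 lines costs nothing extra. This is an illustration of the pattern, not LabVIEW code; the depth and message format are taken from the post.

```python
from collections import deque
from datetime import datetime

class EventLog:
    """Fixed-depth FIFO of (timestamp, message) pairs. With maxlen set,
    appends past capacity drop the oldest entry, so the log never
    grows beyond its configured depth."""
    def __init__(self, depth=100):
        self._events = deque(maxlen=depth)

    def log(self, message, when=None):
        self._events.append((when or datetime.now(), message))

    def as_text(self):
        """Render the whole buffer for the display indicator."""
        return "\n".join("%s - %s" % (t.strftime("%H:%M:%S"), m)
                         for t, m in self._events)
```

At one event a minute, rebuilding the display string on each event is cheap; the deque keeps the storage side bounded without any per-event trimming logic.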
  11. QUOTE (achates @ Jun 17 2008, 06:16 AM) here's one way to do it.
  12. I know this is probably a long shot, but... has anyone developed a LabVIEW driver for the REX F900 digital controller from RKC Instrument Inc.? I checked on ni.com but all I found was an OPC server for Win XP. http://www.ni.com/opc/opcservers.htm The application I will be developing is targeted for LabVIEW RT.
  13. Is your problem that you do not know what the data-ready message is? You could try running a USB sniffer like http://benoit.papillault.free.fr/usbsnoop/doc.php.en while the vendor software is running to find out what the message is.
  14. QUOTE(Aristos Queue @ Dec 12 2007, 12:51 PM) Thanks for the description. What's the point of using the waveform datatype (when you have multiple channels) if, every time you need to look at the data for an individual channel, a copy of the data is made (even if you are not altering it)? The LabVIEW compiler isn't smart enough to deal with this issue? :thumbdown: Does anyone have tricks for using waveform arrays, or should I go and convert all my DAQ code to 2D arrays?
  15. Why does indexing an array of waveforms create a buffer allocation, but indexing a 2d array does not?
  16. It looks like you are missing the VISA libraries: http://joule.ni.com/nidu/cds/view/p/id/831/lang/en Download and install them on your machine or use your CDs
  17. QUOTE(yogi reddy @ Nov 20 2007, 08:05 AM) Stop cross-posting. http://forums.lavag.org/large-data-transfe...8861#entry38861
  18. QUOTE(yogi reddy @ Nov 19 2007, 12:22 PM) Try using this framework Simple TCP/IP Messaging (STM) Component http://zone.ni.com/devzone/cda/epd/p/id/2739 Command-based Communication Using Simple TCP/IP Messaging http://zone.ni.com/devzone/cda/tut/p/id/3098
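The core idea behind STM is framing: each message on the TCP stream carries a length prefix and a name, so the receiver knows where one message ends and which handler gets it. A rough Python sketch of length-prefixed framing follows; the exact STM wire format is an assumption here (4-byte big-endian lengths), so treat this as the concept rather than a drop-in STM peer.

```python
import struct

def pack_message(name, payload):
    """Frame: [4-byte body length][4-byte name length][name][payload]."""
    body = struct.pack(">I", len(name)) + name.encode() + payload
    return struct.pack(">I", len(body)) + body

def unpack_message(frame):
    """Inverse of pack_message: recover (name, payload) from a frame."""
    body_len = struct.unpack(">I", frame[:4])[0]
    body = frame[4:4 + body_len]
    name_len = struct.unpack(">I", body[:4])[0]
    name = body[4:4 + name_len].decode()
    payload = body[4 + name_len:]
    return name, payload
```

On a real socket you would read exactly 4 bytes for the length, then exactly that many bytes for the body, which is what lets command-based communication multiplex many named messages over one connection.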
  19. QUOTE(jpc @ Nov 16 2007, 05:53 PM) Can you post the code where the DB is called?
  20. Is this happening on the server side? Are there any LabVIEW applications on the server? Many database servers, MS SQL Server for example, will hog memory as they go. The memory won't be released by the DB until another application requests it. I believe they do this so they can store queries and frequently executed statements in memory for faster execution. Check to see if the Oracle DB has some way of limiting the maximum amount of memory it uses. Regards
  21. I've found this ActiveX grid to be quite useful where the built-in MCL control in LabVIEW falls short, if you can stomach using ActiveX: http://www.devexpress.com/Downloads/ActiveX/XQuantumGrid/
  22. QUOTE(Karl Rony @ Oct 25 2007, 10:05 PM) You are right. I was just trying to show him a typical insert with SQL Server that I could throw together quickly. I downloaded MySQL and the ODBC driver and ran my program again, and it is definitely slower with MySQL. My hunch is that the ODBC driver actually uses HTTP instead of shared memory, but that is just a guess. Maybe tweaking the connection string / MySQL setup / stored procedures would help. The weird thing is that my CPU usage hovered around 2%, whereas with SQL Server it hovered around 40%. Here are the results: median insert: 9 ms, min 7 ms, max 250 ms.
  23. Here you go. On my computer (dual-core 2.2 GHz, 2 GB RAM, SQL Server 2005) I can insert 10,000 records at an average rate of about one per millisecond, with max CPU usage on both cores around 40%.
  24. QUOTE(tmot @ Oct 23 2007, 08:30 AM) Can you please post a connection string for the "mattias" database? Have you tried a stored procedure yet? If you have anti-virus software running, you may want to turn that off and see if things speed up any. Brian
  25. One other suggestion: since your app and the DB are on the same computer, use a "shared memory" protocol instead of TCP/IP if you haven't already. Also, as mentioned above: 1) separate the DAQ and DB loops; 2) initialize connections to the DB outside the loop; 3) use a stored procedure for inserts. Can you post a snapshot of the code, and a SQL script of the CREATE DATABASE commands?
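Points 2 and 3 — connection set up once, inserts parameterized and batched — can be sketched with Python's `sqlite3` standing in for the real database (the table and column names are made up for the example):

```python
import sqlite3
import time

# Connection and table created ONCE, outside the acquisition loop.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (t REAL, value REAL)")

def insert_batch(conn, rows):
    """Parameterized insert, one transaction for the whole batch,
    so the DB isn't committing per-row inside the DAQ loop."""
    with conn:  # commits on success, rolls back on error
        conn.executemany(
            "INSERT INTO samples (t, value) VALUES (?, ?)", rows)

# Simulated DAQ loop output: 10,000 timestamped samples.
rows = [(time.time() + i, float(i)) for i in range(10000)]
insert_batch(conn, rows)
```

The same shape applies with a stored procedure on SQL Server or MySQL: the expensive parts (connect, parse/plan) happen once, and the loop only binds parameters.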