rharmon@sandia.gov

Daemon VI?


I need to set up what I think will be a non-reentrant daemon VI. I'm writing a top-level VI that will need to move hundreds of files from 10 computers to a single computer for processing. I want to move these files in the background without affecting the top-level VI until the moves are completed.

As I envision it, I will launch 10 VIs from the top level, each assigned to move files from one computer to the top-level computer. I'm hitting a brick wall on how to build this.

Does this approach make sense?

Is there an example somewhere that explains how to set this up?

If you wanted to accomplish this what approach would you take?

Thanks for your thoughts in advance...


Sounds like you're going to want to launch the background VIs with Start Asynchronous Call. You might not want to launch 10, though. Maybe just loop through the 10 computers in a for loop; this is especially true if your top-level computer is using a magnetic disk drive. You could try to switch the for loop to a parallel for loop later.

I'm not sure what your file goals are. It's sort of an unusual use case, and maybe setting up some sort of server on the top-level computer, possibly a database, might be the way to go. You're probably going to want to go through the folders recursively unless it's a flat folder. Maybe you can look at timestamps to see what's been modified.
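Since LabVIEW block diagrams can't be pasted as text, here is a minimal Python sketch of the same pattern: one background worker per source folder, with the caller free to do other work while the moves run. The folder layout and `move_folder`/`transfer_all` names are made up for illustration, not part of any LabVIEW API.

```python
import concurrent.futures
import shutil
from pathlib import Path

def move_folder(src: Path, dest_root: Path) -> int:
    """Move every file in src into dest_root/src.name; return the count."""
    dest = dest_root / src.name
    dest.mkdir(parents=True, exist_ok=True)
    moved = 0
    for f in src.iterdir():
        if f.is_file():
            shutil.move(str(f), str(dest / f.name))
            moved += 1
    return moved

def transfer_all(sources: list, dest_root: Path, workers: int = 10) -> int:
    """Launch one background worker per source folder (the analogue of
    launching one daemon VI per computer) and collect the totals."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(move_folder, s, dest_root) for s in sources]
        # The 'top level' could do other work here instead of blocking.
        return sum(f.result() for f in futures)
```

The `workers` knob plays the role of the parallel for loop's instance count: set it to 1 for a sequential transfer (kinder to a magnetic disk) and raise it later to experiment.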


Thanks, infinitenothing, for the input... Here is some more info:

1. Files are copied to a RAM drive on the top-level computer. I know that SSDs are about as good today, but when I first set it up, the RAM drive was the best option.

2. I currently use 10 reentrant VIs running in parallel, and the files get transferred as quickly as the top-level computer can store them. I haven't run into trouble with this option yet, but I want to do other things in the top-level VI while the transfer is going on.

3. The files are contained in a single folder on each computer, and the entire folder is moved to the top-level computer.

4. A database is a possibility; I would still need to look into it.

I'm still trying to determine what the best options are today, because I'm rewriting the main top-level VI.

 

Thanks again.

 


What if the client computers pushed the files onto the top-level computer rather than having the top level pull? That's a similar concept to a database, but not quite as well organized.

Like I said, it's really easy to move from a sequential for loop to a parallel for loop, so I'd experiment with both. The async call is what lets the "other things" happen.
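The fire-off-now, collect-later shape of Start Asynchronous Call can be sketched in Python like this (the `slow_transfer` function and the "PC-01" name are stand-ins, not anything from the original code):

```python
import concurrent.futures
import time

def slow_transfer(name: str) -> str:
    """Stand-in for a long file move from one remote computer."""
    time.sleep(0.2)
    return f"{name}: done"

pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
future = pool.submit(slow_transfer, "PC-01")  # analogous to Start Asynchronous Call

foreground_work = 0
while not future.done():      # the top-level loop stays responsive
    foreground_work += 1      # e.g. handle UI events here
    time.sleep(0.01)

result = future.result()      # analogous to collecting the call afterwards
pool.shutdown()
```

The key point is the same in both environments: the launch returns immediately, the caller keeps looping, and the result is only retrieved once the background work reports it is finished.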

 


First, as mentioned, I'd use an async call per target. You can always parallelize the calls internally later; at the least, launching per target means you don't get hung up if one target is slow. If you search Example Finder for "async", I think there is a good example that either is, or used to be, called "Benchmarking Asynchronous Calls". It talks to a set of web servers over HTTP and demonstrates the performance advantages and disadvantages of each approach. Keep in mind while looking at the code that fetching google.com is a different profile than fetching one hundred 100 MB files, but the example is still good.

Then you need to decide on your transfer mechanism. I think you can mount network share drives and have Windows copy files for you, but I'm not 100% sure, and I've no idea about performance. The other good APIs built into LabVIEW are FTP, HTTP, and WebDAV. For HTTP I've used Apache, and for FTP I've used FileZilla. I've never set up a WebDAV server, but it's basically HTTP and appears to be built into Windows. Each protocol has its ups and downs...

HTTP(S): High overhead, though probably not a big deal with a dedicated server, a closed network, and large files. The biggest issue is the sequential nature (request-response), meaning you need to create multiple connections and request in parallel, just like your web browser does. This is where the parallel for loop can come in handy. Note that each handle in the API has a mutex of some kind, so you have to create N handles rather than making N parallel requests on one handle. Another issue, which you may or may not hit, is that HTTP uses DLL calls, and each DLL call blocks the thread it's running in. If you have too many outstanding requests, suddenly your application locks up until one of them completes.

FTP: The functions are old and you probably don't want to look inside, but they work and are pretty quick. Has similar issues to HTTP in that there's a good amount of overhead for every file. The API, if I remember correctly, has a function called Get Multiple Files, which literally just fetches the files one by one in sequence... so you'll have to parallelize this for good performance too. It just uses TCP calls under the hood, so you don't have the DLL lock-up issue.

WebDAV: The base functions use DLL calls, but you can avoid that issue with the async API, where you just register for events on a set of requests. When a file transfer completes, the event fires and you handle it. This is pretty fast, and you don't have to do much besides tell it what to download. I'm not sure how overall performance compares to FTP, but the individual low-level calls are about on par; slightly slower than FTP in my tests.
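The "N handles, N parallel requests" idea for HTTP can be sketched in Python. This is a self-contained toy, not LabVIEW's HTTP API: a throwaway local HTTP server stands in for the remote computers, the file names and sizes are invented, and each parallel fetch opens its own connection, which is the analogue of giving each parallel loop instance its own HTTP handle.

```python
import concurrent.futures
import functools
import http.server
import tempfile
import threading
import urllib.request
from pathlib import Path

# Stand-in for a remote computer: serve a temp folder over HTTP.
src = Path(tempfile.mkdtemp())
for i in range(5):
    (src / f"data_{i}.bin").write_bytes(b"x" * 1024)

handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                            directory=str(src))
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def fetch(name: str) -> bytes:
    # Each call opens its own connection -- one "handle" per parallel
    # instance, never N requests funneled through one shared handle.
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/{name}") as resp:
        return resp.read()

names = [f"data_{i}.bin" for i in range(5)]
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    payloads = list(pool.map(fetch, names))  # parallel GETs

server.shutdown()
```

Because each request-response round trip is sequential on its own connection, total throughput scales with the number of concurrent connections until the disk or network saturates, which is exactly why the parallel for loop (with per-instance handles) matters in the LabVIEW version.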

