Everything posted by ShaunR

  1. A surprising result, although I am suspicious of you equating CPU utilisation with throughput performance.
  2. Yes.
  3. I was not aware of this function either (still using 2009 whenever I can). How big are your images? This is how I would approach it. It is the way I always have with high-speed acquisition, and I have never found a better way even with all the new-fangled stuff. The hardware gets faster, but the software gets slower.

Once you have grabbed the data, immediately delete the DVR. The output of the Delete DVR primitive will give you the data, and the sub-process will be able to go on and acquire the next frame without waiting. The data from the Delete DVR you copy/wire into a Global Variable (ooooh, shock horror) which is your application buffer that your file and UI processes can just read when they need to. This is the old-fashioned "Global Variable Data Pool" and is the most efficient method (in LabVIEW) of sharing data between multiple processes, and it is perfectly safe from race conditions AS LONG AS THERE IS ONLY ONE WRITER. You may need a small message (Acquired - I would suggest the error cluster as the contents) just to tell anyone that wants to know that new data has arrived (mainly for your file process; your UI can just poll the Global every N ms).

The upshot is that you have only one deterministic data copy that affects the acquisition (time to use those Preferred Execution Systems ;) ) and THE most efficient method of sharing the data (bar none). But - and this is a BIG but - your TDMS writing has to be faster than your acquisition, otherwise you will lose frames in the file. You will never run out of memory or get performance degradation because of buffers filling up, though, and you can mitigate data loss a bit by again buffering the data in a queue (on the TDMS write side, not the acquisition) if you know the consumer will eventually catch up, or if you want to save bigger chunks than are being acquired.

However, if the real issue is that your producer is faster than your consumer, that is always a losing hand, and if it's a choice between memory meltdown or losing frames, the latter wins every time unless you are prepared to throw hardware at it. I've used the above technique to stream data using TDMS at over 400 MB/s on a PXI rack without losses (I didn't get to use the latest PXI chassis at the time, which could theoretically do more than 700 MB/s). The main software bottleneck was event message flooding (next was memory throughput, but you have no control over that), and the only way you can mitigate it is by increasing the amount you acquire in one go (reducing the message rate), which looks much, much easier with this function.
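The single-writer "Global Variable Data Pool" above is a LabVIEW pattern, but the shape of it can be sketched in Python for illustration. All names here are mine, and the lock is a Python necessity - in LabVIEW a global write is already atomic, which is why the single-writer rule alone makes it race-free:

```python
import queue
import threading

# Illustrative sketch (not the LabVIEW implementation): one acquisition
# process is the ONLY writer to a shared slot; the UI and file processes
# are readers. A small queue plays the part of the "Acquired" message.

class DataPool:
    def __init__(self):
        self._frame = None             # the "global variable" slot
        self._lock = threading.Lock()  # guards the reference swap (Python-only need)
        self.acquired = queue.Queue()  # "Acquired" notification for the file process

    def publish(self, frame):
        """Called ONLY by the acquisition process (the single writer)."""
        with self._lock:
            self._frame = frame
        self.acquired.put(None)        # tell interested readers new data exists

    def read(self):
        """UI/file readers just grab the latest frame whenever they poll."""
        with self._lock:
            return self._frame

pool = DataPool()
pool.publish([1, 2, 3])   # acquisition side: one deterministic copy
latest = pool.read()      # UI side: poll every N ms
```

The file process would block on `pool.acquired.get()` and then call `read()`, while the UI simply polls `read()` on a timer.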
  4. Description or fact?
  5. I wouldn't. It's LabVIEW's weakest domain and there are far better tools out there that they will already be familiar with, which LabVIEW can't touch. Worse than that, though, is that Web Services only run on Windows, and I will guarantee 99% of their work is on Linux web servers running Apache and, to a lesser extent, Nginx. There is not a lot you can answer to "how do I integrate LabVIEW with our Apache server?". However, you can gloss over all that - just don't "demonstrate" web servers/apps! Instead you can show them one of LabVIEW's strengths (such as Vision) and say "we can also make it available on the web" - TADA! (without going into how, too much).
  6. Have you tried updating only the visible areas? (I expect 200 columns is not all presentable on screen at one time.) MJE demonstrated a Virtual MCL that mitigates the performance impact of cell formatting on large data sets, and the Table control has a similar API. I also understand that the performance of these controls was vastly improved in LV2013.
  7. Maintainable code is not really quantifiable - it is a subjective assessment. All code is maintainable; it's just a question of how much effort it requires. Even a re-factor (euphemism for a re-write) is a form of maintenance. Good coding practice and style can go a long way towards making the life of a programmer easier but, the crux of the matter is, it can look as pretty as you like and you could have filled out every description and hint, but if it doesn't work, you won't get paid and you won't be asked to come back. Therefore it cannot form the basis of a performance or coding metric for the purpose of quotation or deliverable. It's a bit like "future-proofing" in that sense. Additionally, only programmers care about neatness, because they are the ones that will be required to maintain it. A project manager just wants it to work, and it's your (my) job to make sure it does even if the wires are out by a pixel or two. So I like the grading scheme here because it will be a good indicator that they can write working code under time pressure (like the day before delivery). programmer [proh-gram-er]: noun. 1. A person who converts caffeine into computer programs.
  8. Good news: Rolf's HTTP library does support proxies (without authentication)! The parser doesn't include the Host field in the header, though, so you should add that (a trivial change). Servers have tightened up their security in recent years and the Host field is mandatory on most servers nowadays.
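For illustration, here is what the hand-built request would need to look like once the Host field is added. This is a generic HTTP/1.1 sketch in Python, not Rolf's parser; the helper name and host are mine:

```python
# Illustrative: HTTP/1.1 makes the Host header mandatory, so a
# hand-rolled GET request must carry it or most servers answer 400.

def build_get(host, path="/"):
    """Build a minimal HTTP/1.1 GET request string (hypothetical helper)."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"        # the field the parser currently omits
        "Connection: close\r\n"
        "\r\n"                     # blank line terminates the header block
    )

request = build_get("lavag.org")
```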
  9. No. What you are describing is merely prepending a sub-domain name. Whilst people sometimes put a proxy on a sub-domain, it's not a requirement. Besides, you may need to authenticate with the proxy. Under normal conditions, the GET request URI is usually a relative path (it doesn't have to be, but that is usually the case) and the Host field of the HTTP header contains the domain of the target page. It is slightly different with a proxy: the GET URI is the full URI of the target page (including the http:// and the domain name of the target page), the Host field still names the target page's domain, and you connect to the proxy server rather than the server that has the page. A proxy may also require authentication, and those parameters are also sent in HTTP header fields (see section 2). I don't believe any of the LabVIEW web-oriented VIs support forwarding proxies (the sort I think you are describing) out of the box. I may be wrong and they added them in later versions, but I haven't come across any. You might try Rolf's HTTP VIs - I can't remember off-hand if they support proxies, and the OpenG site is down at the moment so I can't check. Apart from that, I expect you will have to hand-craft these headers and handle the responses the old-fashioned way (and you will be stuffed if it is SSL/TLS).
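The origin-form versus absolute-form difference described above can be made concrete with a small sketch. This is generic HTTP, not any LabVIEW VI; the function names, hosts and credentials are all illustrative:

```python
import base64

# Illustrative: a direct request uses a relative (origin-form) target,
# while a request sent to a forwarding proxy uses the absolute URI of
# the target page. Basic proxy credentials go in Proxy-Authorization.

def direct_get(host, path="/"):
    return f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n"

def proxied_get(target_host, path="/", user=None, password=None):
    # Absolute URI in the request line; the TCP connection goes to the proxy.
    req = (
        f"GET http://{target_host}{path} HTTP/1.1\r\n"
        f"Host: {target_host}\r\n"
    )
    if user is not None:
        token = base64.b64encode(f"{user}:{password}".encode()).decode()
        req += f"Proxy-Authorization: Basic {token}\r\n"
    return req + "\r\n"

plain = proxied_get("example.com")
authed = proxied_get("example.com", user="me", password="secret")
```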
  10. All of them want working code, on-time and on budget. Simples.
  11. Depends on your approach, I suppose. Or, more specifically, how much time you have. I haven't met a customer yet that would say yes to me billing for more time to make the diagrams look better or fill in all the VI descriptions/labels. I tend to throw stuff at the diagram, get it working, then make it look pretty. In fact, when faced with a particularly gnarly problem, I will go around and fill in descriptions and labels and make icons as a distraction. It fits with iterative development better, as you can make it look better with each iteration, as long as it works. Often, as more features are added to diagrams, they need re-prettifying as the feature list increases, so making it pretty off the bat is a bit pointless. But here we are talking about an exam which is designed to be time-stressed, and given that the purpose is to certify coding competence, not the examinee's graphic-design skills or obsessive/compulsive tendencies, I think this emphasis of marking is more fitting. If you have time at the end of the exam to make it easier to read for the examiners, great, but if it's that bad they can press THE button. Working code is a better yardstick for coding competence and debugging capability in a time-constrained environment, IMO (at least for a CLED), and that's what employers want. The Architect cert is probably where how pretty it looks is more relevant (more a test of communication than the CLED), once you've proved you can write the code first. But what do I know! I've no certifications at all.
  12. Depends what you mean by "easy". Write a Pascal script to check the registry for the Run-Time and, if it doesn't exist, invoke the run-time installer. If you have the download plugin, you can even get it to fetch the installer from NI, then run it.
  13. A much better grading scheme IMO. Employers pay you for functioning code, not whether you fill out labels or descriptions. It's only really a necessity for toolkit developers.
  14. ...and you stopped showing the top-10 like count on the main page the day after I got to #2.
  15. The issue isn't so much accessing SQLite on network drives; it's concurrency. I can quite happily read and write to my NAS boxes, but woe betide you if you try to share the DB amongst clients. Just for giggles, I ran the Speed example and pointed it at my NAS box (over a Wi-Fi connection): it achieved about 0.5 seconds to insert and read 10,000 records.
  16. There is the possibility that data won't be written to the DB when using PRAGMA synchronous=OFF on a network share, but if that is acceptable then you should also set PRAGMA journal_mode=PERSIST. The default is DELETE, which severely hinders performance on network drives and increases the possibility of collisions and locking errors.
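The two PRAGMAs above can be applied from any SQLite binding; here is a sketch using Python's built-in sqlite3 module (the temporary file stands in for a database on a network share):

```python
import os
import sqlite3
import tempfile

# Illustrative: apply the PRAGMAs discussed above to a connection.
# synchronous=OFF trades durability for speed; journal_mode=PERSIST
# avoids deleting and recreating the journal file on every
# transaction, which is the expensive part on a network drive.

db_path = os.path.join(tempfile.mkdtemp(), "data.db")  # stand-in for a network share
conn = sqlite3.connect(db_path)

conn.execute("PRAGMA synchronous=OFF")
# PRAGMA journal_mode returns the mode actually in effect:
mode = conn.execute("PRAGMA journal_mode=PERSIST").fetchone()[0]
print(mode)  # -> persist
```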
  17. Then why not have the server write to the CSV file (or whatever) and just import it into a local database for use? It'll be a one-off performance hit to retrieve, then full SQLite performance whilst in use.
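The one-hit import could look something like this sketch using Python's csv and sqlite3 modules; the table name and columns are made up for illustration:

```python
import csv
import io
import sqlite3

# Illustrative: pull a server-exported CSV into a local SQLite database
# in one hit, then query at full local speed afterwards.

def import_csv(conn, csv_text, table="readings"):
    """Create a table from the CSV header row and bulk-insert the data rows."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    conn.execute(f"CREATE TABLE IF NOT EXISTS {table} ({', '.join(header)})")
    marks = ", ".join("?" * len(header))
    with conn:  # one transaction for the whole import = the "one hit"
        conn.executemany(f"INSERT INTO {table} VALUES ({marks})", data)

conn = sqlite3.connect(":memory:")     # a local file in practice
import_csv(conn, "t,value\n1,10\n2,20\n")
count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # -> 2
```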
  18. This is what the SQLite peeps have to say. They are really talking about a single client, however, and you will find it impossible (I think - never tried) to create a shared locking mechanism between two machines. You might as well use a proper client/server DB. There is a compile-time option where you can choose the locking mechanism (file-based may work better), but you would have to compile it yourself.
  19. The big benefit for developers that I see here is that it may highlight some of the source-control issues we have suffered for many years. It may prompt changes in the core so that we can use these powerful tools (like GitHub) much more effectively. I dare say that once you have 20 branches all pushing their changes, the nightmare of merging, cross-linking and phantom recompiles might be laid to rest once and for all. That alone should be enough of an incentive to participate, if only because pushing changes will cause such havoc for the people merging the master that they will have to do something before they tear [the rest of] their hair out.
  20. I have been bitten hard by all the LabVIEW installers (most recently by JKI's). Whilst I love Inno Setup and have used it for many years, I need a cross-platform one, and those are few and far between (how I wish Inno were cross-platform!). Since I'm dissatisfied with all the "preferred" options, I'm checking out InstallJammer to see if it is a better solution for my needs.
  21. I have to echo Rolf here. I have never found any source-control software to be adequate for LabVIEW; I just treat them all as backup-and-restore systems. When working with multiple developers, I usually break the project into multiple sub-projects that each developer can own, so to speak - just a less granular version of what Rolf is describing. My advice is to avoid merging in LabVIEW altogether. You'll live longer.
  22. I think you will need to scale back your expectations as to what you will be able to achieve for the software aspect. How much time do you have left? I'm guessing that you have spent most of your PhD time creating the hardware and now it's "just a little bit of software to make it work". Remote communications (especially for hardware) is not a trivial subject, but you can get an idea of how you might tackle it (and the complexity) from XML-RPC Server for LabVIEW or Dispatcher in the Code Repository. There are also examples shipped with LabVIEW to show the basics of client/server communications. That is before we get to actually controlling the hardware. Personally, if you have experience of text-based programming (did you write the firmware?) and are on Linux (which I suspect from the phrasing of the question), then I would use Python.
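Since Python was just suggested, here is the barest bones of the client/server pattern those shipped examples demonstrate: a one-shot echo server and a client exchanging a command string. Everything here (ports, the SCPI-style command) is illustrative, and a real hardware controller would of course parse the command and act on it:

```python
import socket
import threading

# Illustrative: minimal TCP client/server pair - the skeleton of
# remote hardware control framed as request/response messages.

def start_server():
    """Start a one-shot echo server on an OS-assigned port; return the port."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))     # port 0 = let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def run():
        conn, _ = srv.accept()
        conn.sendall(conn.recv(1024))  # a real server would parse and act here
        conn.close()
        srv.close()

    threading.Thread(target=run, daemon=True).start()
    return port

def send_command(port, cmd):
    """Send one command and return the server's reply."""
    with socket.create_connection(("127.0.0.1", port)) as cli:
        cli.sendall(cmd.encode())
        return cli.recv(1024).decode()

port = start_server()
reply = send_command(port, "MEAS:VOLT?")  # hypothetical SCPI-style command
```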
  23. Well, you won't be able to use Google Maps without a lot more experience with web applications and JavaScript. It is the easy way, but you will have difficulty updating the map online, programmatically, from your application. Web Rainfall.vi You will have to download and install Google Earth instead of using Google Maps, and use the library previously linked (if it works) to add your KML to the Google Earth interface.