Everything posted by ShaunR

  1. Not true. No assumptions, just pure string messages and an example of how it can work based on your own description (DB, XML, queues, events, classes - it doesn't matter, because it is decoupled). And I've already told you how to decouple the classes - serialise to strings - but you don't like that answer. When you insist on only using a hammer, everything looks like a nail, so don't jump down my throat when a screw bends as you try to hammer it into concrete and I explain why, along with easier alternatives using a screwdriver.
  2. He'd be having kittens with this then. Call_Back.zip
  3. Hmmm. So let me get this right. You have taken an implementation-agnostic storage mechanism (a DB), queried it, and stuffed it into an implementation- and language-specific set of classes, which you then wish to transmit to another set of language- and implementation-specific classes that may or may not be the same, and now want to break the enforced cohesion caused by using the aforesaid classes? Why not just send the SQL queries instead of faffing around with LV classes (a bit like this) and leave the client to figure out how it wants to display it (basically just as string indicators - use the DB to strip type)? If you have a DB in the mix, then you can always craft SQL queries for the data and prepend instructions for functional operations. If you want to wrap all that formatting into message classes, fine. Like I said, encoding is tedious, but for decoding, the fact that you cannot dynamically instantiate a class in LabVIEW, plus strict typing, will ensure you cannot write a generic decoder (which is what AQ's discombobulator thingy attempts to address). Using message classes for this type of thing is exactly what I was saying about straitjacketing, and not using a class messaging system will be quicker, smaller, and easier to maintain and extend. In fact, in this instance, as you are using a DB, you will find that you can sometimes extend it without writing any LV code at all - just by changing and expanding the string messages via a file or via the client, as the server acts just as a message parser.
  4. Use strings (the ultimate variant) and you won't have any problems crossing either the network boundary or language implementations. Don't straitjacket yourself with LabVIEW classes, which are tedious for encoding and completely rubbish for decoding.
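The string-message idea from the two posts above can be sketched outside LabVIEW too. The snippet below is a minimal illustration in Python, not anything from the original discussion: the "op|json" layout, the command name, and the field names are all invented for the example. The point is only that both ends agree on a plain-string format, so any transport (DB, queue, TCP) and any language can produce or consume it.

```python
import json

def encode_message(op, payload):
    """Serialise a command and its payload to a plain string.

    The "op|json" layout is a made-up example format; anything both
    ends agree on (CSV, XML, SQL text) works the same way.
    """
    return "%s|%s" % (op, json.dumps(payload))

def decode_message(message):
    """Recover the command name and payload from the wire string."""
    op, _, body = message.partition("|")
    return op, json.loads(body)

# Round-trip a hypothetical instrument command.
wire = encode_message("SET_VOLTAGE", {"channel": 2, "value": 3.3})
op, payload = decode_message(wire)
```

Because the wire format is just text, extending the protocol means agreeing on new strings, not recompiling matched class hierarchies on both sides.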
  5. Sure. Readability is good, IF there is no penalty. Unfortunately, this is the reason why LVOOP projects take hours to compile, and it doesn't necessarily improve readability. Most of the time it's just boilerplate code with a trivial difference, multiplied by the number of children. It is the equivalent, in other languages, of having a file for every function - and you'd be shot if you did that.
  6. You don't need to duplicate the code. Just pass in the address to talk to Keithley A and Keithley B or, if it is already a "process", make it a clone VI and launch an instance with the appropriate address. If you switch to classes, the methods will still all be very similar, only you will be forced to make each one atomic. Classes are very similar to polymorphic VIs in how they are constructed, so if you have a VI with a couple of operations in a case statement (Set VISA/Get VISA, say) then you will have to make each one a separate VI, not only for that class, but for any overrides too. This is why classes bloat code exponentially, and there have to be very good reasons (IMO) to consider them in the first place, let alone when you already have a perfectly adequate working system.
  7. Installers are probably the worst option for toolkits/reusable code within a company. Sure if you are going to distribute to third parties it warrants the extra effort, but internally it just adds more overhead for pretty much everything (coding, testing and documentation) as well as causing all sorts of problems with version control. It has few, if any, benefits. A far superior solution (and generally free) is to use your favourite flavour of source code control for distribution from a central server. It does come with some caveats, but it is infinitely better and more flexible than installers.
  8. A surprising result, although I am suspicious of you equating CPU utilisation with throughput performance.
  9. I was not aware of this function either (still using 2009 whenever I can). How big are your images? This is how I would approach it. It is the way I always have with high-speed acquisition, and I have never found a better way, even with all the newfangled stuff. The hardware gets faster, but the software gets slower. Once you have grabbed the data, immediately delete the DVR. The output of the Delete DVR primitive will give you the data, and the sub-process will be able to go on to acquire the next frame without waiting. The data from the Delete DVR you copy/wire into a global variable (ooooh, shock horror), which is your application buffer that your file and UI processes can just read when they need to. This is the old-fashioned "global variable data pool" and is the most efficient method (in LabVIEW) of sharing data between multiple processes, and it is perfectly safe from race conditions AS LONG AS THERE IS ONLY ONE WRITER. You may need a small message ("Acquired" - I would suggest the error cluster as the contents) just to tell anyone that wants to know that new data has arrived (mainly for your file process; your UI can just poll the global every N ms). The result is that you have only one deterministic data copy that affects the acquisition (time to use those Preferred Execution Systems) and THE most efficient method of sharing the data (bar none), but - and this is a BIG but - your TDMS writing has to be faster than your acquisition, otherwise you will lose frames in the file. You will never run out of memory or get performance degradation because of buffers filling up, though, and you can mitigate data loss a bit by again buffering the data in a queue (on the TDMS write side, not the acquisition) if you know the consumer will eventually catch up or you want to save bigger chunks than are being acquired.
However, if the real issue is that your producer is faster than your consumer, that is always a losing hand, and if it's a choice between memory meltdown or losing frames, the latter wins every time unless you are prepared to throw hardware at it. I've used the above technique to stream data using TDMS at over 400MB/sec on a PXI rack without losses (I didn't get to use the latest PXI chassis at the time, which could theoretically do more than 700MB/sec). The main software bottleneck was event message flooding (next was memory throughput, but you have no control over that), and the only way you can mitigate it is by increasing the amount you acquire in one go (reducing the message rate), which looks much, much easier with this function.
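The single-writer "global variable data pool" described above can be sketched in text form. The Python below is my own toy stand-in, not the original LabVIEW design: the acquirer thread plays the role of the DVR-draining acquisition loop, a module-level variable plays the global, and an event plays the "Acquired" message. A slow consumer simply skips frames rather than backing up memory, which is exactly the trade-off the post describes.

```python
import threading
import time

latest_frame = None            # the application buffer (the "global")
new_data = threading.Event()   # the "Acquired" notification
STOP = object()                # sentinel so the consumer knows to finish

def acquirer(frames):
    """Stands in for the acquisition loop: the ONE writer, so no locks.
    Each assignment replaces the whole reference in a single step."""
    global latest_frame
    for frame in frames:
        latest_frame = frame   # the single deterministic data copy
        new_data.set()         # tell the file process new data arrived
        time.sleep(0.002)      # simulated acquisition interval
    latest_frame = STOP
    new_data.set()

def file_writer(out):
    """Consumer reacting to the message; a UI would instead poll every N ms.
    If this loop falls behind the acquirer, frames are skipped - the
    memory-vs-lost-frames trade-off discussed above."""
    while True:
        new_data.wait()
        new_data.clear()
        frame = latest_frame
        if frame is STOP:
            return
        out.append(frame)

written = []
t = threading.Thread(target=file_writer, args=(written,))
t.start()
acquirer(range(5))
t.join()
```

Note the safety argument holds only with exactly one writer; add a second and you need real synchronisation.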
  10. I wouldn't. It's LabVIEW's weakest domain and there are far better tools out there, which they will already be familiar with and which LabVIEW can't touch. Worse than that, though, is that Web Services only run on Windows, and I will guarantee 99% of their work is on Linux web servers using Apache and, to a lesser extent, Nginx. There is not a lot you can answer to "how do I integrate LabVIEW with our Apache server?". However, you can gloss over all that - just don't "demonstrate" web servers/apps! Instead you can show them one of LabVIEW's strengths (such as Vision) and say "we can also make it available on the web" - TADA! (without going into how, too much).
  11. Have you tried updating only the visible areas? (I expect 200 columns are not all presentable on screen at one time.) MJE demonstrated a virtual MCL that mitigates the performance impact of cell formatting on large data-sets, and the Table control has a similar API. I also understand that the performance of these controls was vastly improved in LV2013.
  12. Maintainable code is not really quantifiable - it is a subjective assessment. All code is maintainable; it's just a question of how much effort it requires. Even a re-factor (euphemism for a re-write) is a form of maintenance. Good coding practice and style can go a long way towards making the life of a programmer easier, but the crux of the matter is that it can look as pretty as you like, and you could have filled out every description and hint, but if it doesn't work, you won't get paid and you won't be asked to come back. Therefore it cannot form the basis of a performance or coding metric for the purpose of quotation or deliverables. It's a bit like "future-proofing" in that sense. Additionally, only programmers care about neatness, because they are the ones that will be required to maintain it. A project manager just wants it to work, and it's your (my) job to make sure it does, even if the wires are out by a pixel or two. So I like the grading scheme here, because it will be a good indicator that they can write working code under time pressure (like the day before delivery). programmer [proh-gram-er]: noun. 1. a person who converts caffeine into computer programs.
  13. Good news! Rolf's HTTP library does support proxies (without authentication). The parser doesn't include the Host field in the header, though, so you should add that (a trivial change). Servers have tightened up their security in recent years, and the Host field is mandatory on most servers nowadays.
  14. No. What you are describing is merely prepending a sub-domain name. Whilst people sometimes put a proxy on a sub-domain, it's not a requirement. Besides, you may need to authenticate with the proxy. Under normal conditions, the GET request URI is usually a relative path (it doesn't have to be, but that is usually the case) and the Host field of the HTTP header contains the domain of the target page. It is slightly different with a proxy. The GET URI is the full URI of the target page (including the http:// and the domain name of the target page), and you connect to the proxy server, not the server that has the page; the Host field still carries the domain of the target page. A proxy may also require authentication, and those parameters are also sent in HTTP header fields (see section 2). I don't believe any of the LabVIEW web-oriented VIs support forwarding proxies (the sort I think you are describing) out of the box. I may be wrong and they added them in later versions, but I haven't come across any. You might try Rolf's HTTP VIs; I can't remember off-hand if they support proxies, and the OpenG site is down ATM so I can't check. Apart from that, I expect you will have to hand-craft these headers and handle the responses the old-fashioned way (and you will be stuffed if it is SSL/TLS).
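The hand-crafted headers mentioned above look roughly like the sketch below. This is my own illustration, not code from the thread; the hostnames, path, and credentials are placeholders. It shows the direct form next to the proxy form: through a forwarding proxy the request line carries the absolute URI, the Host field names the target, and an optional Proxy-Authorization field carries the proxy credentials.

```python
import base64

# Placeholder proxy credentials, Basic-encoded as "user:pass".
creds = base64.b64encode(b"user:pass").decode()

# Direct request: relative path in the request line;
# the TCP connection goes to www.example.com itself.
direct_request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "\r\n"
)

# Via a forwarding proxy: absolute URI in the request line; the TCP
# connection is opened to the proxy's address and port instead, but the
# Host field still names the target page's domain.
proxy_request = (
    "GET http://www.example.com/index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Proxy-Authorization: Basic " + creds + "\r\n"  # only if the proxy needs auth
    "\r\n"
)
```

As the post says, this only gets you plain HTTP; with SSL/TLS the proxy handshake (CONNECT tunnelling) is a different and much harder exercise.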
  15. All of them want working code, on-time and on budget. Simples.
  16. Depends on your approach, I suppose. Or, more specifically, how much time you have. I haven't met a customer yet that would say yes to me billing for more time to make the diagrams look better or fill in all the VI descriptions/labels. I tend to throw stuff at the diagram, get it working, then make it look pretty. In fact, when faced with a particularly gnarly problem, I will go around and fill in descriptions and labels and make icons as a distraction. It fits better with iterative development, as you can make it look better with each iteration, as long as it works. Often, as more features are added to diagrams, they need re-prettifying as the feature list increases, so making it pretty off the bat is a bit pointless. But here we are talking about an exam which is designed to be time-stressed, and given that the purpose is to certify coding competence, not the examinee's graphic-design skills or obsessive/compulsive tendencies, I think this emphasis of marking is more fitting. If you have time at the end of the exam to make it easier to read for the examiners, great, but if it's that bad they can press THE button. Working code is a better yardstick for coding competence and debugging capability in a time-constrained environment, IMO (at least for a CLED), and that's what employers want. The Architect cert is probably where how pretty it looks is more relevant (more a test of communication than the CLED), once you've proved you can write the code first. But what do I know! I've no certifications at all.
  17. Depends what you mean by "easy". Write a Pascal script to check the registry for the Run-Time and, if it doesn't exist, invoke the run-time installer. If you have the download plugin, you can even get it to go and fetch the installer from NI, then run it.
  18. A much better grading scheme IMO. Employers pay you for functioning code, not whether you fill out labels or descriptions. It's only really a necessity for toolkit developers.
  19. ......and you stopped showing the top 10 like count on the main page the day after I got to #2
  20. The issue isn't so much accessing SQLite on network drives; it's concurrency. I can quite happily read and write to my NAS boxes, but woe betide you if you try to share the DB amongst clients. Just for giggles, I ran the Speed example and pointed it at my NAS box (over a wifi connection), and it achieved about 0.5 secs to insert and read 10,000 records.
  21. There is the possibility that data won't be written to the DB when using PRAGMA synchronous=OFF on a network share, but if that is acceptable then you should also set PRAGMA journal_mode=PERSIST. The default is DELETE, and this severely hinders performance on network drives and increases the possibility of collisions and locking errors.
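For reference, the two pragmas above can be applied right after opening the connection. The snippet below is a minimal sketch using Python's sqlite3 module (the file path and table are invented for the demo); note that journal_mode only takes effect on a file-backed database, since an in-memory DB ignores it.

```python
import os
import sqlite3
import tempfile

# A throwaway file-backed database for the demo.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# Faster, but a crash or dropped share may lose the last writes.
conn.execute("PRAGMA synchronous=OFF")

# PERSIST keeps the journal file around instead of deleting it after every
# transaction; the repeated delete is what hurts on network drives.
# The pragma echoes back the mode actually in effect.
mode = conn.execute("PRAGMA journal_mode=PERSIST").fetchone()[0]

conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(3)])
conn.commit()
conn.close()
```

Checking the value returned by the journal_mode pragma is worthwhile, because SQLite silently keeps the old mode if the requested one cannot be applied.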
  22. Then why not have the server write to a CSV file (or whatever) and then just import it into a local database for use? It'll be a one-off performance hit to retrieve, then full SQLite performance whilst in use.
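The "fetch once, query locally" idea above amounts to a small import step. The sketch below is my own illustration in Python; the CSV columns, table name, and values are invented for the example, and an in-memory database stands in for the local DB file.

```python
import csv
import io
import sqlite3

# Stand-in for the CSV the server wrote; columns are hypothetical.
csv_text = "timestamp,value\n1,10.5\n2,11.2\n3,9.8\n"

conn = sqlite3.connect(":memory:")  # stands in for the local DB file
conn.execute("CREATE TABLE readings (timestamp INTEGER, value REAL)")

# The one-off hit: parse the CSV and bulk-insert it. SQLite's type
# affinity converts the numeric-looking strings to INTEGER/REAL.
rows = list(csv.DictReader(io.StringIO(csv_text)))
conn.executemany("INSERT INTO readings VALUES (:timestamp, :value)", rows)
conn.commit()

# From here on, every query runs at local SQLite speed.
count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
```

All the concurrency pain of a shared network DB disappears because each client only ever touches its own local copy.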
  23. This is what the SQLite peeps have to say. They are really talking about a single client, however, and you will find it impossible (I think - never tried) to create a shared locking mechanism between two machines. You might as well use a proper client/server DB. There is a compile-time option where you can choose the locking mechanism (file-based may work better), but you would have to compile it yourself.