Posts posted by ShaunR

  1. Hi,

    I have some experience with image processing using NI IMAQdx with some standard cameras and line scans, but I don't have much knowledge about the camera interface itself. Now I would like to do some home projects using NI IMAQdx, but with a lower-budget camera of decent quality. I found some wireless cameras on eBay but the quality is not really good (I think). There is a new Raspberry Pi camera on the market (http://www.raspberrypi.org/product/camera-module/).

    I am collecting information on how to use this camera. I know that I have to use a Raspberry Pi Model B to connect to it, but that only provides the ability to monitor the camera directly. Now I wonder how hard it would be to make an IP camera from this Raspberry Pi and camera. I am not an expert in Linux or C programming, but I am willing to learn. Can anyone give me some idea of what I need to look for, and where? Or another camera option?

     

    Thanks!

     

    NI software doesn't run on Raspberry Pi but it's fairly easy to set up an IP camera with the motion service.
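
    If it helps, here is a rough way to check the stream from the PC side once motion is running. This is a Python sketch (LabVIEW being graphical, text code is just for illustration); it assumes motion's default stream port of 8081, that stream_localhost is off, and "raspberrypi.local" is a placeholder for whatever hostname/IP your Pi has. It pulls one JPEG frame out of the MJPEG stream and saves it to disk.

        # Grab one JPEG frame from motion's MJPEG stream (sketch only).
        # Host name and port are assumptions - adjust for your setup.
        import urllib.request

        STREAM_URL = "http://raspberrypi.local:8081/"

        def grab_frame(url=STREAM_URL):
            buf = b""
            with urllib.request.urlopen(url, timeout=10) as stream:
                while True:
                    buf += stream.read(4096)
                    start = buf.find(b"\xff\xd8")            # JPEG start-of-image marker
                    end = buf.find(b"\xff\xd9", start + 2)   # JPEG end-of-image marker
                    if start != -1 and end != -1:
                        return buf[start:end + 2]

        if __name__ == "__main__":
            frame = grab_frame()
            with open("frame.jpg", "wb") as f:
                f.write(frame)
            print("saved %d bytes" % len(frame))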

  2. That was one of the worst and longest-lived bugs in LabVIEW, and it caused me no end of grief. I won't be able to upgrade for a bit, but this single issue alone makes me think it will be a great patch release as far as code maintenance goes.

     

    It's an interesting solution to a problem I have never seen. Whilst I would agree it's better than introducing arbitrary bugs, I just wonder how effective it will be when SCC is used, as it may end up being just as big a pain as the merge tool and phantom recompiles.

  3. What part was too harsh?  That if I were using Mac/Linux I would certainly welcome 64-bit versions (and that it clearly took most of the development time) or that I found very little difference between LV13 and LV14 on my Windoze machine.  I have stated elsewhere on multiple occasions that I think the 64-bit versions are the big story of LV14 yet the marketing so far has still been trumpeting new features like the icon.

     

    I assume those statements were under the beta forums?

     

    I was just saying that trivialising the 64-bit support was a bit harsh just because it is of no use to you on Windows, where we have had it for yonks. If you have ever worked with large datasets (teaching grandma to suck eggs, I expect), you will be aware of the importance of 64 bit, and now NI have spent considerable energy porting to the other 64-bit platforms, which were the withered limbs of LabVIEW's platform support.

     

    Personally, nothing, IDE or feature-wise, has convinced me to move from 2009 in any of the versions since, let alone 2014, so it's no change there for me. Even though I produce tools in later versions, they are still developed in 2009 (64-bit, at that) and just packaged in later versions due to 3rd-party support requirements. I think, however, a huge amount of kudos should go to NI for expanding the 64-bit platforms and making LV a truly cross-platform solution, even if the best thing to crow about under Windows is a new icon.

  4. Likewise, if I was looking to move to the Mac or Linux platform, LV14 is a most welcome development.  As for a Windoze user considering an upgrade, I did not see much there to make it worth the pain.  

     

    I think that's a bit harsh. Mac and Linux 64-bit support is probably the biggest change in LV since 2009. I would agree with you (and have often stated similar) for the releases from 2009 to 2013, and might even go so far as to say they were only really cosmetic changes, but 64-bit platform support across the board is a phenomenal step forward that would have required a huge investment in time and skills.

  5. I should have entitled the original thread 'Decoupling LVOOP class-based message systems that communicate across a network' or perhaps been even more generic and asked: 'How to best decouple a class from its implementation?'

    As stated, I am not interested in changing the whole architecture and abandoning message classes.  For the most part, they work very well and make for efficient and clean code.  But every architecture has its issues.

    And serialization (as far as I understand it) really does not help anything because you still need the class on both ends to construct and execute the message.

    I did not intend to jump down anyone's throat but if you re-read your responses, they seem a bit pushy instead of being helpful for solving the problem.  I would prefer to focus on the OOP decoupling problem and solve that rather than 'pull out all the nails and replace them with screws'.

     

    I think you read the words, but didn't understand the content. My writing style can seem adversarial sometimes, but it is usually with the genuine intent to help or, in this case, to stop you going off on a unicorn hunt. The point I'm trying to get across is that you are chasing a unicorn, and only because the solution must be LV classes. If you drop that requirement, it's just an ordinary horse and we have loads of them.

     

    A better heading would have been: "How do I reconstitute an LV object from a string sent over a network, like I can in Python, JavaScript or PHP?". That would have got you the unanimous response of "You can't!", and it is the reason why your "decoupling" search will ultimately be a dead end when using classes.
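
    To illustrate what that looks like in a language that can do it, here is a minimal Python sketch; the message class, registry and JSON wire format are all invented for the example, not part of any LabVIEW framework:

        # Reconstituting an object from a string at run time (Python sketch).
        import json

        class Message:
            registry = {}
            def __init_subclass__(cls, **kw):
                super().__init_subclass__(**kw)
                Message.registry[cls.__name__] = cls   # subclasses register themselves by name

        class SetVoltage(Message):
            def __init__(self, volts):
                self.volts = volts
            def do(self):
                print("setting output to %.2f V" % self.volts)

        def decode(wire: str) -> Message:
            payload = json.loads(wire)
            cls = Message.registry[payload["type"]]    # look the class up by name at run time
            return cls(**payload["args"])              # instantiate it dynamically

        decode('{"type": "SetVoltage", "args": {"volts": 1.5}}').do()

    The dynamic lookup and instantiation in the last two lines of decode are precisely the bits that LabVIEW's strict typing won't let you write generically.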

     

    Anyway, I've said my bit. I've clarified and re-clarified, so it's time for me to shut up, with a final apology for coming across as "pushy".

  6. I knew it was a bad idea to give a specific example in a discussion like this.  Inevitably, someone would read way too much into it and make a ton of unfounded assumptions that just go off on a tangent away from the original topic of the thread.

     

    Not true. No assumptions, just pure string messages and an example of how it can work based on your own description (DB, XML, Queues, Events, Classes - it doesn't matter, because it is decoupled). And I've already told you how to decouple the classes - serialise to strings - but you don't like that answer.

     

    When you insist on only using a hammer, everything looks like a nail. So don't jump down my throat when a screw bends as you try to hammer it into concrete and I explain why, along with easier alternatives using a screwdriver.

  7. My server loads a hierarchical set of data from a database and stores it in a class that is a composition of several classes that represent the various sub elements of the data's hierarchy.  When a client connects to the server, the server needs to send this data Y to the client so it can be formatted and displayed to the user.  So, both will need the ability to understand this Y data class.  And the client's BB class must accept this Y class as input (normally by having it be an element of the message class's private data).

    Now I suppose I could flatten the class on the server side and send it as a string using the generic CC class, then on the client side I could write the BB class to take a string data input so the CC class could pass the data to the child 'do' method in the BB class, but at that point I would have to unflatten the string to the Y data type so it could be used in the client.

     

    Hmmm. So let me get this right. You have taken an implementation-agnostic storage mechanism (a DB), queried it and stuffed the result into an implementation- and language-specific set of classes, which you then wish to transmit to another set of language- and implementation-specific classes that may or may not be the same, and now you want to break the enforced cohesion caused by using the aforesaid classes?

     

    Why not just send the SQL queries instead of faffing around with LV classes (a bit like this), and leave the client to figure out how it wants to display the data (basically just as string indicators; use the DB to strip type)? If you have a DB in the mix, then you can always craft SQL queries for the data and prepend instructions for functional operations. If you want to wrap all that formatting into message classes, fine. Like I said, encoding is tedious, but for decoding, the fact that you cannot dynamically instantiate a class in LabVIEW, plus strict typing, will ensure you cannot write a generic decoder (which is what AQ's discombobulator thingy attempts to address).

     

    Using message classes for this type of thing is exactly what I was saying about straitjacketing, and not using a class messaging system will be quicker, smaller, and easier to maintain and extend. In fact, in this instance, as you are using a DB, you will find that you can sometimes extend it without writing any LV code at all - just by changing and expanding the string messages via a file or via the client, since the server acts simply as a message parser.
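
    As a concrete (if contrived) illustration of the "server as a message parser" idea, here is a Python sketch. The "OP|payload" message format and the in-memory SQLite database are invented for the example; the strings could just as well travel over TCP, queues or events:

        # String messages carrying SQL; the server just parses and executes them.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE results (channel TEXT, value REAL)")
        db.executemany("INSERT INTO results VALUES (?, ?)", [("ai0", 1.23), ("ai1", 4.56)])

        def handle(message: str) -> str:
            op, _, payload = message.partition("|")
            if op == "QUERY":
                rows = db.execute(payload).fetchall()
                return "\n".join(",".join(str(v) for v in row) for row in rows)
            if op == "EXEC":
                db.execute(payload)
                return "OK"
            return "ERROR|unknown operation " + op

        # The client only ever sends and displays strings - no shared classes required.
        print(handle("QUERY|SELECT channel, value FROM results"))
        print(handle("EXEC|INSERT INTO results VALUES ('ai2', 7.89)"))

    Adding a new capability is then just a new string the parser understands, which is where "extend it without writing any LV code" comes from.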

  8. I'll argue that you increase the number of VIs in exchange for code readability. I'd much rather see a well defined interface to a class being used versus an enum going into a case structure with a ton of cases. Add in a variant that needs casting, or a cluster-saurus where only some of the fields are valid based on the enum and classes are way easier to read. Easier to read = Easier to debug = easier to maintain. Number of VIs in the project isn't really a concern for me, and I don't see why it should be.

    Sure. Readability is good, IF there is no penalty. Unfortunately, this is the reason why LVOOP projects take hours to compile.

    But it doesn't necessarily improve readability. Most of the time it's just boilerplate code with a trivial difference, multiplied by the number of children. It is the equivalent in other languages of having a file for every function, and you'd be shot if you did that.

  9. And now, the downsides. Mainly, there is a lot of code duplication in this architecture. Many AEs look very similar, with actions like Set VISA Address popping up in every single device. Moreover, imagine having two identical Keithleys (a very real prospect)... one would have to duplicate each VI to create Keithley A and Keithley B, which seems silly indeed. Also, I don't particularly like the extensive use of Variants to limit the number of inputs on the AE.

     

    You don't need to duplicate the code. Just pass in the address to talk to Keithley A and Keithley B or, if it is already a "process", make it a Clone VI and launch an instance with the appropriate address.

     

    If you switch to Classes, the methods will still be all very similar, only you will be forced to make each one atomic. Classes are very similar to polymorphic VIs in how they are constructed, so if you have a VI with a couple of operations in a case statement (Set VISA/Get VISA, say) then you will have to make each one a separate VI, not only for that class, but for any overrides too. This is why classes bloat code exponentially, and there have to be very good reasons (IMO) to consider them in the first place, let alone if you already have a very adequate working system.
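
    To labour the point about parameterising rather than duplicating, here is a quick Python sketch (pyvisa is assumed as the VISA layer, and the GPIB addresses and SCPI commands are placeholders for whatever your Keithleys actually use):

        # One driver, two instruments - only the VISA address differs.
        import pyvisa

        class Keithley:
            def __init__(self, visa_address: str):
                rm = pyvisa.ResourceManager()
                self.inst = rm.open_resource(visa_address)

            def identify(self) -> str:
                return self.inst.query("*IDN?")

            def measure_voltage(self) -> float:
                return float(self.inst.query("MEAS:VOLT:DC?"))

        keithley_a = Keithley("GPIB0::22::INSTR")
        keithley_b = Keithley("GPIB0::23::INSTR")
        print(keithley_a.identify(), keithley_b.identify())

    The same applies to an AE or clone VI: keep one body of code and feed it the address (or launch one clone per address).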

  10. My boss is trying to save money by creating installers for code reuse :(

     

    Installers are probably the worst option for toolkits/reusable code within a company. Sure, if you are going to distribute to third parties it warrants the extra effort, but internally it just adds more overhead for pretty much everything (coding, testing and documentation), as well as causing all sorts of problems with version control. It has few, if any, benefits.

     

    A far superior solution (and generally free) is to use your favourite flavour of source code control for distribution from a central server. It does come with some caveats, but it is infinitely better and more flexible than installers.

  11. In both cases, the performance (judging by CPU utilisation) averaged about 10 % worse. It would appear from my very crude tests that DVR is faster.

     

    A surprising result, although I am suspicious of you equating CPU utilisation with throughput performance.

  12. Ok, I had to go back and check this because it didn't sound right. It turns out, even messaging the data around results in this buffer phenomenon. I guess I never saw it because I wasn't trying to push the code this hard. Hmmmm. In that case, it would seem that messaging the DVR to an FIO process does give some advantages.

     

     

    I was not aware of this function either (still using 2009 whenever I can ;) ).

     

    How big are your images?

     

    This is how I would approach it. It is the way I have always done it with high-speed acquisition, and I have never found a better way, even with all the new-fangled stuff. The hardware gets faster, but the software gets slower :D

     

    Once you have grabbed the data, immediately delete the DVR. The output of the Delete DVR primitive will give you the data, and the sub-process will be able to go on to acquire the next without waiting. The data from the Delete DVR you copy/wire into a Global Variable (ooooh, shock horror), which is your application buffer that your file and UI can just read when they need to. This is the old-fashioned "Global Variable Data Pool" and is the most efficient method (in LabVIEW) of sharing data between multiple processes, and it is perfectly safe from race conditions AS LONG AS THERE IS ONLY ONE WRITER. You may need a small message (Acquired - I would suggest the error cluster as the contents) just to tell anyone that wants to know that new data has arrived (mainly for your file process; your UI can just poll the global every N ms).

     

    The process here is that you only have one, deterministic, data copy that affects the acquisition (time to use those Preferred Execution Systems ; ) ) and you have THE most efficient method of sharing the data (bar none), but - and this is a BIG but - your TDMS writing has to be faster than your acquisition, otherwise you will lose frames in the file. You will never run out of memory or get performance degradation because of buffers filling up, though, and you can mitigate data loss a bit by again buffering the data in a queue (on the TDMS write side, not the acquisition) if you know the consumer will eventually catch up or you want to save bigger chunks than are being acquired. However, if the real issue is that your producer is faster than your consumer, that is always a losing hand, and if it's a choice between memory meltdown or losing frames, the latter wins every time unless you are prepared to throw hardware at it...

     

    I've used the above technique to stream data using TDMS at over 400MB/sec on a PXI rack without losses (I didn't get to use the latest PXI chassis at the time, which could theoretically do more than 700MB/sec :( ). The main software bottleneck was event message flooding (next was memory throughput, but you have no control over that), and the only way you can mitigate it is by increasing the amount you acquire in one go (reduce the message rate), which looks much, much easier with this function.
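
    For anyone who wants to see the shape of the "single writer, global data pool" idea outside LabVIEW, here is a rough Python sketch. The threads, fake frames and queue are stand-ins for the acquisition and file processes; it is only meant to show one writer overwriting the pool while the file side is nudged by a small "acquired" message:

        # Single-writer global data pool with a notified reader (sketch only).
        import threading, queue, time

        latest_frame = None            # the "global variable" - written by one writer only
        new_data = queue.Queue()       # small "acquired" message for the file process

        def acquire():
            global latest_frame
            for i in range(10):
                frame = [i] * 1000             # stand-in for the Delete DVR output
                latest_frame = frame           # the single, deterministic data copy
                new_data.put(i)                # tell the file process new data arrived
                time.sleep(0.01)
            new_data.put(None)                 # shutdown sentinel

        def write_file():
            while new_data.get() is not None:
                snapshot = latest_frame        # read the pool; if we lag, frames are skipped
                # ... TDMS-style write of snapshot would go here ...

        acq, writer = threading.Thread(target=acquire), threading.Thread(target=write_file)
        acq.start(); writer.start()
        acq.join(); writer.join()

    A UI would simply read latest_frame on a timer, and exactly as described above, if the file side cannot keep up it reads a newer frame than the one it was told about - frames are lost in the file rather than memory filling up.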

  13. If they are web developers, I'd show them LabVIEW 2013 Web Services (before LabVIEW 2013, web services were tricky). They should be able to create RESTful APIs for their front ends with LabVIEW. Graphical programming might even be quicker for them to prototype with.

     

    I wouldn't. It's LabVIEW's weakest domain, and there are far better tools out there that they will already be familiar with, which LabVIEW can't touch. Worse than that, though, is that Web Services only run on Windows, and I will guarantee 99% of their work is on Linux web servers using Apache and, to a lesser extent, Nginx. There is not a lot you can answer to "how do I integrate LabVIEW with our Apache server?".

     

    However, you can gloss over all that; just don't "demonstrate" web servers/apps! Instead, you can show them one of LabVIEW's strengths (such as Vision) and say "we can also make it available on the web" - TADA! (without going into the how too much ;) ).

  14. Have you tried updating only the visible areas (I expect 200 columns cannot all be presented on screen at one time)?

    MJE demonstrated a Virtual MCL that mitigates the performance impact of cell formatting on large datasets, and the Table control has a similar API.

     

    I also understand that the performance of these controls was vastly improved in LV2013.

  15. I cannot agree with you there, as you missed out the fact that the code needs to be maintainable, and by maintainable I mean by any programmer of a suitable experience level, not just the person who wrote it.

     

    I do not believe every node & wire should be labelled, but there should be at least enough for somebody else to pick up the code and run with it.

     

    Maintainable code is not really quantifiable; it is a subjective assessment. All code is maintainable, it's just a question of how much effort it requires. Even a refactor (euphemism for a rewrite) is a form of maintenance. Good coding practice and style can go a long way towards making the life of a programmer easier but, the crux of the matter is, it can look as pretty as you like and you could have filled out every description and hint, but if it doesn't work you won't get paid and you won't be asked to come back.


    Therefore it cannot form the basis of a performance or coding metric for the purposes of a quotation or deliverable. It's a bit like "future-proofing" in that sense. Additionally, only programmers care about neatness, because they are the ones that will be required to maintain it. A project manager just wants it to work, and it's your (my) job to make sure it does, even if the wires are out by a pixel or two.

     

    So I like the grading scheme here because it will be a good indicator that they can write working code under time pressure (like the day before delivery :D).

     

    programmer [proh-gram-er]: noun

    1. a person who converts caffeine into computer programs.

  16. Thanks for the insight, Shaun. I do recall seeing Rolf's HTTP VIs, and this is what in fact led me to believe I could just prepend the proxy, but probably I just misunderstood. I don't really know enough about this stuff so I guess it is time to learn something!

    Good news. Rolf's HTTP library does support proxies (without authentication)!

    The parser doesn't include the Host field in the header, though, so you should add that (a trivial change). Servers have tightened up their security in recent years, and the Host field is mandatory on most servers nowadays...

  17. Is this correct?

     

    No.

    What you are describing is merely prepending a subdomain name. Whilst people sometimes put a proxy on a subdomain, it's not a requirement. Besides, you may need to authenticate with the proxy.

     

    Under normal conditions, the GET request URI is usually a relative path (it doesn't have to be, but that is usually the case) and the Host field of the HTTP header contains the domain of the target page. It is slightly different with a proxy.

     

    The GET URI is the full URI of the target page (including the http:// and the domain name of the target page). The Host field remains the domain name of the target page, but you connect to the proxy server rather than the server that has the page.

     

    A proxy may also require authentication and these parameters are also sent in the HTTP header fields (see section 2).

     

    I don't believe any of the LabVIEW web-oriented VIs support forwarding proxies (the sort I think you are describing) out of the box. I may be wrong and they added them in later versions, but I haven't come across any. You might try Rolf's HTTP VIs; I can't remember off-hand whether they support proxies, and the OpenG site is down ATM so I can't check. Apart from that, I expect you will have to hand-craft these headers and handle the responses the old-fashioned way (and you will be stuffed if it is SSL/TLS).
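
    For what "hand-crafting the headers" means in practice, here is a rough Python sketch of a GET through a forwarding proxy with basic authentication. The proxy address, credentials and target URL are placeholders, and (as above) this only works for plain HTTP, not SSL/TLS:

        # Hand-crafted GET via a forwarding proxy (plain HTTP only, sketch).
        import base64, socket

        PROXY_HOST, PROXY_PORT = "proxy.example.com", 8080
        TARGET = "http://www.example.com/index.html"
        AUTH = base64.b64encode(b"user:password").decode()   # proxy credentials

        request = (
            f"GET {TARGET} HTTP/1.1\r\n"                     # absolute URI of the target page
            f"Host: www.example.com\r\n"                     # host of the target page
            f"Proxy-Authorization: Basic {AUTH}\r\n"
            f"Connection: close\r\n"
            f"\r\n"
        )

        with socket.create_connection((PROXY_HOST, PROXY_PORT)) as s:
            s.sendall(request.encode())
            response = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    break
                response += chunk

        print(response.split(b"\r\n\r\n", 1)[0].decode())    # response headers only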

  18. I think perhaps the design, style and documentation should be worth more. I have a pretty high level of coding standard (i.e. block diagram neatness, comments etc) that I always try and aim for. This is ingrained in me after more than 15000 hours of doing LabVIEW. As such, it is very difficult to override this instinct and code "messily" just to try and get all the functionality in.

     

    Depends on your approach, I suppose. Or, more specifically, how much time you have. I haven't met a customer yet that would say yes to me billing for more time to make the diagrams look better or fill in all the VI descriptions/labels. I tend to throw stuff at the diagram, get it working, then make it look pretty. In fact, when faced with a particularly gnarly problem, I will go around and fill in descriptions and labels and make icons as a distraction. It fits better with iterative development, as you can make it look better with each iteration, as long as it works. Often, as more features are added to diagrams, they need re-prettifying as the feature list increases, so making it pretty off the bat is a bit pointless.

     

    But here we are talking about an exam which is designed to be time-stressed, and given that the purpose is to certify coding competence, not the examinee's graphic design skills or obsessive/compulsive tendencies, I think this emphasis of marking is more fitting. If you have time at the end of the exam to make it easier to read for the examiners, great, but if it's that bad they can press THE button. However, working code is a better yardstick for coding competence and debugging capability in a time-constrained environment, IMO (at least for a CLED), and that's what employers want. The Architect cert is probably where how pretty it looks is more relevant (more a test of communication than the CLED), once you've proved you can write the code first.

     

    But what do I know! I've no certifications at all ;):D
