
ShaunR

Members
  • Posts

    4,883
  • Joined

  • Days Won

    297

Everything posted by ShaunR

  1. Just had a cursory glance, but it looks like you are calculating the coefficients and passing the X-Y parameters for the linear fit twice with the same data (only the weightings change from the first "fit" to the second). You could pre-calculate them in a separate loop and just pass them into the other loops. You might also benefit from passing the x array through (via the coefficient VI).
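To illustrate the pre-calculation point, here is a rough Python sketch (not the original LabVIEW code; the helper and variable names are invented): in a refit where only the weights change between passes, the abscissae and their squares can be computed once and reused.

```python
def weighted_linfit(x, x2, y, w):
    """Weighted least-squares fit of y ~ a + b*x via the normal equations.
    x2 (the squared abscissae) is precomputed once and reused across
    refits where only the weights w change."""
    sw   = sum(w)
    swx  = sum(wi * xi for wi, xi in zip(w, x))
    swx2 = sum(wi * x2i for wi, x2i in zip(w, x2))
    swy  = sum(wi * yi for wi, yi in zip(w, y))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = sw * swx2 - swx * swx
    b = (sw * swxy - swx * swy) / det    # slope
    a = (swy * swx2 - swx * swxy) / det  # intercept
    return a, b

x = [0.0, 1.0, 2.0, 3.0]
x2 = [xi * xi for xi in x]              # precomputed once, outside the loops
y = [1.0, 3.0, 5.0, 7.0]                # exactly y = 1 + 2x

a1, b1 = weighted_linfit(x, x2, y, [1.0, 1.0, 1.0, 1.0])  # first pass
a2, b2 = weighted_linfit(x, x2, y, [1.0, 2.0, 2.0, 1.0])  # second pass: only w changed
```

Since the data are exactly linear, both passes recover a = 1, b = 2; the point is that only the weighted sums are redone, not the x work.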
  2. A lot of what you need for the network "architecture" is probably contained in the Dispatcher in the CR. It can do the "clustering" simply by placing the dispatcher in the right place and pointing the publishers and subscribers to it (it can be on the same machine or centralised, and you can have multiple dispatchers spread out across many machines). What you send, and what you do with it, is then up to your implementation. I wouldn't suggest UDP for this, however, unless you are going to write a robust protocol on top, which is a lot of work and negates a lot of the advantages.
  3. In my case it was one FPGA card (~$4K). However, that reminds me of the other trick: ask for a loaner to test before you buy if you have never used the product before (they've always got a couple kicking around for demos and conferences). It concentrates a sales rep's mind. They tend to be focused on "potential" sales rather than "previous" sales, but they can pull the strings.
  4. I don't think there are that many. Guess I'll have to install them again and find out. I would. There are a lot fewer "working" things in there already. Alternatively, start a new thread. Indeed. I have to be very careful about dependencies. Some clients insist on "approved vendors" or "no 3rd party/open source" and, for most of the stuff in OpenG that I would use, I have my own versions that I've built up over the years. It's just easier not to use it than to get bogged down in lengthy approval processes.
  5. Do they need to be utility VIs? We can detect control chars (they would break the lookup, I think, so they need to be removed) and, to escape, the Flatten could just have a boolean. Not really sure what you have in mind, though. If I remember correctly, as long as you keep the copyright on the VI and, perhaps, the documentation, you can use, modify and do pretty much what you like with them (someone on the OpenG team could advise). It may be possible to rename just the variant stuff (there's only a couple) so they don't clash, and include them in the package so it is then completely self-contained with no dependencies. Might I suggest you place it in the unconfirmed CR so that we can make a list of things that need to be done and manage it? We've been rather obnoxious in hijacking JZoller's thread; my apologies, JZoller! Yeah, there is a re-use library consisting of about 10 VIs for mundane stuff. That's about all I need from project to project. Everything else is self-contained APIs.
  6. Nearly. Flatten adds things like quotes and brackets. For conversion, these need to be removed. Whilst I dare say you could make it work that way, I wanted to leave most of your stuff as-is and "add" rather than change, if at all possible. Put it in the CR and see how many downloads it gets. It's not a case of liking; there's some great stuff in there. It's a case that not everyone can use OpenG stuff. It's also not really appropriate to expect someone to install a shedload of 3rd party stuff that isn't required just to use a small API (I had to install OpenG especially just to look at your code, and uninstalled it afterwards).
  7. Well, here's my experience... I was working on an FPGA and we wanted to transfer huge amounts of data from a 3rd party FPGA acquisition board, across the PXI backplane, to an NI board for crunching. We couldn't use the NI streaming VIs since the technology is proprietary and NI wouldn't liaise with the 3rd party so they could implement it in their FPGA (which is fair enough). However, NI said that they could DMA at about 700 MB/s in each direction (1.5 GB/s) across the backplane, which was "good enough for our team". The only problem was that the examples never addressed this sort of throughput, apart from mentioning that, under the right conditions, it was possible. So, long story short, the local NI rep hooked me up with the UK FPGA guru. I sent through an example of what we wanted to do (with which I was getting about 70 MB/s) and he sent back a modified version with comments about where and what was important in my example for getting the throughput. It could do 735 MB/s (each direction). He also sent me an internal (not for distribution) benchmark document of all the NI PXI controllers: what their capabilities were, what measured throughputs could be obtained, with which backplanes, and which board positions within the rack (which is important). Saying all that, it did take me two weeks to get through to him. I had to go through the "correct channels" first before the NI rep had a good excuse to "escalate" the issue through the system. The key is really building up a contacts list of direct-dial numbers to the right people. If you know what you are talking about, they will be happy to take your call as they know it's not a silly problem. NI's problem is that there are too many inexperienced people calling support for trivial things and, unfortunately for us, their system has been set up so that the engineers are well buffered from this.
  8. Sweet. Only the boring parts to go, then. I made a slight change to your lookup by adding a "To String" in each of the classes to be overridden. This means that the polymorphic VIs become very simple (not to mention that I could just replace my lookup with yours, change terminals and, hey presto, all the polys I've already created, with icons, slot straight in). I've added U8, U16, U32, U64, I8, I16, I32, I64, String, String Array, Double Array and Boolean. (I've back-saved it to 2009 so others can play, although the Hi-Res timer isn't available so the benchmark test won't work.) Next on my list is to get rid of the OpenG stuff.
  9. Don't forget the support! Support for NI devices is second-to-none. It is this you are truly paying for.
  10. Not quite. The code knows nothing. It doesn't know what a glossary IS, only that it is a field name it should look up for the programmer; it just gets what the programmer asks for. If the JSON structure changes, no changes to the API are needed. It doesn't care what the structure of the JSON object is; it's just an accessor to the fields within the JSON object, any JSON object. There is nothing stopping you doing this, but it isn't the responsibility of a parser. There is nothing to stop you creating an "object" output polymorphic case for your "experiment setup" (or indeed a whole bunch of them); you just need to tell it what fields it consists of and add the terminal. However, that polymorphic case will be fixed and specific to your application, and not reusable on other projects (as it is with direct conversion to variant clusters). What is more likely, however, is that your class accessors (Get) will just call one of the polymorphic VIs with the appropriate tag when you need to get the value out. I think you just need a better lookup and you'll be there (with bells on). No need to go complicating it further by making the programmer write reams of application-specific code just to get a value out, for the sake of "objectness".
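The "structure-agnostic accessor plus typed getters" idea could be sketched in Python like this (purely illustrative; the getter functions stand in for the polymorphic cases, and the field names are made up):

```python
import json

# The parser knows nothing about the document's structure; the caller
# supplies the tag and picks the typed getter (the "polymorphic" case).

def get_string(obj, tag):
    return str(obj[tag])

def get_double(obj, tag):
    return float(obj[tag])

def get_i32(obj, tag):
    return int(obj[tag])

setup = json.loads('{"rate": "1000.5", "channels": 8, "name": "exp1"}')

rate = get_double(setup, "rate")   # a class accessor (Get) would just call
name = get_string(setup, "name")   # one of these with the appropriate tag
```

The accessor layer is reusable across any JSON document; only the tags and chosen types are application-specific.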
  11. This is the "problem" as I was outlining it earlier. You have now hard-coded the retrieval of the value based on the structure of the entire stream. The former is preferable from a genericism point of view. The latter, I think, is inflexible (I use my infamous "->" by the way). Yup, getting it in is OK. Like I said, getting it out again in a generic way, so that you don't "hard-code" it, is the tricky bit. I'll also have to take a look at Ton's thingy, since he is flattening to display. I can then use JZoller's parser.
  12. You included icons! Indeed, it is the getting the value back out that is the problem. Same as with variants/clusters. It's getting interesting now, however. How about a slightly modified JSON of one of your examples? (Get the "NestArray" values.) {"T1":456.789, "T2":"test2", "Nest":{"ZZ":123, "NestArray":[1.3,2.4,1.6]}} I don't think it is sufficient to simply have a look-up as you have here, but it is close.
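For what it's worth, here is a rough Python sketch (not LabVIEW, function names invented) of one way the nested case could be flattened into hierarchical tag/value string pairs so that a plain look-up still works:

```python
import json

def flatten(node, prefix=""):
    """Flatten a parsed JSON object into hierarchical tag -> value-string
    pairs, e.g. 'Nest:NestArray' -> '[1.3, 2.4, 1.6]'. Illustrative only."""
    pairs = {}
    for key, val in node.items():
        tag = f"{prefix}:{key}" if prefix else key
        if isinstance(val, dict):
            pairs.update(flatten(val, tag))   # recurse, extending the tag
        else:
            pairs[tag] = json.dumps(val)      # leaves kept as strings
    return pairs

doc = '{"T1":456.789, "T2":"test2", "Nest":{"ZZ":123, "NestArray":[1.3,2.4,1.6]}}'
table = flatten(json.loads(doc))
values = json.loads(table["Nest:NestArray"])  # the "NestArray" values
```

The hierarchical tag makes nested identifiers unique, so the single-level look-up survives nesting.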
  13. Hmmm. Not sure where I got that from. Certainly the LabVIEW Timestamp Whitepaper I just found shows it is indeed 128 bit, so I'm obviously wrong. But I have recollections of it being 12 bytes, as it was one of the improvements (adding a timestamp) to the Transport.lib (which, after some research, I made 12 bytes). Since then it's just stuck as one of those anomalies to my expectations, since 12 is a bizarre number.
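For reference, the 128-bit layout described in that whitepaper is a signed 64-bit count of whole seconds since the LabVIEW epoch (1904-01-01 00:00:00 UTC) followed by an unsigned 64-bit fraction of a second, big-endian when flattened. A minimal Python decoder (illustrative sketch, not NI code):

```python
import struct

def decode_lv_timestamp(raw):
    """Decode a flattened 128-bit LabVIEW timestamp: big-endian
    i64 whole seconds since 1904-01-01 UTC + u64 fractional seconds."""
    seconds, fraction = struct.unpack(">qQ", raw)
    return seconds + fraction / 2**64

# half a second past the epoch: seconds = 0, fraction = 2**63
raw = struct.pack(">qQ", 0, 2**63)
print(decode_lv_timestamp(raw))   # 0.5
```

The 64-bit fraction is what gives the type its sub-nanosecond resolution, even though the OS clock it is read from is far coarser.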
  14. Yup. Keep drinking the Kool-Aid. It probably took me the same amount of time to write the concept as it did for you to read my posts... lol
  15. Well, a simple way (but I can think of better, more complicated ones: linked lists, variant lookups et al) is to make the column in the 2D array significant and use a hierarchical tag, e.g. "first:second:third". But this is inefficient (for lookups, although probably not prohibitively so) and requires a much more complex parser than I'm prepared to write at the moment (which is where JZoller's stuff comes in). I'm hoping someone has a "slick" approach they've used in the past that we could perhaps just drop in. The intermediary format is really a secondary consideration, apart from that it needs to be easily searchable, structure-agnostic and not make the parser overly complicated just to account for "type". A 2D array of strings is just very good for this, particularly as the input is a string and requires string manipulation to extract the data (regex gurus apply here... lol). Don't forget my comments aren't trying to address the existing code or how it's coded, per se. It's a limitation I perceive with using clusters and variants (or more specifically, variant clusters) as the interfaces.
  16. Some things you perhaps didn't know about the timestamp that may shed some light. In 2009 it is also 1 ms. The 14 ms you are talking about is probably because you were using Windows XP, where the timeslice was about 15 ms (Windows 2000 was ~10 ms). LabVIEW timestamps are 12 bytes (96 bit, not 128). The upper 4 bytes are not used and are always zero.
  17. This is exactly what my example is (analogously: classic LV to LVPOOP). The 2D intermediate string array is Child2, with each row being Child1, and "lookup.vi" is the accessor (Child3). The parent is the DVR. Only we don't need all the extra "bloat" that classes demand. I expect if you were to lay down an example, the internal VIs that do all the real work in your classes (there are only two) would look remarkably similar. If you want a class implementation, then you might be better off looking at AQ's. (I could have also represented the nesting aspect by making the column of the 2D array significant, but I think there may be a better way.)
  18. OK. I thought I'd put some meat on my thoughts and try a proof of concept that people could play with, poke fingers at and demolish. I'm not in any way trying to divert from the sterling work of JZoller, but I hope that perhaps some of the thoughts might light a bulb that can ease further development of his library. Don't get your hopes up that I will develop it further, as JZoller's library is the end goal. Of course, it looks like crap, doesn't work properly (the parser is, how you say, "basic") and the "nesting" still needs to be addressed, since it cannot cope with non-unique identifiers (I do have a solution, but would rather hear others first). So don't expect too much because, as I said, it's only a "Proof of Concept".
  19. Whilst the mechanics may be thought of in that way and, indeed, both may be coerced to emulate the properties of the other, they are actually different topologies: queues are "many-to-one" and events are "one-to-many". ...but with no type! Agreed. I've been bitten by these sorts of things in the past too... they are a real bugger to track down, and the solution is invariably to use a queue instead.
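The topology difference can be sketched in a few lines of Python (illustrative only, using the stdlib `queue` module; the event side is a hand-rolled fan-out, not LabVIEW user events): a queue funnels many producers into one consumer, whereas firing an event hands a copy to every registered listener.

```python
import queue

# Many-to-one: several producers write into ONE queue...
q = queue.Queue()
for producer in ("A", "B", "C"):
    q.put(f"msg from {producer}")
received = [q.get() for _ in range(3)]   # ...one reader drains them, in order

# One-to-many: one "event" is fanned out to EVERY registered listener.
listeners = [queue.Queue() for _ in range(3)]

def fire_event(msg):
    for l in listeners:                  # each listener gets its own copy
        l.put(msg)

fire_event("shutdown")
copies = [l.get() for l in listeners]
```

Note the queue preserves arrival order for its single consumer, which is the ordering guarantee mentioned above; the event fan-out duplicates the message instead.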
  20. Me too. I was hoping someone would come up with a generic solution before I had to, as my first encounter proved it was a "non-trivial" problem. These were my thoughts about JSON and LabVIEW in general from the first skirmish... The thing about using variants (the feature that never was) and clusters is that it requires a detailed knowledge of the structure of the entire JSON stream at design time when reconstituting it back into LabVIEW (not an issue when converting "to" JSON). We are back to the age-old problem of LabVIEW strict typing without run-time polymorphic variant conversion. To get around this, so that it could be used in a run-time, on-the-fly sort of way, I eventually decided that maybe it was better to flatten the JSON to key/value string pairs (here I go with my strings again... lol) that could then be used as a look-up table. Although this still requires prior knowledge of the value type if you then convert a value to, say, a double, it doesn't require the whole JSON structure to be known in advance and instead converts it to a sort of intermediate INI file, which simplifies the parser (no need to account for every LabVIEW type in the parser). In this form, it is easier to digest in LabVIEW with a simple tag look-up, which can be wrapped in a polymorphic VI if "adapt-to-type" is required. It also means, though perhaps a bit out of scope for consideration, that you can just swap the parser out for another (e.g. XML).
  21. Forgot all about your little project until I read "Reading JSON". Just had another look (no errors now). I know it's in its infancy, but I wasn't really sure how to use it if I wasn't using classes. Is the intention to only support classes?
  22. I don't use refs at all. The main reason I use named queues is that they are part of the message "routing". This is why queues are (IMHO) preferable under these particular circumstances: you don't miss messages, and race conditions are alleviated (although not necessarily eliminated, depending on your macro view). With queues you also have asynchronicity, but you can also guarantee the order, which, for control, is highly desirable. Or you could begin sub-paneling (not quite sure if I'm understanding what this is correctly): send the shutdown, and the sub-panel would appear, then disappear as each element was removed (or you could flush it on the off chance it hasn't been enacted yet). I just think for this particular scenario, queues have many advantages over events, not least that you don't "lose" messages just because someone isn't listening.
  23. Named queues. But in a less confrontational aspect, the issue I have is with the idea of "stuffing": that you create a registration, maintain it and generate events whilst not consuming. At least with a queue you can monitor the elements and even restrict the size (for the scenarios where you want to "stuff").
