
smithd

Members • 763 posts • 42 days won

Posts posted by smithd

  1. Out of curiosity, what are you using this for?

    I've recently been looking at zookeeper/etcd/consul, which aren't hash tables but distributed key-value stores used for service discovery and 'global' configuration. I need a way to locate which one of many distributed devices is producing which data, and I thought one of those might be a good fit for the job. In this case, both libraries have an HTTP interface, so the 'interface with LV' part is easy. But I am curious how your DHT use case compares with this service discovery use case.
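    (For context, here's a rough sketch in Python of what the 'interface with LV'-style HTTP part looks like against Consul's KV store. The key names are made up for illustration; the real detail is that Consul returns values base64-encoded inside a JSON array.)

```python
import base64
import json

def decode_consul_kv(response_text):
    """Decode a Consul KV response body: a JSON array of entries whose
    'Value' fields are base64-encoded."""
    entries = json.loads(response_text)
    return {e["Key"]: base64.b64decode(e["Value"]).decode() for e in entries}

# A real lookup would be an HTTP GET such as:
#   http://localhost:8500/v1/kv/devices/?recurse
# Here the response body is faked to keep the sketch self-contained.
sample = json.dumps([{"Key": "devices/cam1",
                      "Value": base64.b64encode(b"192.168.0.10").decode()}])
print(decode_consul_kv(sample))  # {'devices/cam1': '192.168.0.10'}
```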

     

  2. On 11/2/2016 at 6:44 AM, Yagnik_Patel said:

    Hello everyone, 

    I am designing a project in which I need to variant data from cluster to tree view and then transfer name, value and tag from tree to table with drag and drop option as well as select individual cell option. I am attaching my example VI. But it needs OpenG stuff to execute so kindly take notice of it. Can anybody help me because i am not good enough with OpenG VI ?

    Thanks in advance 

    Cluster to Tree and Tree to Table.zip

    Just based on your description, it sounds like you should start with this:

    It takes a variant, and there is a function in there which converts the variant into a tree. There is a class in there (VariantTree__VP) which has a method for setting which tree the class points to (a property node), and then a function, BrowseVariant, which populates the tree with the data in the variant.

    Once in tree form, it's just a bunch of strings and should be easy to access in whatever way you want. This includes the built-in LabVIEW functionality for drag and drop between trees and tables. If you need more custom behavior, you can implement the drag-and-drop events as described here: http://zone.ni.com/reference/en-XX/help/371361J-01/lvhowto/drag_and_drop_programmatic/

  3. 7 hours ago, PiDi said:

    I think I'll just do what LabVIEW currently does - I'll let you set it and it'll break your VI :D

    Easy - there is VI property "Inline Is Allowed", so if it will return false, I'll just disable Inline option.

    The nice thing to do would be to try to make each change, and if the VI breaks (and wasn't broken before) you can undo it.

    The other thing is that there are really only a few possible combinations, so instead of replicating the horrible VI properties menu you could make a list of possible combos... I did something like that here: https://decibel.ni.com/content/docs/DOC-43737
    It's in the execute function (I wouldn't look at the menu maker function; it's pretty gross).

  4. Isn't the radio button approach still static? I'd think you would at least use a listbox so you could easily add tabs.

    For our configuration editor we used a tree control and a subpanel (http://www.ni.com/example/51881/en/), with a sample implementation shown on this page (https://decibel.ni.com/content/docs/DOC-47483, 3rd image).

    Might be a bit complicated for simple editors, but to my mind that's what the tab control is for.

  5. Quote

    Be careful if you use the NI function to get your binary data back, as it has bug in that will truncate the string at the first zero, even though that zero is properly escaped as \u0000.  Png files might or might not have zeros in them, but other binary things do (flattened LVOOP objects, for example).

    Ugh, this is a real killer. I had issue upon issue transferring images, and then right when I thought I had a solution, this hit me. Meh...

     

    The real reason I'm posting is just to bump this and see how JSONtext is coming. It looks like you're still pretty actively working on it on Bitbucket... do you feel more confident about it, or would you still call it "VERY untested"? I'd love to try it out for real when you get closer to, shall we say, a 'beta' release.

    Also, from what I could tell there isn't a license file in the code. Are you planning on licensing it any differently from your other libraries, or did you just never get around to putting in a file?

  6. 7 hours ago, Christian Butcher said:

    If I create a front panel containing a tab control, and each tab has some controls or indicators on it (or a subpanel, perhaps...), what is the runtime cost associated with the inactive tabs?

    If I had to guess, it still has to do all the work of copying data into the control (so you're still inserting into a rolling history buffer for charts, still copying big arrays for graphs), but you save time by not having it redraw. For big data like charts, you can check the state of the tab control before writing, and I have seen that help in large applications.

    4 hours ago, Neil Pate said:

    Not directly answering your question but I would say the biggest favour you can do yourself is not use tabs, subpanels are your friend here.,

    I don't know if you are looking at some specific thing he said elsewhere, but in general I disagree. If you know what you want to put on the screen, there is nothing wrong with tabs, and subpanels overcomplicate the situation because you've gone from 1 UI loop to N.

  7. 22 hours ago, Christian Butcher said:

    If I want to do similar things for TDMS logged data, am I limited to holding a complete data set within LabVIEW's memory and passing only chosen data points to the graph when I want to update, or using fixed decimation regardless of e.g. graph zoom level or focus? I assume that reading from a TDMS file that is currently open for writing is likely to end badly for me...

    There is an Example Finder example of reading and writing a single file concurrently; I think it's called 'Concurrent TDMS' or something. LabVIEW 2015 or 2016 also added in-memory TDMS, which can then be flattened to a file at the end, so that might work for you too.

    As for memory usage, keep in mind memory isn't expensive these days, and an analog value can usually be stored as a 4-byte SGL without losing useful data*. (*I don't have verification of this anywhere, but at one point a coworker went through every C Series module and concluded that there are no modules for which you need a DBL float -- i.e. replace "useful data" with "any data" above.) If possible, it is always easiest to just keep all your data in memory. The biggest challenge is that LabVIEW somehow still isn't entirely 64-bit, but if you're not using RT/FPGA I think you can get away with it these days... gigs and gigs of data to work with.
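    To put numbers on the SGL-vs-DBL point, here's a quick Python sketch using the stdlib array module as a stand-in, since its 'f' type is a 4-byte single like LabVIEW's SGL and 'd' an 8-byte double like DBL:

```python
from array import array

# One million analog samples stored as singles vs. doubles:
n = 1_000_000
singles = array('f', bytes(4 * n))   # 4 bytes/sample, like LabVIEW SGL
doubles = array('d', bytes(8 * n))   # 8 bytes/sample, like LabVIEW DBL

print(singles.itemsize * n)  # 4000000 bytes -- half the footprint of...
print(doubles.itemsize * n)  # 8000000 bytes
```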

  8. TDMS files sound like a reasonable choice for that kind of data, especially at low rates, but so does SQLite.

    Seems like the pros of TDMS are that you can hand a file off to someone and they can open it in Excel, whereas for SQLite you need to make a reader. TDMS is also a bit more flexible about schema: while you wouldn't want to change channel counts within a file, for example, you can change between files without much work. For a database you'd need to come up with a schema that makes sense for you (this doesn't mean it's complicated; it probably just means you have the columns "time", "channel number", and "value"). SQLite lets you query easily for different conditions, while for TDMS you'd have to write all the query logic yourself. Neither of the cons is particularly bad for the situation you've described.

    Your last point (write TDMS files, then store records of which files hold what data) is basically what DIAdem/DataFinder do for TDMS, as I understand them. So depending on your particular work environment, you may already have access to those tools, which may save you some time.
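    To show the schema really isn't complicated, here's what it might look like with Python's built-in sqlite3 (the column names are just the ones suggested above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (time REAL, channel INTEGER, value REAL)")
conn.executemany("INSERT INTO samples VALUES (?, ?, ?)",
                 [(0.0, 0, 1.5), (0.0, 1, 2.5), (0.1, 0, 1.6)])

# Query easily for different conditions -- e.g. one channel over a time window:
rows = conn.execute(
    "SELECT time, value FROM samples WHERE channel = 0 AND time >= 0.0 ORDER BY time"
).fetchall()
print(rows)  # [(0.0, 1.5), (0.1, 1.6)]
```

    Slicing by channel and time window like this is exactly the sort of query you'd otherwise have to hand-roll against TDMS.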

  9. The OpenG libraries are 'Compatible with LabVIEW', which means they meet these standards: https://decibel.ni.com/content/docs/DOC-8981, but otherwise, no clue. Definitely a weird requirement.

     

    Depending on the specific requirements, you might be able to narrow the restriction by only using code which uses native LabVIEW functions (i.e. no CLFN/DLL calls). Then you can figure out what types of malicious things LabVIEW could do -- for example, file I/O functions are probably out, as would be any code which uses VI Server (it could call other code dynamically). These are both pretty easy to verify. From the product page, this leaves you with:

    -Array manipulation
    -String manipulation
    -Application control
    -File handling
    -Zip files
    -Timing tools
    -MD5 digest implementation
    -Error handling
    -Variant and flattened data manipulation

    Still a pretty good set of tools, and I can't think of any way this could affect other machines in a malicious way. Of course, if the definition is 'you must inspect every function'... well, have fun :P

     

  10. 2 hours ago, rharmon@sandia.gov said:

    Thanks infinitenothing for the input... Here is some more info...

    1. Files are copied to a RAM drive on the top level computer. I know that SSD's are about as good today, but when I first set it up the RAM drive was the best.

    2. I currently use 10 reenterant vi's running in parallel and the files get transferred as quickly as the top level can store them. Haven't run into trouble with this option yet. but I want to do other things on the top level vi while the transfer is going on.

    3. The files are contained in a single folder on the computer and the entire folder is moved to the top level computer.

    4. Database is a possibility, I would still need to look into.

    Still trying to determine what the best options are today because I'm re-writing the main top level vi

    First, as mentioned, I'd use an async call per target. You can always parallelize the calls internally later, but at least launching per target means you don't get hung up if one target is slow. If you search Example Finder for 'async', I think there is a good example that either is, or used to be, called 'Benchmarking Asynchronous Calls'. It talks to a set of web servers over HTTP and demonstrates the performance advantages and disadvantages of each approach. Keep in mind while looking at the code that fetching google.com has a different profile than fetching 100 100-MB files, but the example is still good.

    Then you need to decide on your transfer mechanism. I think you can mount network share drives and have Windows copy files for you, but I'm not 100% sure, and I've no idea about performance. The other good APIs built into LabVIEW are FTP, HTTP, and WebDAV. For HTTP I've used Apache, and for FTP I've used FileZilla. I've never set up a WebDAV server, but it's basically HTTP and appears to be built into Windows. Each protocol has its ups and downs...

    HTTP(S): High overhead, though probably not a big deal with a dedicated server, a closed network, and large files. The biggest issue is the sequential nature (request-response), meaning you need to create multiple connections and request in parallel, just like your web browser does. This is where the parallel for loop can come in handy. Note that each handle in the API has a mutex of some kind, so you have to create N handles rather than making N parallel requests on 1 handle. Another issue, which you may or may not hit, is that HTTP uses DLL calls, and each DLL call blocks the thread it runs in. If you have too many outstanding requests, suddenly your application locks up until one of them completes.
    FTP: The functions are old and you probably don't want to look inside, but they work and are pretty quick. Has similar issues to HTTP in that there's a good amount of overhead for every file. The API, if I remember correctly, has a function called Get Multiple Files which literally just fetches the files one by one in sequence... so you'll have to parallelize this for good performance too. It just uses TCP calls under the hood, so you don't have the DLL lock-up issue.
    WebDAV: The base functions use DLL calls, but you can avoid that issue with the async API, where you just register for events on a set of requests. When a file transfer completes, the event fires and you handle it. This is pretty fast, and you don't have to do much besides tell it what to download. Not sure how overall performance compares to FTP, but the individual low-level calls are about on par, slightly slower than FTP in my tests.
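    Whichever protocol you pick, the 'parallelize it yourself' part looks roughly like this Python sketch (fetch_one here is a placeholder argument; a real one would do the actual HTTP/FTP call for one URL):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(urls, fetch_one, workers=10):
    """Fetch many files in parallel so one slow target doesn't block the rest.
    `fetch_one` is whatever does the per-file transfer."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(urls, pool.map(fetch_one, urls)))

# A real fetch_one might be:
#   import urllib.request
#   def fetch_one(url): return urllib.request.urlopen(url).read()
# For this sketch, use a trivial stand-in:
results = fetch_all(["a", "b"], lambda u: u.upper())
print(results)  # {'a': 'A', 'b': 'B'}
```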

  11. If time isn't important, the easiest thing would probably be to break it into two parts: first do the find-and-replace, then do the rest of your processing. However, it doesn't sound like you have a really big file (maybe a few tens of MB), so if you can post a screenshot of your core loop we could provide some more specific feedback on how to avoid data copies.

    If you can't, then drjd's recommendation is the right one, with two additions:
    -If it wasn't clear from his post, a lot of the string functions don't actually produce output strings if you don't wire them. So for something like Match Pattern (I think that's one of the ones where this works), you can say "look through my 10 MB string for '|'" and it won't actually allocate two new X MB strings -- it will just tell you "I found it at index N".
    -If the mystery string is in a file already, you can read it in line by line (if it has lines) or chunk by chunk (it sounds like each chunk is a fixed size, but even if it isn't you can still do this; you just have to be sure to use the leftovers).
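    The chunk-by-chunk reading with leftovers looks something like this sketch (in Python; the function name and chunk size are my own, the point is carrying a small tail so a match spanning two chunks isn't missed):

```python
import io

def find_in_chunks(stream, needle, chunk_size=65536):
    """Search a stream for `needle` without reading the whole file.
    Keeps the last len(needle)-1 bytes of each chunk as leftovers so a
    match straddling a chunk boundary is still found.
    (Assumes chunk_size is larger than len(needle).)"""
    start = 0            # absolute offset of buf[0] in the stream
    tail = b""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            return -1
        buf = tail + chunk
        idx = buf.find(needle)
        if idx != -1:
            return start + idx
        keep = len(needle) - 1
        tail = buf[len(buf) - keep:] if keep else b""
        start += len(buf) - len(tail)

print(find_in_chunks(io.BytesIO(b"abcdef|ghi"), b"|", chunk_size=4))  # 6
```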

     

  12. The only real answer is "the reverse of sending", but the data has to be something reasonable for Python to parse. If you are flattening the data in LabVIEW to binary rather than to a more standard interchange format (I didn't look at the code), you should make sure you understand how LabVIEW stores data in memory. Also be careful that Flatten To String defaults to big-endian byte order and to prepending lengths to everything.

    Might be easiest to look at this example:
    https://decibel.ni.com/content/docs/DOC-47034
    or https://github.com/ni/python_labview_automation

    and this may be useful too:
    https://decibel.ni.com/content/docs/DOC-46761
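    To make the endianness/length-prefix point concrete, here's a rough Python sketch of unflattening on the receiving side. It assumes the Flatten To String defaults (big-endian, prepended sizes); the payload layout, one DBL followed by one string, is just an illustration:

```python
import struct

def parse_lv_string(buf, offset=0):
    """Parse one LabVIEW-flattened string: a big-endian 32-bit length
    prefix followed by that many bytes. Returns (bytes, next_offset)."""
    (length,) = struct.unpack_from(">I", buf, offset)
    start = offset + 4
    return buf[start:start + length], start + length

# Fake a payload as LabVIEW would flatten it: an 8-byte big-endian DBL,
# then a length-prefixed string.
payload = struct.pack(">d", 3.5) + struct.pack(">I", 5) + b"hello"
(value,) = struct.unpack_from(">d", payload, 0)
text, _ = parse_lv_string(payload, 8)
print(value, text)  # 3.5 b'hello'
```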

  13. 10 hours ago, ShaunR said:

    Pharlap is a walk in the park ;) VxWorks was the one that made me age 100 years :D I actually have some source with the relevant changes but never had a device.

    Yeah, I've tried to compile things for VxWorks. Even simple things suck. I know Pharlap is just crappy Windows 95, but I'd still rather not edit the source to get it working; I don't trust myself to maintain a working build process.

    8 hours ago, drjdpowell said:

    Be careful if you use the NI function to get your binary data back, as it has bug in that will truncate the string at the first zero, even though that zero is properly escaped as \u0000.  Png files might or might not have zeros in them, but other binary things do (flattened LVOOP objects, for example).

    Oh, meh. How irritating. I don't think the PNGs do, but it's worth checking. That's the part of the system I haven't really gotten around to testing properly yet :/
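    For what it's worth, the \u0000 escape itself is perfectly valid JSON, and a conforming parser round-trips it, which is easy to demonstrate in Python -- so the truncation really is a quirk of that particular function, not the format:

```python
import json

# JSON can carry a NUL via the \u0000 escape; a correct parser round-trips it.
binary_ish = "PNG\x00data"   # stand-in for escaped binary content
encoded = json.dumps(binary_ish)
print(encoded)               # "PNG\u0000data"
assert json.loads(encoded) == binary_ish
```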

  14. 6 hours ago, ShaunR said:

    I don't use any of them for this sort of thing. They introduced the JSON extension as a build option in SQLite so it just goes straight in (raw) to an SQLite database column and you can query the entries with SQL just as if it was a table. It's a far superior option (IMO) to anything in LabVIEW for retrieving including the native one. I did write a quick JSON exporter in my API to create JSON from a query as the corollary (along the lines of the existing export to CSV) but since no-one is investing in the deveopment anymore, I'm pretty "meh" about adding new features even though I have a truck-load of prototypes.

    (And yes. I figuratively wanted to kiss Ton when he wrote the pretty print :D )

    I'm stuck with plain files until NI moves PXI over to Linux RT (I haven't heard any official confirmation this will happen; I'm just assuming they didn't decide to upgrade the entire cRIO line while leaving their high-performance automated test hardware on a 10-year-old OS). It sounds like Pharlap doesn't support SQLite.

    2 hours ago, drjdpowell said:

    I've had a client hand me a 4GB JSON array, so I'm OK for large test cases. :D

    Ah, so that's what you mean by large ;). Nothing like that on my end, but it occurs to me that one of the things I'm doing (in the category of 'stuff I might just flatten to string') is streaming images from a server to a client. Basically I flatten the image separately to a PNG and then put that in a JSON object with some metadata (time, format, etc.). My point here is that as part of the JSON generation step, I'm passing in a large binary string which has to be escaped and handled by the Flatten To JSON function. I realize this is probably a bit unusual, but I thought I'd mention it.

  15. 22 hours ago, drjdpowell said:

    I’m working on a new JSON library that I hope will be much faster for my use cases.  It skips any intermediate representation (like the LVOOP objects in the LAVA JSON API, or the Variants in some other JSON toolkits) and works directly on JSON text strings.  I’d like to avoid making a tool just for myself, so I’d like to know what other people are using JSON for.  Is anyone using JSON for large(ish) data?  Application Config files?  Communication with remote non-LabVIEW programs?  Databases?

    I usually use it for config files or for any debug information (web service output, syslog messages, etc.) which might be read by a human. I'm not sure what quantity makes the data 'large', but it could certainly be a few pages of data if you have arrays. For right this moment, I'm also using it for TCP messages, but I may swap that over to flattened strings -- even if there's no real reason, as a rule I try to avoid using LabVIEW-proprietary formats. For the config use, performance isn't a huge deal, but for everything else I feel like the LAVA API is too slow for me to run to it for everything. This may be unfair, but in general for those uses I'll pull out the built-in Flatten To JSON.

    One thing I can say for sure is I've never needed the in-memory key-value features of the LAVA API. I just use the JSON stuff as an interchange format, so all those objects only ever go into one function. The other issue I've had with it is deploying to RT... LabVIEW doesn't like some objects on RT, and the LAVA API fits in that category. Unsure why, but it caused a lot of headaches a few months back when I tried to use it -- I ended up just reverting.

    Associated with my main usage, the things I'd love to see are:
    1-Handle enums, timestamps, and similar common types without being whiny about how they're not in the standard, like the built-in API is.
    --->This is just because I generally do a quick flatten/unflatten for the config files, syslog, and TCP messages. Using the built-in API, you have to manually convert every offending element, which soaks up any speed boost it gives you.
    2-Discover and read optional components (technically possible to read optional values with the LAVA API, but pretty wasteful and also gross; unless there is magic I don't know about, there is no way to discover them with the built-in API).
    --->Again on the config side, being able to pull a substring out as a 'raw JSON object' or something and pass that off to a plugin would let you nicely format things that might change. On the generation side, letting the plugin return a plain JSON object and appending that into the tree is handy too. For the higher-speed code I guess I don't really need this.
    3-I love the LAVA API's pretty-print.
    --->It's just handy for debugging, and for the config files it's nice to be able to easily read them. Not important for the TCP/syslog use cases. (It occurs to me it would be easy to use the LAVA API for this too, since for config files the slower speed doesn't matter so much.)
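    For item 1, the behavior I mean is handling these types transparently; in Python terms (purely as an illustration of the desired behavior, with made-up type names), something like:

```python
import json
from datetime import datetime, timezone
from enum import Enum

class Mode(Enum):  # hypothetical enum, e.g. a state machine's state
    IDLE = 0
    RUNNING = 1

def to_jsonable(obj):
    """Map enums to their names and timestamps to ISO-8601 strings
    instead of erroring out on non-standard types."""
    if isinstance(obj, Enum):
        return obj.name
    if isinstance(obj, datetime):
        return obj.isoformat()
    raise TypeError(f"not serializable: {obj!r}")

cfg = {"mode": Mode.RUNNING, "updated": datetime(2017, 1, 1, tzinfo=timezone.utc)}
print(json.dumps(cfg, default=to_jsonable))
# {"mode": "RUNNING", "updated": "2017-01-01T00:00:00+00:00"}
```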

  16. 3 hours ago, drjdpowell said:

    Anybody used Postgresql?  

    I was under the impression that the main advantage of Postgres is that you can write modules for it to do fast custom processing on the database side (i.e. move the code to the data). If you just want a standard SQL database, I got the impression Postgres is only OK.

  17. MySQL with the TCP connector (https://decibel.ni.com/content/docs/DOC-10453) so the cRIOs can talk to it for central data storage. For some queries (historical data) the Database Connectivity Toolkit is faster, but MySQL is slow as a historical database server anyway, so I probably won't use it for that in the future -- it took a lot of tweaking and a lot of RAM to get it to work at all.

    I may end up using your SQLite library for configuration data on my next project, but I haven't gotten around to checking that it supports all the OSes I need (definitely Pharlap, maybe VxWorks).

  18. 18 hours ago, rolfk said:

    Can you elaborate more about what you mean with sysconfig? For me that is a configuration file under /etc/sysconfig on most *nix systems not a DLL or shared library I would know off. From the name of it, if it is a DLL, it would probably be a highly UI centric interface meant to be used from a configuration utility, which could explain the synchronous nature of the API, since users are generally not able to handle umptien configuration tasks in parallel. :D But then I wonder what in such an API could be necessary to be called continously in normal operation and not just during reconfiguration of some hard- or software components.

    Oh... no, I mean the NI System Configuration API, for managing software etc. on RT targets and, in theory, doing other stuff with hardware (people who really use it for that totally get 100 pts extra credit). Very creative name obviously, and the shortened form is nisyscfg: http://zone.ni.com/reference/en-XX/help/373107E-01/nisyscfg/software_subpalette/

    Basically everything is synchronous and everything totally ignores its timeout. To quote the help:

    connect timeout in ms specifies the time in milliseconds that the VI waits before the operation times out. The default is 4000 ms (4 s). In some cases, this operation may take longer to complete.

    *By 'some cases' they mean 'pretty much all cases', and by 'longer' they mean 'go fetch a snack'.

     

  19. 17 hours ago, demess said:

    I am working on implementing Labview to a Data Distribution Service (DDS) API  which will be later used in conjunction with TestStand. I have a few  dll files which was written in c# and managed c++.  When I tried calling the dll files in LabView, the methods seemed to be missing.

    On the other hand I wrote a simple c# code and I am able to call the dlls with the respective methods. I believe it is the third party unmanaged c++ which is causing the problems. Do I need a dll wrapper for this to work?

    Just to be sure, are you aware of the existing LabVIEW API for DDS? http://sine.ni.com/nips/cds/view/p/lang/en/nid/211817

    It sounds like you're trying to implement something custom, so I'm guessing it won't work for you, but it only takes a moment to double-check :)
