Everything posted by smithd

  1. Out of curiosity, what are you using this for? I've recently been looking at ZooKeeper/etcd/Consul, which aren't hash tables but distributed key-value stores used for service discovery and 'global' configuration. I need a way to locate which one of many distributed devices is producing which data, and I thought one of those might be a good fit for the job. In this case, these tools have an HTTP interface, so the 'interface with LabVIEW' part is easy. But I am curious how your DHT use case compares with this service discovery use case.
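For comparison, here's what that lookup can look like on the client side: a minimal Python sketch that parses a response in the shape of Consul's HTTP catalog API (GET /v1/catalog/service/&lt;name&gt;). The node names, addresses, and ports below are made up; only the field names follow Consul's documented response format.

```python
import json

# Hypothetical response body in the shape of Consul's catalog API;
# the node names and addresses are invented for illustration.
sample_response = json.dumps([
    {"Node": "daq-01", "Address": "10.0.0.11", "ServicePort": 5000},
    {"Node": "daq-02", "Address": "10.0.0.12", "ServicePort": 5000},
])

def locate_producers(body):
    """Map node name -> (address, port) for a discovered service."""
    return {e["Node"]: (e["Address"], e["ServicePort"])
            for e in json.loads(body)}

producers = locate_producers(sample_response)
```

Since the real service exposes this over plain HTTP, the LabVIEW side only needs an HTTP GET plus JSON parsing, which is the "easy" part mentioned above.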
  2. Just based on your description, it sounds like you should start with this: it takes a variant, and there is a function in there which converts the variant into a tree. There is a class in there (VariantTree__VP) with a method for setting which tree the class points to (the Node property), and then a function BrowseVariant which populates that tree with the data in the variant. Once in tree form, it's just a bunch of strings and should be easy to access in whatever way you want. This includes the built-in LabVIEW functionality for drag and drop between trees and tables. If you need more custom behavior, you can implement the drag and drop events as described here: http://zone.ni.com/reference/en-XX/help/371361J-01/lvhowto/drag_and_drop_programmatic/
  3. The nice thing to do would be to try each change, and if the VI breaks (and wasn't broken before) you can undo it. The other thing is that there are really only a few possible combinations, so instead of replicating the horrible VI Properties menu you could make a list of the possible combos...I did something like that here: https://decibel.ni.com/content/docs/DOC-43737 It's in the execute function (I wouldn't look at the menu-maker function; it's pretty gross).
  4. Isn't the radio-button approach still static? I'd think you would at least use a listbox so you could easily add tabs. For our configuration editor we used a tree control and subpanel (http://www.ni.com/example/51881/en/), with a sample implementation shown on this page (https://decibel.ni.com/content/docs/DOC-47483 , 3rd image). That might be a bit complicated for simple editors, but to my mind that's what the tab control is for.
  5. Ugh, this is really killer. I had issues upon issues with transferring images, and then right when I thought I had a solution, this hit me. Meh. The real reason I'm posting is just to bump this thread and see how JSONtext is coming. It looks like you're still pretty actively working on it on Bitbucket...do you feel more confident about it, or would you still call it "VERY untested"? I'd love to try it out for real when you get closer to, shall we say, a 'beta' release. Also, from what I could tell there isn't a license file in the code. Are you planning on licensing it any differently from your other libraries, or did you just never get around to putting in a file?
  6. Yeah, they're very nice controls. I'm using them for my current project, and they really de-LabVIEW the user interface.
  7. If I had to guess, it still has to do all the work of copying data into the control (so you're still inserting into a rolling buffer for charts, still copying big arrays for graphs), but you save time by not having it redraw. For big data like graphs, you can check the state of the tab control before writing, and I have seen that help in big applications. I don't know if you are looking at some specific thing he said elsewhere, but in general I disagree. If you know what you want to put on the screen, there is nothing wrong with tabs, and subpanels overcomplicate the situation because you've gone from 1 UI loop to N.
  8. There is an Example Finder example of reading and writing from a single file. I think it's called 'concurrent TDMS' or something. 2015 or 2016 also added in-memory TDMS, which can be flattened to a file at the end, so that might work for you too. As for memory usage, keep in mind memory isn't expensive these days, and an analog value can usually be stored as a 4-byte SGL without losing useful data*. (*I don't have verification of this anywhere, but at one point a coworker went through every C Series module and concluded that there are no modules for which you need a DBL float -- ie, replace "useful data" above with "any data".) If possible, it is always easiest to just keep all your data in memory. The biggest challenge is that LabVIEW somehow still isn't entirely 64-bit, but if you're not using RT/FPGA I think you can get away with it these days...gigs and gigs of data to work with.
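To put rough numbers on the SGL-vs-DBL point, here's a quick sketch using Python's stdlib array module as a stand-in (the 4- and 8-byte widths are just standard IEEE single/double precision; nothing here is LabVIEW-specific):

```python
from array import array

# Stand-in analog data: 100k samples ramping from 0 to ~100
samples = [i * 0.001 for i in range(100_000)]

as_dbl = array('d', samples)  # 8-byte double, like a LabVIEW DBL
as_sgl = array('f', samples)  # 4-byte float, like a LabVIEW SGL

dbl_bytes = as_dbl.itemsize * len(as_dbl)  # 800,000 bytes
sgl_bytes = as_sgl.itemsize * len(as_sgl)  # 400,000 bytes: half the memory

# Worst-case rounding error introduced by storing these values as SGL;
# for values of this magnitude it stays in the micro-range, far below
# the noise floor of typical analog hardware.
max_err = max(abs(a - b) for a, b in zip(samples, as_sgl))
```

The memory halves outright, and the error introduced is orders of magnitude below what a typical DAQ front end can resolve, which is the argument the post makes.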
  9. TDMS files sound like a reasonable choice for that kind of data, especially at low rates, but so does SQLite. The pros of TDMS seem to be that you can hand a file off to someone and they can open it in Excel, while for SQLite you need to make a reader. TDMS is also a bit more flexible about schema: while you wouldn't want to change channel counts within a file, for example, you can change them between files without much work. For a database you'd need to come up with a schema that makes sense for you (this doesn't mean it's complicated; it probably just means you have the columns "time", "channel number", and "value"). SQLite lets you query easily for different conditions, while for TDMS you'd have to write all the query logic yourself. Neither of the cons is particularly bad for the situation you've described. Your last point (write TDMS files, then store records of which files hold what data) is basically what DIAdem/DataFinder do for TDMS, as I understand them. So depending on your particular work environment, you may already have access to those tools, which may save you some time.
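A minimal sketch of the time/channel/value schema suggested above, using Python's built-in sqlite3 module (the table name and sample data are made up; the columns are the ones named in the post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a file path in real use
conn.execute("""CREATE TABLE samples (
    time    REAL,     -- seconds since start (or epoch)
    channel INTEGER,  -- channel number
    value   REAL      -- measured value
)""")
conn.executemany(
    "INSERT INTO samples VALUES (?, ?, ?)",
    [(0.0, 0, 1.5), (0.0, 1, 2.5), (1.0, 0, 1.6), (1.0, 1, 2.4)],
)

# The kind of ad-hoc conditional query that's one line of SQL here
# but hand-written search logic with a TDMS file:
rows = conn.execute(
    "SELECT time, value FROM samples WHERE channel = ? AND value > ?",
    (1, 2.45),
).fetchall()
```

Three columns really is the whole schema; the querying flexibility is what you're buying over TDMS.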
  10. The OpenG libraries are 'Compatible with LabVIEW', which means they meet these standards: https://decibel.ni.com/content/docs/DOC-8981 but otherwise, no clue. Definitely a weird requirement. Depending on the specific requirements, you might be able to narrow the restriction by only using code built from native LabVIEW functions (ie no CLFN/DLL calls). Then you can figure out what types of malicious things LabVIEW could do -- for example, file I/O functions are probably out, as would be any code which uses VI Server (it could call other code dynamically). Both of these are pretty easy to verify. From the product page, this leaves you:
-Array manipulation
-String manipulation
-Application control
-File handling
-Zip files
-Timing tools
-MD5 digest implementation
-Error handling
-Variant and flattened data manipulation
Still a pretty good set of tools, and I can't think of any way this could affect other machines in a malicious way. Of course, if the definition is 'you must inspect every function'...well, have fun.
  11. First, as mentioned, I'd use an async call per target. You can always parallelize the calls internally later; at minimum, launching per target means you don't get hung up if one target is slow. If you search Example Finder for 'async', I think there is a good example that either is, or used to be, called 'benchmarking asynchronous calls'. It talks to a set of web servers over HTTP and demonstrates the performance advantages and disadvantages of each approach. Keep in mind while looking at the code that fetching google.com is a different profile than fetching 100 100-MB files, but the example is still good. Then you need to decide on your transfer mechanism. I think you can mount network share drives and have Windows copy files for you, but I'm not 100% sure, and I've no idea about performance. The other good APIs built into LabVIEW are FTP, HTTP, and WebDAV. For HTTP I've used Apache, and for FTP I've used FileZilla. I've never set up a WebDAV server, but it's basically HTTP and appears to be built into Windows. Each protocol has its ups and downs:
HTTP(S): High overhead, though probably not a big deal with a dedicated server, a closed network, and large files. The biggest issue is the sequential request-response nature, meaning you need to create multiple connections and request in parallel, just like your web browser does. This is where the parallel for loop can come in handy. Note that each handle in the API has a mutex of some kind, so you have to create N handles rather than making N parallel requests on 1 handle. Another issue, which you may or may not hit, is that HTTP uses DLL calls, and each DLL call blocks the thread it runs in. If you have too many outstanding requests, suddenly your application locks up until one of them completes.
FTP: The functions are old and you probably don't want to look inside, but they work and are pretty quick. Has similar issues to HTTP in that there's a good amount of overhead for every file. The API, if I remember correctly, has a function called 'get multiple files' which literally just fetches the files one by one in sequence, so you'll have to parallelize this for good performance too. It just uses TCP calls under the hood, so you don't have the DLL lock-up issue.
WebDAV: The base functions use DLL calls, but you can avoid that issue with the async API, where you register for events on a set of requests. When a file transfer completes, the event fires and you handle it. This is pretty fast, and you don't have to do much besides tell it what to download. Not sure how overall performance compares to FTP; the individual low-level calls are about on par, slightly slower than FTP in my tests.
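The launch-per-target idea is language-agnostic; here's a Python sketch using a thread pool, with a dummy fetch function standing in for the real HTTP/FTP/WebDAV transfer (the target names and the 'bytes transferred' return value are made up):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Made-up target list; each entry stands in for one remote device.
targets = ["host-a/file1.bin", "host-b/file2.bin", "host-c/file3.bin"]

def fetch(target):
    # Stand-in for the real transfer (HTTP GET, FTP retrieve, WebDAV...).
    # Because each target gets its own task, a slow target only delays
    # its own result, never everyone else's.
    return target, len(target)  # pretend byte count

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(fetch, t) for t in targets]
    results = {}
    for fut in as_completed(futures):  # handle whichever finishes first
        target, nbytes = fut.result()
        results[target] = nbytes
```

This is the same shape as N async VI calls or a parallel for loop over N HTTP handles: one worker per transfer, results collected in completion order.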
  12. Careful, if you don't use a CLI you have to come up with your own new, custom solution to making your code hard to use and your features undiscoverable. Stop reinventing the wheel people!
  13. You could try using motion JPEG (MJPEG) or one of the formats supported by your computer per this function: http://zone.ni.com/reference/en-XX/help/370281U-01/imaqvision/imaq_avi2_get_codec_names/ I'm assuming that if IMAQ can't find a codec there, the open-AVI function will fail.
  14. If time isn't important, the easiest thing would probably be to break it into two parts: first find and replace, then do the rest of your processing. However, it doesn't sound like you have a really big file (maybe a few tens of MB), so if you can post a screenshot of your core loop we could provide more specific feedback on how to avoid data copies. If you can't, then drjd's recommendation is the right one, with two additions:
-If it wasn't clear from his post, a lot of the string functions don't actually produce output strings if you don't wire their outputs. So for something like Match Pattern (I think that's one of the ones where it works) you can say "look through my 10 MB string for '|'" and it won't actually allocate two new X-MB strings -- it will just tell you "I found it at index N".
-If the mystery string is in a file already, you can read it in line by line (if it has lines) or chunk by chunk (it looks like each chunk is a fixed size, but even if it isn't you can still do this; you just have to be sure to use the leftovers).
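The chunk-by-chunk reading with leftovers can be sketched like this (the '|' delimiter and the tiny chunk size are arbitrary choices for illustration):

```python
import io

def stream_records(fobj, delimiter=b"|", chunk_size=8):
    """Yield delimiter-separated records from a file-like object,
    carrying any partial record (the 'leftover') across chunk
    boundaries so records split by a read are reassembled."""
    leftover = b""
    while True:
        chunk = fobj.read(chunk_size)
        if not chunk:                      # end of file
            if leftover:
                yield leftover             # final record has no delimiter
            return
        parts = (leftover + chunk).split(delimiter)
        leftover = parts.pop()             # last piece may be incomplete
        yield from parts

# 'delta' straddles two chunks here, but comes out whole.
data = io.BytesIO(b"alpha|bravo|charlie|delta")
records = list(stream_records(data))
```

The same carry-the-leftover pattern works in any language; it's what keeps you from ever holding more than one chunk plus one partial record in memory.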
  15. The only real answer is "the reverse of sending", but the data has to be something reasonable for Python to parse. If you are flattening the data in LabVIEW to binary rather than to a more standard interchange format (I didn't look at the code), you should make sure you understand how LabVIEW stores data in memory. Also be careful: Flatten To String defaults to big-endian byte order and to prepending lengths to everything. It might be easiest to look at this example: https://decibel.ni.com/content/docs/DOC-47034 or https://github.com/ni/python_labview_automation and this may be useful too: https://decibel.ni.com/content/docs/DOC-46761
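For illustration, here's how those two defaults (big-endian order, prepended sizes) play out when parsing a flattened 1D DBL array on the Python side with the struct module. The payload here is constructed in Python to match that layout rather than captured from LabVIEW:

```python
import struct

# Layout of a flattened 1D DBL array with default options:
# a big-endian i32 element count, then big-endian 8-byte doubles.
payload = struct.pack(">i", 3) + struct.pack(">3d", 1.0, 2.5, -4.0)

def unflatten_dbl_array(buf):
    """Parse a big-endian, length-prefixed array of doubles."""
    (count,) = struct.unpack_from(">i", buf, 0)       # prepended length
    return list(struct.unpack_from(f">{count}d", buf, 4))

values = unflatten_dbl_array(payload)
```

The `>` in the format strings is the big-endian marker; forgetting it (or the 4-byte length prefix) is the classic way this interop goes wrong.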
  16. Yeah, I've tried to compile things for VxWorks. Even simple things suck. I know Pharlap is just crappy Windows 95, but I'd still rather not edit the source to get it working -- I don't trust myself to maintain a working build process. Oh meh. How irritating. I don't think the PNGs do, but it's worth checking. That's the part of the system I haven't really gotten around to testing properly yet :/
  17. I'm stuck with plain files until NI moves PXI over to Linux RT (I haven't heard any official confirmation this will happen; I'm just assuming they didn't decide to upgrade the entire cRIO line while leaving their high-performance automated test hardware on a 10-year-old OS). It sounds like Pharlap doesn't support SQLite. Ah, so that's what you mean by large. Nothing like that on my end, but it occurs to me one of the things I'm doing (in the category of 'stuff I might just flatten to string') is streaming images from a server to a client. Basically I flatten the image separately to a PNG and then put that in a JSON object with some metadata (time, format, etc...). My point here is that as part of the JSON generation step, I'm passing in a large binary string which has to be escaped and handled by the flatten-to-JSON function. I realize this is probably a bit unusual, but I thought I'd mention it.
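As a sketch of that binary-inside-JSON step: one common alternative to escaping the raw PNG bytes is base64-encoding them first, which keeps the JSON payload plain ASCII at the cost of ~33% size. The post itself passes the flattened string straight through the JSON escaper; the metadata field names and timestamp below are made up:

```python
import base64
import json

# Stand-in for a real flattened PNG (header bytes plus padding).
png_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16

# Server side: wrap the image and its metadata in one JSON message.
message = json.dumps({
    "time": 1700000000.0,   # made-up timestamp metadata
    "format": "png",
    "data": base64.b64encode(png_bytes).decode("ascii"),
})

# Client side: parse the message and recover the original bytes.
parsed = json.loads(message)
decoded = base64.b64decode(parsed["data"])
```

Base64 avoids the per-byte escape-sequence blowup that raw binary can hit inside JSON strings, at a fixed, predictable size cost.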
  18. I usually use it for config files or for any debug information (web service output, syslog messages, etc.) which might be read by a human. I'm not sure what quantity makes the data 'large', but it could certainly be a few pages of data if you have arrays. Right now I'm also using it for TCP messages, but I may swap that over to flattened strings -- even if there's no real reason, as a rule I try to avoid LabVIEW-proprietary formats. For the config use, performance isn't a huge deal, but for everything else I feel like the LAVA API is too slow for me to reach for it every time. This may be unfair, but in general for those uses I'll pull out the built-in Flatten To JSON. One thing I can say for sure is that I've never needed the in-memory key-value features of the LAVA API. I just use the JSON stuff as an interchange format, so all those objects only ever go into one function. The other issue I've had with it is deploying to RT...LabVIEW doesn't like some objects on RT, and the LAVA API fits in that category. Unsure why, but it caused a lot of headaches a few months back when I tried to use it -- I ended up just reverting. Given my main usage, the things I'd love to see are:
1-Handle enums, timestamps, and similar common types without being whiny about how they're not in the standard, like the built-in API is.
--->This is just because I generally do a quick flatten/unflatten for the config files, syslog, and TCP messages. Using the LAVA API you have to manually convert every offending element, which soaks up any speed boost you get from using the built-in one.
2-Discover and read optional components (technically possible to read optional components with the LAVA API, but pretty wasteful and also gross; unless there is magic I don't know about, there is no way to discover them with the built-in API).
--->Again on the config side, being able to pull a substring out as a 'raw JSON object' or something and pass that off to a plugin would let you nicely format things that might change. On the generation side, letting the plugin return a plain JSON object and appending that into the tree is handy too. For the higher-speed code I guess I don't really need this.
3-I love the LAVA API's pretty-print.
--->It's just handy for debugging, and for the config files it's nice to be able to easily read them. Not important for the TCP/syslog use cases. (It occurs to me it would be easy to use the LAVA API for this too, since for config files the slower speed doesn't matter so much.)
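The optional-component discovery and 'raw JSON object' passthrough described above are trivial in most JSON libraries; a Python sketch of the idea, with made-up key names:

```python
import json

# Made-up config with an optional 'plugin' sub-object.
config = json.loads(
    '{"sample_rate": 1000, "plugin": {"gain": 2.5, "filter": "lowpass"}}'
)

# Discovery: just test for the key, with a default if it's absent.
rate = config.get("sample_rate", 1000)
plugin_cfg = config.get("plugin")   # None when the component is missing

# 'Raw JSON object' passthrough: re-serialize the subtree and hand it
# to the plugin, without the host ever knowing the plugin's schema.
raw_for_plugin = json.dumps(plugin_cfg) if plugin_cfg is not None else "{}"
```

The host only knows that a `plugin` key may exist; everything inside it stays opaque, which is exactly what makes plugin-formatted config sections possible.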
  19. The tree is just an XNode, if I remember correctly. I know I pulled it out for a similar purpose and it worked fine -- one small issue, I think related to empty variants, or variants of variants, something like that, but otherwise great.
  20. I was under the impression that the main advantage of postgres was if you were willing to write modules for it to do fast custom processing on the database side (ie move the code to the data). If you just want a standard sql database I got the impression postgres was only ok.
  21. mysql with the tcp connector (https://decibel.ni.com/content/docs/DOC-10453) so the cRIOs can talk to it for central data storage. For some queries (historical data) the db connectivity toolkit is faster, but mysql is slow as a historical database server anyway so I probably won't use it for that in the future -- it took a lot of tweaking and a lot of ram to get it to work at all. I may end up using your sqlite library for configuration data on my next project but I haven't gotten around to checking that it supports all the OSs I need (definitely pharlap, maybe vxworks).
  22. Oh...no. I mean the NI System Configuration API, for managing software etc. on RT targets, and in theory for doing other stuff with hardware, but people who really use it for that totally get 100 pts extra credit. Very creative name, obviously, and the shortened form is nisyscfg: http://zone.ni.com/reference/en-XX/help/373107E-01/nisyscfg/software_subpalette/ Basically everything is synchronous and everything totally ignores its timeout. To quote the help: "connect timeout in ms specifies the time in milliseconds that the VI waits before the operation times out. The default is 4000 ms (4 s). In some cases, this operation may take longer to complete." *By 'some cases' they mean 'pretty much all cases', and by 'longer' they mean 'go fetch a snack'.
  23. Just to be sure, are you aware of the existing LabVIEW API for DDS? http://sine.ni.com/nips/cds/view/p/lang/en/nid/211817 It sounds like you're trying to implement something custom, so I'm guessing it won't work for you, but it only takes a moment to double-check.