Everything posted by smithd

  1. I wonder who would go to the trouble of embedding secret code into the VI format when you can just put code on the block diagram and hide it behind a structure. Well, that's kind of always been the case regardless of this vulnerability. It's code; you should only run code from trusted sources or after inspection. It's also funny that the vulnerability page shows LabVIEW NXG, which gets rid of the VI format entirely.
  2. Can you explain your goal? Do you want a continuous stream of images to Python, or just a sequence? What's the purpose of the time interval array? Are those actually all different delays, or fixed? Is there a reason your camera is plugged into a cRIO rather than into the computer with the Python script? What is the latency you can permit between acquiring some images and getting them on the Python side?

     If you can plug your camera into the computer, I'd just use: https://pypi.org/project/pynivision/

     If you need to stream them for whatever reason, and the time intervals are all constant, I'd suggest setting your camera to produce data at exactly that frame rate and not timing the loop at all (which doesn't do much anyway, since you are asking IMAQ for a buffer number rather than the latest frame). If the time intervals are not constant but can be made multiples of some common factor, do the same thing but use buffer number * N to pull out the frames you care about.

     If you can handle high latency, which it seems like you can, maybe just write the images to disk and then use WebDAV or FTP to pull the files off from your host computer? This would be a lot more straightforward to implement (see the sketch below).
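     A minimal sketch of the write-to-disk-and-pull approach, assuming the cRIO is already saving frames to a folder and runs an FTP server; the address and paths here are made up:

         from ftplib import FTP

         # Pull every image the cRIO has written so far (host/path are assumptions)
         with FTP("192.168.1.10") as ftp:
             ftp.login()  # anonymous login; pass user/passwd if configured
             for name in ftp.nlst("/c/images"):
                 local = name.rsplit("/", 1)[-1]
                 with open(local, "wb") as f:
                     ftp.retrbinary("RETR " + name, f.write)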
  3. https://forums.ni.com/t5/LabVIEW-Shortcut-Menu-Plug-Ins/Replace-Value-Property-with-Local-Variable-llb/ta-p/3538829

     You may wish to edit it to support all property nodes, not just "Value" properties, but that's not hard (it's most likely just deleting a bunch of code that filters the results).

     Start here: https://forums.ni.com/t5/LabVIEW-Shortcut-Menu-Plug-Ins/NIWeek-2015-Presentation-on-Shortcut-Menu-Plug-ins/ta-p/3521526
     Install: https://forums.ni.com/t5/LabVIEW-Shortcut-Menu-Plug-Ins/How-to-install-plug-ins-that-you-download/ta-p/3517848
  4. What? @Aristos Queue may be interested in this, and/or be able to shed light on how to do it better.
  5. Today I learned that LabVIEW aborts connections immediately even if you don't wire in abort, because... "abort is reserved for future use." (per the help)
  6. Reshape Array + Index Array, inside of a Diagram Disable Structure. Reshape Array takes an array of any dimension and returns a 1-D array; Index Array takes the 1-D array and returns an element. The disable structure makes sure the code doesn't run and that you always get the default value for the data type. Any aggregation function (Add Array Elements, Array Max & Min, etc.) will work too, but I think the reshape/index version is clearer.
  7. Many of the calls are documented in the help, so yes, I'd say we are supposed to know. Some of the pointer calls are wrapped here. Chapter 6 of this also describes the functions all in one place, although it's quite old: http://www.ni.com/pdf/manuals/370109a.pdf
  8. Strangely enough it's just a property: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000004Aa2SAE

     I don't know if we are thinking of the same thing, but what came to mind with software triggering was this procedure:
       1. Change the trigger mode on the camera to software trigger
       2. Read the buffer #
       3. Set the exposure time
       4. Read back the exposure time (poll until it equals the new value)
       5. Fire a software trigger
       6. Ignore that new image
       7. Turn the trigger mode back to hardware triggering (or free run, if that's what you use)

     Seems like this would be a pretty fool-proof way of ensuring the change? If it works you can then start removing things (for example, I think 5/6 are redundant since you paused triggering in 1). A rough sketch of the sequence is below.
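     A rough sketch of that sequence in Python. Every helper here (set_attr, get_attr, send_software_trigger, discard_buffer) is hypothetical, standing in for whatever your camera wrapper actually exposes; only the ordering is the point:

         import time

         def change_exposure(cam, new_exposure, timeout_s=2.0):
             set_attr(cam, "TriggerMode", "Software")       # 1. pause hardware triggering
             buf = get_attr(cam, "LastBufferNumber")        # 2. note where the buffers are
             set_attr(cam, "Exposure", new_exposure)        # 3. request the change
             deadline = time.time() + timeout_s
             while get_attr(cam, "Exposure") != new_exposure:  # 4. poll until applied
                 if time.time() > deadline:
                     raise TimeoutError("exposure change never took effect")
                 time.sleep(0.01)
             send_software_trigger(cam)                     # 5. fire one frame...
             discard_buffer(cam, buf + 1)                   # 6. ...and throw it away
             set_attr(cam, "TriggerMode", "Hardware")       # 7. resume normal triggering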
  9. The thread just below this one would be a possible option (it's not browserless, it just doesn't need a plugin).
  10. Don't know if it's exactly what you're looking for, but Octopus Deploy has some features in common. Ansible does too, but I think it's more like Salt in terms of needing more dev effort. Edit: oh, and Puppet.
  11. If the value is null, you could probably just leave it out of the insert (i.e. insert using a 3-element array and a 3-element cluster, leaving out the null value). Worst comes to worst, you could create the insert string yourself and execute it, but I know that's not ideal. The sketch below shows the idea.
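     A minimal sketch of building the insert from only the non-null values, using Python's sqlite3 as a stand-in engine (the table and column names are made up):

         import sqlite3

         conn = sqlite3.connect(":memory:")
         conn.execute("CREATE TABLE readings (a REAL, b REAL, c REAL, d REAL)")

         def insert_reading(conn, values):
             # Keep only the non-null columns; omitted columns default to NULL
             present = {k: v for k, v in values.items() if v is not None}
             cols = ", ".join(present)
             marks = ", ".join("?" for _ in present)
             conn.execute("INSERT INTO readings (%s) VALUES (%s)" % (cols, marks),
                          list(present.values()))

         insert_reading(conn, {"a": 1.0, "b": None, "c": 3.0, "d": 4.0})  # b omitted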
  12. I may be misreading it, but it sounds like you just want to:
       1. Clone the repo
       2. Delete all the code (leaving the .git folder, and presumably the readme etc. if one exists)
       3. Copy in your code
       4. Commit (git add --all; git commit)
  13. There's also... a "solutions" folder on the GitHub repo.
  14. Depends on what you mean by "test data".

     If you just mean data, there's a small company that has a web service platform (I think they do hosted AWS, but I may be wrong): https://www.daq.io Or you could ask your sales rep about the NI data management software suite: http://www.ni.com/data-management-software-suite/ You could also use a more non-LabVIEW option, like Influx Cloud: https://www.influxdata.com/products/editions/

     If you mean test data like "textual logs of stuff", then the common open source option would probably be Elasticsearch https://www.elastic.co/products/elasticsearch which I've read and heard is hard to set up, but might be easy enough if you use Bitnami: https://bitnami.com/stacksmith which is how I exclusively set up anything I want to use that consists of more than a single executable. Docker might also be an option in that realm.

     If you mean test data like "TestStand reports", maybe the new SystemLink from NI would be a good fit: http://www.ni.com/en-us/shop/electronic-test-instrumentation/application-software-for-electronic-test-and-instrumentation-category/systemlink/monitor-automated-tests-with-systemlink.html This also supports the plain data use case by letting you upload TDMS files and store tags to be displayed in a dashboard.

     Edit: Just because, I thought I'd google "test result storage database" and it came up with this: https://maintainable.com ...for a cloud-based company their website is odd. But they have a bullet point: "Integrates with industry software like NI TestStand".
  15. I don't know how secure the database itself would be when exposed to the internet. I suppose in theory it could be hardened, but there's a lot of surface area you have to understand. I'd lean towards a web server myself, as this lets you tightly restrict what you're going to let clients do. For encryption, I know the LabVIEW HTTP client supports TLS, and it looks like MySQL does too, if configured; from LabVIEW you'd have to use the ODBC/ActiveX connector, since LabVIEW doesn't have built-in TLS support (just the HTTP client). HTTPS is almost certainly more firewall-friendly.
  16. You have a 2-hour build? I would die. I think my worst single-exe build time has only ever hit 15 minutes, and that's for classes on RT (truly, a mistake).
  17. I don't know if there's a guide, but generally writing a wrapper that is (trivially) LabVIEW compatible means:
       • Determining the thread safety of the underlying code (and trying to keep things thread safe if possible -- the alternative is that every DLL call needs to run in the UI thread).
       • Only exposing simple return values (void, numeric, pointer) -- don't try to return a struct by value.
       • Only exposing simple parameter values (string, numeric, string array, numeric array, or pointer) -- passing a struct by value can be done, but you'll have to think about byte alignment on different platforms.
       • Trying to keep things synchronous if possible. Any callbacks have to be part of the wrapper. LabVIEW exposes two message mechanisms which can be called from C: firing an occurrence and firing off a user event. If your underlying library uses a callback mechanism to notify clients that an operation has completed, you would write a callback inside your wrapper which fires the occurrence or generates the user event when the operation is complete. Presumably the LabVIEW code would then either have the data (as part of the event) or call the DLL to grab the resulting data.

     The program files/ni/labview/cintools directory contains various headers you can use to access some of the innards of LabVIEW, including the above. There used to be a help page on ni.com, but Google can no longer find it. Included in the tools are also the definitions of LabVIEW arrays and LabVIEW strings, which could make manipulation of large data sets easier/more efficient.
  18. You could probably get better results by using the quotient function you have down below as your double value, rather than doing a floating point multiply. You can also just continue with the pattern you have followed for subsecond values: divide seconds by days, days by years, and find the offset from 1904. The hard part is handling things like leap seconds.

     You could try to use Seconds To Date/Time after converting to a standard timestamp using one of these methods:
       1. The FPGA timekeeper has a nanosecond to LabVIEW time conversion, under /timekeeper/utilities: https://forums.ni.com/t5/Sync-Labs/NI-TimeSync-FPGA-Timekeeper-Getting-Started/gpm-p/3543895
       2. Move the bits around and type cast it yourself. The timestamp format is defined here: http://www.ni.com/tutorial/7900/en/ -- { (i64) seconds since the epoch 01/01/1904 00:00:00.00 UTC (using the Gregorian calendar and ignoring leap seconds), (u64) positive fractions of a second }. Your time in ns / 1e9 gives the first value; the remainder of that divide * (2^64) / (1e9) is what you need for the second value, if I'm reading the document correctly (i.e. multiplying the remainder by ~18446744074). You put these in a two-element cluster in the order above, and then type cast to a timestamp. A worked sketch of this math is below.
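     A worked sketch of method 2's math in Python (struct packing stands in for LabVIEW's cluster + type cast; big-endian i64/u64 per the format above):

         import struct

         def ns_to_lv_timestamp(ns_since_1904):
             # i64 whole seconds since 1904-01-01 00:00:00 UTC, then
             # u64 fractional seconds scaled so 1.0 second == 2**64
             seconds, rem_ns = divmod(ns_since_1904, 10**9)
             fraction = rem_ns * 2**64 // 10**9   # same as rem_ns * ~18446744074
             return struct.pack(">qQ", seconds, fraction)

         # e.g. 1.5 s after the epoch -> seconds=1, fraction=2**63
         print(struct.unpack(">qQ", ns_to_lv_timestamp(1_500_000_000)))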
  19. I tried to (and succeeded, for a given value of succeeded) use TestStand as a semi-headless sequencer with a network front-end, but I often think it may have been a mistake. The driving factor behind my reconsideration is that I've now used it for a few months, and the API is a pain to use and it's pretty buggy in important areas. We're not going to switch at this point, because I did eventually get it to work fairly consistently, but that could change if new requests come in -- every new feature seems to take more and more time to get right.

     The buggy part is easy to describe: occasionally my whole LabVIEW instance will hang when I start a VI with a TS app manager control on it, occasionally it will fail to shut down, and I sometimes see bizarre behavior between traces, like on first run a sequence will trace into the process model but on subsequent runs it will jump directly to the main sequence (with no setting changes in between).

     Specific to what I'm doing, I rely on the API for everything, and it's a big challenge to wrap my head around edit vs runtime objects, different trace modes and flags, error vs failure behavior, threading concerns (at what points am I allowed to safely access engine data?), etc. Fundamentally, what I want from TestStand is this set of methods: startExecution(), getStatus(). Unfortunately, getStatus in reality balloons out into a crazy nest of callbacks and API calls, trying to reconstruct the state of the system through a bunch of trace messages.

     Also, specific to my desire to run sequences from the API, there are a ton of modal dialogs which pop up whenever you do something wrong. Most are disable-able (once you figure out all the flags to set), but a few related to closing references are not. Modal dialogs are obviously a challenge to work around if there is nobody standing at the UI ready to press buttons.
  20. It's not so bad, you just need to think of it as a hash table that gets displayed as a tree. I think people get into trouble when they try to think of the tree control as a tree.
  21. I don't know if it's for continuous measurements, but Eli's example here is, I think, a nice reference to look at: https://forums.ni.com/t5/LabVIEW-Development-Best/Measurement-Abstraction-Plugin-Framework-with-Optional-TestStand/ta-p/3531389
  22. Oh yeah, if you have performance needs, writing directly to the database can be quite slow (depending on exactly how you have the database set up, the indices involved, the disks involved, etc.). Rather than logging to SQLite as Shaun does, I just write my data to a binary file as fast as I can, and have a background service shove it into the database as fast as it can -- essentially using the disk as a queue. A sketch of that pattern is below.
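     A minimal sketch of the disk-as-a-queue pattern, assuming fixed-size binary records and SQLite as the destination (names and record layout are made up):

         import sqlite3, struct, time

         RECORD = struct.Struct("<dd")  # (timestamp, value) pairs

         def log_sample(f, ts, value):
             # Hot path: append a raw record to a flat file, nothing else
             f.write(RECORD.pack(ts, value))

         def drain(path, db_path, stop):
             # Background service: batch whatever has accumulated into the DB;
             # stop is a threading.Event used to shut the loop down
             conn = sqlite3.connect(db_path)
             conn.execute("CREATE TABLE IF NOT EXISTS data (ts REAL, value REAL)")
             offset = 0
             while not stop.is_set():
                 with open(path, "rb") as f:
                     f.seek(offset)
                     chunk = f.read()
                 usable = len(chunk) - len(chunk) % RECORD.size
                 if usable:
                     rows = [RECORD.unpack_from(chunk, i)
                             for i in range(0, usable, RECORD.size)]
                     conn.executemany("INSERT INTO data VALUES (?, ?)", rows)
                     conn.commit()
                     offset += usable
                 time.sleep(0.5)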
  23. Technically (although this is absolutely terrible and they should be ashamed) you can copy the package manager + package cache to an offline machine. And yes, this is honestly what they suggest you do: http://www.ni.com/tutorial/53918/en/ My online:offline device ratio is about 1:30, so a few weeks back, when I found out that you can clone the entire cRIO Linux package repository, I immediately did.
  24. Well, specifically it's the return data that's the issue, and the answer is that you have to benchmark for your data set. I'd expect the crossover point to be <1 MB, but it's been a while since I tried it. For inserting data, or fetching configuration, or asking the database to give you a processed result (avg of x over time period T), this works great; for grabbing data in bulk out of the DB it's uselessly slow.
  25. These are part of the design of the database; I wouldn't imagine any API-level tool to have "support" for them except as a direct SQL call. I would absolutely do what you're doing: use a GUI to design the database, and then access it through the API. MySQL Workbench is pretty powerful, but can be confusing. I've always used a tool called HeidiSQL for working with MySQL/MariaDB databases; it's nicer, in my opinion, for learning with.

     Some other thoughts:
       • MySQL has a TCP-based LabVIEW connector (https://decibel.ni.com/content/docs/DOC-10453) which is faster for small interactions.
       • The MySQL command line is a pain to use, but could be better for bulk imports (e.g. from a giant mass of old CSV files). As shown in the link, Heidi can help you there too.
       • PostgreSQL is becoming more and more popular (among other things, MySQL is now owned by Oracle and spent a while sort of languishing -- for my personal load of several TB of indexed data, Postgres performed significantly better out of the box than a somewhat optimized MySQL). If you decided to go this route there are two libraries to consider (in addition to the DB connectivity toolkit).
       • What you described, having a bunch of old measurement data in CSV files and wanting to catalog it in a database-esque way for performance and ease of use, is literally the sales pitch of DIAdem. Like, almost verbatim.

     To your specific questions:
       1. It may be 1 to N files depending on the configuration. Indices can sometimes be stored in separate files, and if you have a ton of data you would use partitioning to split the data up into different files based on parameters (e.g. where year(timestamp)=2018 use partition 1, where year=2017 use partition 2, etc.).
       2. You don't reference the file directly. You usually use a connection string formatted like this: https://www.connectionstrings.com/mysql/
       3. You cannot; you must have a server machine which runs the MySQL service. To the best of my knowledge, the only database which you can put on a share drive and forget about is SQLite, but they recommend against it. I had never used the MDB format before, but it looks like that is similarly accessible as a file.
       4. As with 2, you generally don't edit the files manually. You access the database through the server, which exposes everything through SQL.
       5. They do, but I think it's in the TB range. If you reach a single database file that is over a TB, you should learn about and use partitioning, which breaks it down into smaller files.
       6. Not sure, but I believe the truly giant websites like Google have their own database systems. More generally, they divide the workload across a large number of machines. As an example: https://en.wikipedia.org/wiki/Shard_(database_architecture)
       7. You will need to install the database server somewhere. Assuming you've set up some server that's going to host your data, then you just need the client software. If you use the TCP-based connector mentioned above, that client software is just LabVIEW. However, that connector has no security implementation and gets bogged down with large data sets. If you want to use the database toolkit, you'll need the ODBC connector, and perhaps to configure a data source as shown here, although you may be able to create a connection string at runtime (a connection example is sketched below).
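     For flavor, a minimal sketch of point 2's "name the server, not a file" idea, from Python rather than LabVIEW, assuming the mysql-connector-python package and made-up host/credentials/table:

         import mysql.connector  # pip install mysql-connector-python

         conn = mysql.connector.connect(
             host="db.example.local",   # the machine running the MySQL service
             user="testuser",
             password="secret",
             database="measurements",
         )
         cur = conn.cursor()
         cur.execute("SELECT AVG(value) FROM readings WHERE ts >= %s",
                     ("2018-01-01",))
         print(cur.fetchone())
         conn.close()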