Posts posted by smithd

  1. I wonder who would go through the trouble of embedding secret code into the VI format when you can just put code on the block diagram and hide it behind a structure.

    Quote

    If I understand correctly only vis that I wrote are safe to use

    Well, that's kind of always been the case regardless of this vulnerability. It's code; you should only run code from trusted sources or after inspection.

     

    It's also funny that the vulnerability page shows LabVIEW NXG, which gets rid of the VI format entirely.

  2. Can you explain your goal? Do you want a continuous stream of images sent to Python, or just a sequence? What's the purpose of the time interval array? Are those actually all different delays, or fixed? Is there a reason your camera is plugged into a cRIO rather than the computer running the Python script? What latency can you permit between acquiring some images and getting them on the Python side?

    If you can plug your camera into the computer I'd just use: https://pypi.org/project/pynivision/

    If you need to stream them for whatever reason, and the time intervals are all constant, I'd suggest setting your camera to produce data at exactly that frame rate and not timing the loop at all (which doesn't do much anyway, since you are asking IMAQ for a buffer number rather than the latest frame). If the time intervals are not constant but can be made multiples of some common factor, do the same thing but use buffer number*N to pull out the frames you care about.

    If you can handle high latency, which it seems like you can, maybe just write the images to disk and then use WebDAV or FTP to pull the files off from your host computer? This would be a lot more straightforward to implement.
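For the write-to-disk idea, a minimal host-side sketch using Python's standard ftplib might look like the following. The cRIO hostname, credentials, directory names, and the assumption that an FTP server is enabled on the target are all placeholders, not tested against real hardware:

```python
from ftplib import FTP
import os

def pull_images(ftp, remote_dir, local_dir):
    """Download every file in remote_dir that we don't already have locally."""
    ftp.cwd(remote_dir)
    fetched = []
    for name in ftp.nlst():
        local_path = os.path.join(local_dir, name)
        if os.path.exists(local_path):
            continue  # already pulled on a previous pass
        with open(local_path, "wb") as f:
            # retrbinary streams the remote file into our write callback
            ftp.retrbinary("RETR " + name, f.write)
        fetched.append(name)
    return fetched

# e.g. pull_images(FTP("crio-hostname", "user", "pass"), "/images", "./images")
```

Run it on a timer from the host side and it degrades gracefully: if the host falls behind, the images just accumulate on the cRIO's disk until the next pass.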

  3. Strangely enough, it's just a property:

    https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000004Aa2SAE

    I don't know if we are thinking of the same thing, but what came to mind with software triggering was this procedure:

    1. Change trigger mode on camera to software trigger
    2. Read buffer #
    3. Set exposure time
    4. Read back exposure time (poll until = new value)
    5. Fire software trigger
    6. Ignore that new image
    7. Turn trigger mode back to hardware triggering (or free run, if that's what you use)

    Seems like this would be a pretty fool-proof way of ensuring the change? If it works, you can then start removing things (for example, I think steps 5/6 are redundant since you paused triggering in step 1).
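The steps above can be sketched out; here's a Python version where the camera object and the attribute names (set_attr/get_attr, TriggerMode, ExposureTime) are hypothetical stand-ins for the real IMAQdx property calls, just to show the poll-until-confirmed shape:

```python
import time

class FakeCamera:
    """Stand-in for a real camera handle; attribute names are hypothetical."""
    def __init__(self):
        self.attrs = {"TriggerMode": "Hardware", "ExposureTime": 1000.0}
    def set_attr(self, name, value):
        self.attrs[name] = value
    def get_attr(self, name):
        return self.attrs[name]

def change_exposure(cam, new_exposure, timeout_s=1.0):
    cam.set_attr("TriggerMode", "Software")      # 1. pause hardware triggering
    cam.set_attr("ExposureTime", new_exposure)   # 3. request the new exposure
    deadline = time.monotonic() + timeout_s
    while cam.get_attr("ExposureTime") != new_exposure:  # 4. poll the read-back
        if time.monotonic() > deadline:
            raise TimeoutError("exposure change never confirmed")
        time.sleep(0.01)
    # steps 5/6 (fire a throwaway software trigger, discard that image)
    # would go here if they turn out to be needed
    cam.set_attr("TriggerMode", "Hardware")      # 7. resume hardware triggering

cam = FakeCamera()
change_exposure(cam, 2000.0)
```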

    • Like 2
  4. 6 hours ago, hooovahh said:

    I have an update, but am unsure how to handle this in Github.  I have an account but I don't know how to batch upload replacing all files, removing the ones not used, and doing so in a way that doesn't look like a bunch of small commits. 

    I may be misreading it, but it sounds like you just want to:

    1. Clone the repo
    2. Delete all the code (leaving the .git folder and presumably the readme/etc if one exists)
    3. Copy in your code
    4. Commit (git add --all; git commit)
  5. Depends on what you mean by "test data". If you just mean data, there's a small company that has a web service platform (I think they do hosted AWS, but I may be wrong): https://www.daq.io
    Or you could ask your sales rep about the NI data management software suite: http://www.ni.com/data-management-software-suite/
    You could also use a more non-labview option, like influx cloud: https://www.influxdata.com/products/editions/

    If you mean test data like "textual logs of stuff", then the common open source option would probably be Elasticsearch https://www.elastic.co/products/elasticsearch which I've read and heard is hard to set up, but might be easy enough if you use Bitnami: https://bitnami.com/stacksmith which is how I exclusively set up anything I want to use that consists of more than a single executable. Docker might also be an option in that realm.

    If you mean test data like "TestStand reports", maybe the new SystemLink from NI would be a good fit: http://www.ni.com/en-us/shop/electronic-test-instrumentation/application-software-for-electronic-test-and-instrumentation-category/systemlink/monitor-automated-tests-with-systemlink.html
    This also supports the plain data use case by letting you upload tdms files and store tags to be displayed in a dashboard.

    Edit: Just because, I thought I'd google "test result storage database" and it came up with this: https://maintainable.com ...for a cloud-based company, their website is odd. But they have a bullet point: "Integrates with industry software like NI TestStand".

  6. I don't know how secure the database itself would be when exposed to the internet. I suppose in theory it could be hardened, but there's a lot of surface area you have to understand. I'd lean towards a web server myself, as this lets you tightly restrict what you're going to let clients do. For encryption, I know the LabVIEW HTTP client supports TLS, and it looks like MySQL does too, if configured. From LabVIEW you'd have to use the ODBC/ActiveX connector, since LabVIEW doesn't have built-in TLS support (just the HTTP client). HTTPS is almost certainly more firewall friendly.

    • Like 1
  7. I don't know if there's a guide, but generally, writing a wrapper that is (trivially) LabVIEW-compatible means:

    • Determining thread safety of the underlying code (and trying to keep things thread safe if possible -- the alternative is that every DLL call needs to run in the UI thread)
    • Only exposing simple return values (void, numeric, pointer) -- don't try to return a struct by value
    • Only exposing simple parameter values (string, numeric, string array, numeric array, or pointer) -- passing a struct by value can be done, but you'll have to think about byte alignment on different platforms.
    • Trying to keep things synchronous if possible. Any callbacks have to be part of the wrapper. LabVIEW exposes two message mechanisms which can be called from C: firing an occurrence and firing off a user event. If your underlying library uses a callback mechanism to notify clients that an operation has completed, you would write a callback inside your wrapper which fires the occurrence or generates the user event when the operation is complete. Presumably the LabVIEW code would then either have the data (as part of the event) or call the DLL to grab the resultant data.
      • The program files/ni/labview/cintools directory contains various headers you can use to access some of the innards of LabVIEW, including the above. There used to be a help page on ni.com, but Google can no longer find it. Also included in the tools are the definitions of LabVIEW arrays and LabVIEW strings, which could make manipulation of large data sets easier/more efficient.

     

    • Like 1
  8. You could probably get better results by using the quotient function you have down below as your double value, rather than doing a floating point multiply.

    You can also just continue with the pattern you have followed for subsecond values: divide seconds by days, days by years, and find the offset from 1904. The hard part is handling things like leap seconds.

    You could try to use the Seconds To Date/Time function after converting to a standard timestamp using one of these methods:

    1.

    The FPGA Timekeeper has a nanosecond to LabVIEW time conversion: https://forums.ni.com/t5/Sync-Labs/NI-TimeSync-FPGA-Timekeeper-Getting-Started/gpm-p/3543895

    under /timekeeper/utilities

    2.

    Move the bits around and type cast it yourself. The timestamp format is defined here: http://www.ni.com/tutorial/7900/en/

    {
       (i64) seconds since the epoch 01/01/1904 00:00:00.00 UTC (using the Gregorian calendar and ignoring leap seconds),
       (u64) positive fractions of a second
    }

    Your time in ns, divided by 1E9, gives the first value.

    The remainder of that division, multiplied by (2^64)/(1E9) (roughly 18446744074), gives the second value, if I'm reading the document correctly.

    You put these in a two-element cluster in the order above, and then type cast to a timestamp.
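As a sanity check of that arithmetic, here's the same split sketched in Python (just to illustrate the math, not NI code):

```python
FRACTION_SCALE = 1 << 64  # one second = 2^64 fractional units in a LabVIEW timestamp

def ns_to_labview_timestamp(ns_since_1904):
    """Split a nanosecond count since 1904-01-01 00:00:00 UTC into the
    (i64 seconds, u64 positive fraction) pair a LabVIEW timestamp holds."""
    seconds, remainder_ns = divmod(ns_since_1904, 10**9)
    # scale the sub-second remainder from 1/1e9 units into 1/2^64 units
    fraction = (remainder_ns * FRACTION_SCALE) // 10**9
    return seconds, fraction

# 1.5 s after the epoch -> 1 whole second plus half of 2^64
print(ns_to_labview_timestamp(1_500_000_000))  # (1, 9223372036854775808)
```

Doing the fraction scaling in integer math like this avoids the precision loss you'd get from a double multiply, which matters if you care about the low bits.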

    • Like 1
  9. I tried to (and succeeded, for a given value of succeeded) use TestStand as a semi-headless sequencer with a network front-end, but I often think it may have been a mistake. The driving factor behind my reconsideration is that I've now used it for a few months, and the API is a pain to use and it's pretty buggy in important areas. We're not going to switch at this point, because I did eventually get it to work fairly consistently, but that could change if new requests come in -- every new feature seems to take more and more time to get right.

    The buggy part is easy to describe: occasionally my whole LabVIEW instance will hang when I start a VI with a TestStand app manager control on it, occasionally it will fail to shut down, and I sometimes see bizarre behavior with traces, like on first run a sequence will trace into the process model but on subsequent runs it will jump directly to the main sequence (with no setting changes in between).

    Specific to what I'm doing, I rely on the API for everything, and it's a big challenge to wrap my head around edit vs runtime objects, different trace modes and flags, error vs failure behavior, threading concerns (at what points am I allowed to safely access engine data?), etc. Fundamentally, what I want to do with TestStand is this set of methods: startExecution(), getStatus(). Unfortunately, getStatus in reality balloons out into this crazy nest of callbacks and API calls and trying to reconstruct the state of the system from a bunch of trace messages. Also, specific to my desire to run sequences from the API, there are a ton of modal dialogs which pop up whenever you do something wrong. Most are disable-able (once you figure out all the flags to set), but a few related to closing references are not. Modal dialogs are obviously a challenge to work around if there is nobody standing at the UI ready to press buttons.

  10. Oh yeah, if you have performance needs, writing directly to the database can be quite poor (depending on exactly how you have the database set up, the indices involved, the disks involved, etc.). Rather than logging to SQLite as Shaun does, I just write my data to a binary file as fast as I can, and have a background service shove it into the database as fast as it can -- essentially using the disk as a queue.
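A bare-bones illustration of that disk-as-queue pattern (the record format and file names are made up for the example; a real version would also need file rotation and crash recovery):

```python
import struct

# example record layout: little-endian f64 timestamp + u32 value, no padding
RECORD = struct.Struct("<dI")

def append_record(path, timestamp, value):
    """Fast path: the acquisition loop just appends to a local spool file."""
    with open(path, "ab") as f:
        f.write(RECORD.pack(timestamp, value))

def drain_records(path):
    """Slow path: a background service reads whatever has been spooled
    so far and hands it to the database insert layer at its own pace."""
    with open(path, "rb") as f:
        data = f.read()
    usable = len(data) - len(data) % RECORD.size  # ignore a torn final record
    return [RECORD.unpack_from(data, off)
            for off in range(0, usable, RECORD.size)]
```

The nice property is that a slow database only grows the spool file; it never back-pressures the acquisition loop.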

  11. 2 hours ago, hooovahh said:

    Great idea on the package manager, but this is only going to be a solution for those who are online, and are willing to download even more.  Having an offline installer is always important, for archival, but also for convenience for future installs.

    Technically, although this is absolutely terrible and they should be ashamed, you can copy the package manager + package cache to an offline machine. 

    And yes, this is honestly what they suggest you do: http://www.ni.com/tutorial/53918/en/

    My online:offline device ratio is about 1:30, so a few weeks back I found out that you can, and immediately did, clone the entire cRIO Linux package repository :(

     

  12. Well, specifically it's the return data that's the issue, and the answer is that you have to benchmark for your data set. I'd expect the crossover point to be <1 MB, but it's been a while since I tried it. For inserting data, or fetching configuration, or asking the database to give you a processed result (avg of x over time period T), this works great; for grabbing data in bulk out of the DB it's uselessly slow.
