Posts posted by smithd

  1. Manually add the .lvproj file to the project (i.e. itself) then it will appear in the build source distribution.

    That could work, but you'd probably have to make sure the lvproj links to the source distribution rather than the original. I doubt app builder will do anything with the lvproj file.

     

    With the source dist option, you should also double check that your bitfile works correctly if you're using cRIO.

     

    Back to the original question, I see this as being two fundamentally different issues. 

     

    Basic protection is pretty simple, as described above: you can use either a source dist or an lvlibp and remove the block diagrams. From an IP-protection standpoint, you may want to remove front panels and turn off debugging on the code as well. After all, if you can see the inputs and outputs to a function and run it, there might be a way to glean what's going on. It depends on what you're looking for and how paranoid you are. Depending on your level of paranoia, I'd also recommend walking through this series of whitepapers and making sure you at least understand what you are and are not doing to protect your system (just skip to the three links at the bottom): http://www.ni.com/white-paper/13069/en/

     

     

    The second issue is less clear, as I'm not sure what benefit you get from including the lvproj. It seems like the easiest answer is what Shaun said, or to make a post-build VI which copies the lvproj file to the source dist's location. But I am curious why this is important. To me, you'd normally build your exe in dev, make an image, and then use the image for deployment. I'm confused about why you'd want to make a specific configuration but not build an exe out of it.

  2. As for your other question about plugins, there are a few links I'll put below, but you should ask yourself these questions:

    Do you really need to do this?

    ...I mean really, are you sure?

    ......No really, it's not fun.

     

    OK, so if you really do decide to do this, there are a few things that will come in handy:

    https://ekerry.wordpress.com/2012/01/24/created-single-file-object-oriented-plugins-for-use-with-a-labview-executable/

    https://decibel.ni.com/content/groups/large-labview-application-development/blog/2011/05/08/an-approach-to-managing-plugins-in-a-labview-application

    https://decibel.ni.com/content/docs/DOC-21441

     

    Depending on how many plugins you have, how many platforms you need to support, and how many LabVIEW versions you support (versions * OSes * plugins = lots?), you should look into using a build server like Jenkins.

     

    Finally, as for the actual code, you might find this handy:

    https://github.com/NISystemsEngineering/CEF/tree/master/Trunk/Configuration%20Framework/class%20discovery%20singleton

    It's an FGV which locates plugins on disk or in memory, removes duplicates, and allows you to load classes by an unqualified or qualified name (assuming they're unique). It's not particularly complex, but it provides all the little wrapper functions we needed.

  3. I don't think this will help anymore (you said you can't modify the grandparent), but I've found it handy in the past for what I think are similar situations. It's essentially a nice way to describe what the dr. was suggesting:

    http://en.wikipedia.org/wiki/Template_method_pattern

     

    Here your top-level parent provides a public static method and a protected dynamic method. This gives a child defined places to override behavior, but also makes sure the public method does what you want, like catching errors or handling incoming messages. That way you separate the functionality specific to the logic/algorithm from the functionality specific to the child. It's a really handy pattern.
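
    Roughly, in Python-ish pseudocode (not LabVIEW, obviously, and the names are made up -- in LabVIEW the public method would be static dispatch and the protected one dynamic dispatch):

        class MessageHandler:
            def handle(self, message):
                """Public, "static" part: every child is called through this."""
                try:
                    return self._do_handle(message)   # the overridable step
                except Exception as err:
                    print("handler failed:", err)     # shared error handling lives here
                    return None

            def _do_handle(self, message):
                """Protected, "dynamic" part: children override only this."""
                raise NotImplementedError

        class LogHandler(MessageHandler):
            def _do_handle(self, message):
                print("logging:", message)
                return True

        LogHandler().handle("hello")   # the parent's wrapper runs around the child's code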

  4. I also imagine people in favor of this model would be happy to conflate liking a piece of software with liking a style of software. For example, for a long time my group has used yEd for diagramming things. yEd is fine, but it's kind of irritating to use, and last year I found a similar tool online (https://www.draw.io/) which is significantly better in terms of usability and a little better in terms of features. But I'd really love it if I could just download it and use it offline.

    • Like 1
  5. The tests have been running for 6 hours now.

     

    I talked to an IT guy.

    They confirmed that it is McAfee causing the problem, but adding an exception is not possible... they would have to submit a request to McAfee to add an exception worldwide  :oops:

     

    They just don't want to put any effort in it.

     

    I'll just call my executable labview.exe  :thumbup1:

    Now I'm sad I didn't see this earlier. As soon as I read your first post my immediate thought was...McAfee?

     

    Fun story: on some of the domain computers here at NI (can't figure out which ones or what is different), the LabVIEW help will take about 1-2 minutes to load any time you click on detailed help, because McAfee is... McAfeeing. You go into Task Manager and kill mcshield (which is railing the CPU) and it works in an instant. It's truly amazing.

     

    And...yes, you can add exceptions. McAfee doesn't seem interested in making it easy, but it is possible. Renaming to labview.exe is a pretty sweet workaround though.

  6. There is a function called Get LV Class Default Value By Name which you might be able to use. It requires that the class is already in memory, but if you follow a standard naming convention ("DeviceA.lvclass", "DeviceB.lvclass") your user can just type in "A" or "B"; you can format that into your class name and then use that function to grab the right class.

     

    We've also done something more general for our configuration editor framework here: https://github.com/NISystemsEngineering/CEF/tree/master/Trunk/Configuration%20Framework/class%20discovery%20singleton

    Basically it lets you add a search folder, and it will try to locate the desired class by qualified name, by filename, or by unqualified name and then pass out the result. It's not particularly fancy, but it helps avoid simple errors like case sensitivity and the like. (As a side note, if you're likely to have classes where case sensitivity matters, or if you have classes with the same unqualified name (Blah.lvclass) but a different qualified name (A.lvlib::Blah.lvclass vs B.lvlib::Blah.lvclass), then you shouldn't use this. But then you also probably shouldn't be doing those things.)
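
    If it helps, the general lookup idea is roughly this (a Python sketch of the logic, not the actual CEF code -- the folder layout, the "Device" prefix, and all names are made up):

        import os

        def find_plugin_classes(search_folder):
            """Index every .lvclass file under the folder, keyed case-insensitively."""
            classes = {}
            for root, _dirs, files in os.walk(search_folder):
                for name in files:
                    if name.lower().endswith(".lvclass"):
                        # first hit wins, so duplicates on disk are ignored
                        classes.setdefault(name.lower(), os.path.join(root, name))
            return classes

        def resolve(classes, user_input, prefix="Device"):
            """Turn a short name like "A" into the path of "DeviceA.lvclass"."""
            key = ("%s%s.lvclass" % (prefix, user_input)).lower()
            return classes[key]   # hand this to the class loader

        # e.g. resolve(find_plugin_classes("plugins"), "A") -> ".../DeviceA.lvclass"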

  7. I don't personally know of anyone in my group using IRIG with the timekeeper, but it might help to better explain what it does. The timekeeper gives the FPGA a sense of time through a background sync process and a global variable which defines system time. It doesn't know anything about time itself -- it's simply trying to keep up with what you, as the user of the API, say is happening. To give you an example with GPS: the module provides a single pulse per second and then the current timestamp at that time. You're responsible for feeding that time to the timekeeper, and the timekeeper is responsible for saying "oh hey, I thought 990 ms passed but really a full second passed, so I need to change how quickly I increment my counter". The timestamp is not required to be absolute (it could be tied to another FPGA, or to the RTOS); you basically just have to give it an "estimated dt" and a "true dt". With that in mind, if you can get some fixed, known signal from the IRIG API you mentioned, the timekeeper could allow you to timestamp your data. It can theoretically let you master the RT side as well (https://decibel.ni.com/content/projects/ni-timesync-custom-time-reference), but I'm not sure how up to date that is or if that functionality ever got released.
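
    The rate-correction idea, stripped down to a toy Python sketch (this is just the concept -- it is not the timekeeper's actual API, and every name here is invented):

        class Timekeeper:
            def __init__(self, nominal_tick_s):
                self.time_s = 0.0
                self.tick_s = nominal_tick_s      # seconds added per FPGA tick

            def tick(self):
                self.time_s += self.tick_s

            def sync(self, estimated_dt_s, true_dt_s):
                # "I thought 990 ms passed but really a full second passed,
                #  so change how quickly I increment my counter."
                self.tick_s *= true_dt_s / estimated_dt_s

        tk = Timekeeper(nominal_tick_s=25e-9)            # e.g. a 40 MHz clock
        tk.sync(estimated_dt_s=0.990, true_dt_s=1.000)   # a 1 PPS source says 1 s really passed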

  8. I think git on the command line is pretty unusable, but I've found SourceTree to be the correct subset of functionality to do everything that I want. I don't particularly care for the let's-fork-everything style, so we basically use git as a non-distributed system. The easy renaming of files, the lack of locks, and the ability to instantly create a local repository are all excellent. And, as the article said, a lot of people use git because of GitHub. That's fine too. All told, I find it just a lot simpler to work with than Perforce or TFS (I haven't used TSVN in ages).

  9. Hi All,

     

    I would like to do some moving averaging on my FPGA by building the sum of around 200 (or N) data points, then removing the oldest value and adding the newest value, so that I only need to acquire one new point once the data has reached the 200-point limit. But I have 16 channels, the FPGA does not support 2-D arrays, and while I could do it with 16 1-D arrays, I don't think there will be enough resources on the FPGA. Does anyone have any ideas how I can do this?

     

    I guess the main challenge is having a way to continuously store the last 200 (or N) points, other than a 2-D array or multiple 1-D arrays, so that I can subtract the oldest value. I am also trying to do it this way so that I don't need to wait to acquire 200 (or N) samples and then average them each time for a new averaged value.

     

    Kindly let me know if you have any suggestions.

     

    Thanks in advance.

    Pravin

    For a fixed number you could use a feedback node with delay=200. For an unknown number you can use any number of memory blocks: an array, an actual block of memory, or a FIFO. These all have advantages and disadvantages depending on how much space you have and whether you need access to the data for anything other than the average. However, the basic concept is exactly the same as what is shown in that other thread.

  10. [attachment: Avg_Running.png] Thanks for the input.

    How do I convert the code below to FPGA?

    Depends on how fast you want it to run. You could of course store an array on the FPGA and sum it every time, but that will take a while for a decently large array. If that's fine, then you're close; you just need to remember that FPGA only supports fixed-size arrays and that the order in which you sum samples doesn't particularly matter. You really just need an array, a counter to keep track of your oldest value, and Replace Array Subset.

     

    If you do need this to run faster, you should think about what happens on every iteration -- you only remove one value and add another value. So let's say you start with (A+B+C)/3 as your average. On the next iteration it's (B+C+D)/3. So the difference is adding the newest sample divided by the size (D/3) and subtracting the oldest sample divided by the size (A/3).
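
    In text form, the running-sum version looks roughly like this (a Python sketch of the idea; the FPGA version would use a fixed-size array, a write index, and Replace Array Subset instead of a list):

        N = 200

        class RunningAverage:
            def __init__(self, n):
                self.buf = [0.0] * n      # circular buffer holding the last n samples
                self.idx = 0              # points at the oldest sample
                self.total = 0.0          # running sum of everything in buf

            def add(self, sample):
                oldest = self.buf[self.idx]
                self.total += sample - oldest        # add newest, drop oldest
                self.buf[self.idx] = sample
                self.idx = (self.idx + 1) % len(self.buf)
                return self.total / len(self.buf)

        avg = RunningAverage(N)
        for s in range(1000):             # stand-in for your acquisition loop
            current = avg.add(float(s))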

  11. Wow this changes things a lot. We have timed loops all over the place (CAN driver, datalogging engine, graphing engines, automation engine, ...). Are you suggesting to replace them with regular WHILE loops?

     

    Edit: Can you direct me to a resource explaining the cost of timed loops? I couldn't find anything but advantages online...

    Ah, so you want documentation. That's tougher. I found this (http://digital.ni.com/public.nsf/allkb/FA363EDD5AB5AD8686256F9C001EB323), which mentions the slower speed of timed loops, but in all honesty they aren't that much slower anymore, and you shouldn't really see a difference on Windows, where scheduling jitter will have far more effect than a few cycles of extra calculation.

     

    The main reasons, as I understand them, are:

    - It puts the loop into a single thread, which means the code can run slower because LabVIEW can't parallelize much.

    - Timed loops are slightly slower due to bookkeeping and features like processor affinity, and most people don't need most of those capabilities.

    - It can confuse some users, because timed loops are associated with determinism, but on Windows you don't actually get determinism.

     

    Broadly speaking, the timed loop was created to enable highly deterministic applications. It does that by allowing you to put extremely tight (for LabVIEW) controls on how the code is run, letting you throw it into a single thread running on a single core at really high priority on a preemptive scheduling system. While those controls are important for determinism, in the general case you're going to get better overall performance by letting the schedulers and execution systems operate as they'd like, since they are optimized for that general case. Thus, there is a cost to timed loops, which makes sense if you need the features. On Windows, none of those features make your code more deterministic, since Windows is not an RTOS, so the benefits (if any) are usually outweighed by the costs.

     

    As for whether I'm suggesting you replace them all -- not necessarily. It's not going to kill your code to have them, of course, so making changes when the code does function might not be a good plan. Besides that, it sounds like you're using absolute timing for some of these loops, which is more difficult to get outside of a timed loop (it's still, of course, very possible, but if I'm not mistaken you have to do a lot of the math yourself). So, for now I'd say stick with the workaround and maybe slowly try changing a few of the loops over where it makes sense.

  12. I just created a very lean exe that will simply pop up a message when the timestamp is bad. I'll post here to say whether I could reproduce the issue or not.

     

    I've thought about the workaround you suggest, but since the primitive you're talking about doesn't have an error input, it would execute in parallel with the other constructs inside the loop, which doesn't provide the same accuracy. Unless I use a sequence, of course, but I don't want to add more code than is necessary in this loop since I need it to run fast.

     

    Thanks for your reply :)

    In order to better schedule the code, a timed loop is placed into its own higher-priority thread. Since there is one thread dedicated to the loop, LabVIEW must serialize all timed loop code (as opposed to normal LabVIEW, where code is executed on one of a ton of different threads). This serialization is one of the reasons the recommendation is to use them only when you absolutely need one of their features, generally synchronizing to a specific external timing source like the scan engine or assigning the thread to a specific CPU. So it will not truly execute in parallel, although of course you still have to show LabVIEW the data dependency in one way or another.

    For some reason this is not documented in the help for the timed loop, but instead in the help on how LabVIEW does threading: http://zone.ni.com/reference/en-XX/help/370622K-01/lvrtbestpractices/rt_priorities/

     

    I also wanted to note that a sequence structure does not usually add overhead, although it can make code execute more slowly in some specific situations by interfering with the normal flow of data. That shouldn't be the case here, since you're only using a single frame; it's essentially equivalent to a wire with no data.

     

    Also, timed loops are generally not recommended on Windows, which it sounds like is what you're using. They may still be used, and should function, but because Windows is not an RTOS you won't actually have a highly deterministic program. This makes it more likely that the costs of a timed loop outweigh the benefits.

    • Like 1
  13. "I had multiple instances of strange cross dependencies loading so much that by the end of the day most of my source code was being loaded to the crio.  This was causing major problems since GUI libraries were being loaded as well and were causing failed builds.  By the end, i couldn't' figure what was loading what and moved everything out of libraries.  This action alone quickly cleared up the issues i was having and was a huge lesson learned."

    Hmm... this just means the code you assembled into these libraries had unintended dependencies. Sure, pulling items out of libraries can separate things until you get rid of unused dependencies, but then the groupings are obsoleted as well. The better thing is to fix the libraries so they have the proper dependencies. (I would fix -- or not use -- any library that has improper dependencies, as far as possible.)

    An easy way to see the dependencies of a library (.lvlib) -- or anything else -- is to open a new project and drop the library (or other item) alone into the new project. Then look at the dependencies. My advice is to get used to paying close attention to what is in the dependencies -- for everything, not just libraries. This will help you make the code design and architecture better. Again, this isn't a problem with the library concept itself but a misapplication of it in specific libraries. It seems clear that this sort of management (what belongs in a project, or in a library, or in a class, and how these link to one another) deserves more attention in training or other materials.

     

    Paul

    I'm with Paul on this one. I think I said this earlier in the thread, but my group ran into some of the same issues on a project and ended up in a pretty terrible state, with things like the Excel toolkit being pulled onto the cRIO. Careful management of dependencies is pretty easy to ignore in LabVIEW for a while, but it bites you in the end.

  14. BUMP. I think this is an issue worth worrying about. Do you think I should request that NI create a CAR?

    If you can reproduce it reliably and can narrow it down to a reasonable set of code, absolutely. I wasn't able to find anything like this in the records when I did a quick search a moment ago. Obviously, the simpler the code, the better. Also be sure to note down what type of RT target you're using (I didn't see that in your post above) and any software you might have installed on the target, especially things like time sync, which might mess with the time values in the system.

     

    For now, I'd say the easiest workaround would be to just get the current time at the start of each loop iteration using the normal primitive.

  15. The official advice is to use some arbitrary primitive hidden in the utilities directory (VariantFlattenExp.vi) that takes a version number  :blink:. I wonder what potion they were drinking when they came up with that gem?

    Wow, awesome. I was under the impression you could just use Variant to Flattened String, which includes the type info, but maybe your VI is what I heard about. And that VI also has a different connector pane from everything else, 3:2 :/

  16. After many tests, we were still falling short of our target.  Until I read Shaun's post.  I noticed that in your final example, you're using a shift register for the array, instead of just connecting the array through the loop.  I had been connecting it through the loop - making this change made all of the difference for us!  So I assume that shift registers avoid copying the data on every cycle, while "normal?" connections don't?

    This is very tricky, and I definitely don't totally understand it. Also, despite the NI under my name I am not remotely part of R&D and so may just be making this stuff up. But it seems to be mostly accurate in my experience.

     

    LabVIEW is going to use a set of buffers to store your array. You can see these buffers with the "show buffer allocations" tool, but the tool doesn't show the full story. In the specific image in Shaun's post, there should be no difference between a tunnel and a shift register, because everything in the loop is *completely* read-only, meaning that LabVIEW can reference just one buffer (one copy of the array) from multiple locations. If you modify the array (for example, with your in-place element structure), there has to be one copy of the array which retains the original values and another copy which stores the new values. That's defined by dataflow semantics. However, LabVIEW can perform different optimizations depending on how you use them. I've attached a picture. The circled dots are what the buffer allocations tool shows you, and the rectangles are my guess at the lifetime of each copy of the array:

    post-52313-0-68106900-1423086467.png

     

    There are three versions. In version 1 the array is modified inside the loop, so LabVIEW cannot optimize. It must take the original array buffer (blue), copy it in its original form into a second buffer (red), and then change elements of the red buffer so that the code downstream can access the updated (red) data.

    Version 2 shows the read-only case. Only one buffer is required.

    Version 3 shows where a shift register can aid in optimization. As in version 1 we are changing the buffer with Replace Array Subset, but because the shift register basically tells LabVIEW "these two buffers are actually the same", it doesn't need to copy the data on every iteration. This changes with one simple modification:

    post-52313-0-14714600-1423086722.png

    However, you'll note that in order to force a new data copy (note the dot), I had to use a sequence structure to tell LabVIEW "these two versions must be available in memory simultaneously". If you remove the sequence structure, LabVIEW just changes the flow of execution to remove the copy (by performing the index before the replace):

    post-52313-0-55275300-1423086868.png

     

    For fun, I've also put a global version together.

    post-52313-0-55548100-1423086997.png

    You'll note that the copy is also made on every iteration here (as LabVIEW has to leave the buffer in the global location and must also make sure the local buffer, on the wire, is up to date).

     

    Sorry for the tl;dr, but hopefully this makes some sense. If not, please correct me :)

    • Like 2
  17. With a timed loop, I believe it defaults to skipping missed iterations. In the configuration dialog there should be a setting ("mode", I think) which tells it to run right away. However, if this is a Windows machine you shouldn't be using a timed loop at all, as it will probably do more harm than good. And if this *isn't* a Windows machine, then railing your CPU (which is what changing the timed loop mode will do) is not a good idea. Either way, just use a normal loop.

     

    As for the actual problem you're encountering, it's hard to say without a better look at the code. You might use the profiler tool (http://digital.ni.com/public.nsf/allkb/9515BF080191A32086256D670069AB68) to get a better idea of the worst offenders in your code, then focus just on those functions. As Ned said, the index should be fast and isn't likely to be the performance issue. Copying a 2D array (reading from a global) or any number of other things could be the problem.

  18. However, it would slow things down a little bit because I'd have to host it on the network. I can see a case study coming :)

    Why couldn't you install the database on the same machine? Local comms should be fast, and depending on how complex your math is, you might be able to make the DB take care of it for you in the form of a view or function.
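
    Just to illustrate the "let the DB do the math" idea, here's a throwaway SQLite example in Python (the table, column names, and scaling math are all made up, not from your application):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE samples (channel INTEGER, raw REAL)")
        con.executemany("INSERT INTO samples VALUES (?, ?)",
                        [(0, 1.5), (0, 2.5), (1, 10.0)])
        # The math lives in the database as a view instead of in your LabVIEW code:
        con.execute("""CREATE VIEW scaled AS
                       SELECT channel, AVG(raw) * 9.0 / 5.0 + 32.0 AS avg_scaled
                       FROM samples GROUP BY channel""")
        print(con.execute("SELECT * FROM scaled").fetchall())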

     

    My array is an array of GOOP4 objects - not pointers. A GOOP4 object creates a DVR of its attributes and type casts it into a U32, which is stored in a cluster. That's 4 bytes.

    Unless I'm mistaken, he was pointing out that the representation of data in memory is in the form of pointers (http://www.ni.com/white-paper/3574/en/#toc2, section "What is the in-memory layout..." or http://zone.ni.com/reference/en-XX/help/371361H-01/lvconcepts/how_labview_stores_data_in_memory/). So you have an array of handles, not an array of the actual data. If you replace an array element you're probably just swapping two pointers, as an example.

  19. I have a feeling it can cause hiccups when you have dependencies shared by both the Windows target and the real-time target -- for example, you have a class that works on the RT side, and a typedef control within that class that gets used somewhere on the Windows/HMI side. That's a no-no; always disconnect from the typedef if you have to. I don't have conclusive evidence to back it up, though.

     

    I've been using source separate since 2011 and it seems to work 95% of the time. If you use any of my group's libraries which have been updated recently (much of the stuff on ni.com/referencedesigns), you are also likely using source separate code.

     

    However, because of other issues, we've taken to using at least two classes or libraries to help resolve the situation you describe. One is responsible for any data or functions which have to be shared between RT and the host. Then there is one class or library for the host, and one for RT. I think my colleague Burt S has already shown you some of the stuff which uses this, with editor (PC), configuration (shared API), and runtime (RT) classes. This pattern works really well for us, and we've begun applying it to new projects (or refactoring ongoing projects to get closer to this, relieving many of our edit-time headaches). While there is still shared stuff, it all seems to update correctly and LabVIEW doesn't appear to have any sync issues.

     

    Where I have had annoying issues is with case sensitivity (kind of an odd situation, but the long and short of it is that LabVIEW on Windows ignores case errors while LabVIEW on RT rejects them -- though I haven't confirmed this is related to source sep) and with modifying execution settings (for example, if you enable debugging you sometimes have to reboot the cRIO for it to take effect; even worse if you use any inlining). Both of these issues are documented.

  20. Option 3: Define each signal generation method in a separate class, several of which (corresponding to each type of measurement the device supports) are contained in the parent. Then move the UI (it can be a simple FP with controls on it) into the object in question and tell the appropriate object from the parent to place its UI in a subpanel of your choice (this can be a much simpler approach than it sounds).

     

     

    ^^This is more or less what we've been doing with the configuration editor framework (http://www.ni.com/example/51881/en/), and it's pretty effective -- CEF is a hierarchy, not containment, but the implementation is close enough. We've also found that for deployed systems (i.e. RT) we end up making three classes for every code module. The first class is the editor, which contains all the UI-related stuff that causes headaches if it's anywhere near an RT system. The second is a config object, responsible for ensuring the configuration is always valid as well as doing the to/from string conversion; the idea is that this class can be loaded into any target without heartache. The third is a runtime object which does whatever is necessary at runtime, but could cause headaches if it's loaded into memory on a Windows machine. Using all three is kind of annoying (some boilerplate necessary), but there's a definite separation of responsibilities, and for us it's had a net positive effect.

     

    The other thing I've done with the above is to make a single UI responsible for a whole family of classes, like the filtering case here. Basically I store the extra parameters as key-value pairs, and the various classes all implement an extra three methods. One says "what kvpairs do you need to run correctly", one says "does this kvpair look OK to you" (validation), and the third says "OK, here's the kvpair, run with it" (runtime interpretation). This has worked pretty well for optional parameters like timeouts and such, which you wouldn't want to make a whole new UI for but which are still valuable to expose. Also, this obviously only works well if you (a) know good default values and (b) have some mechanism for a user to configure the system so that the computer doesn't have to be smart.
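
    The three-method contract looks roughly like this (a Python sketch of the concept -- in reality it's a LabVIEW class hierarchy, and all the names here are invented):

        from abc import ABC, abstractmethod

        class PluginBase(ABC):
            @abstractmethod
            def required_keys(self) -> dict:
                """What kvpairs do you need to run correctly (with good defaults)?"""

            @abstractmethod
            def validate(self, key: str, value: str) -> bool:
                """Does this kvpair look OK to you?"""

            @abstractmethod
            def run(self, config: dict) -> None:
                """OK, here are the kvpairs -- run with it."""

        class FilterPlugin(PluginBase):
            def required_keys(self):
                return {"timeout_ms": "1000", "cutoff_hz": "100"}

            def validate(self, key, value):
                return key in self.required_keys() and float(value) > 0

            def run(self, config):
                print("running filter with", config)

        # The single generic UI only ever talks to PluginBase, so one editor
        # screen can cover the whole family of classes.
        p = FilterPlugin()
        cfg = p.required_keys()
        assert all(p.validate(k, v) for k, v in cfg.items())
        p.run(cfg)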

  21. The dongle solution is probably better, but this walks through a similar approach using just the information we can pull from the cRIO software:

    http://www.ni.com/example/30912/en/

    Note this hasn't been updated in ages, so while the concepts still work there might be easier ways to get the data, like the system config tool -- for example, if you yank out the serial number and eth0's MAC address, you can be decently sure you have a unique identifier for the cRIO.
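
    Something like this, conceptually (a Python sketch just to show combining the two values into one ID -- the serial and MAC here are made up, and how you read them on the target is up to you):

        import hashlib

        SERIAL = "0180A2B3"              # hypothetical cRIO serial number
        MAC = "00:80:2f:17:a2:b3"        # hypothetical eth0 MAC address

        machine_id = hashlib.sha256((SERIAL + MAC).encode()).hexdigest()
        print(machine_id)                # compare against the ID your build was licensed for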

     

    You might also want to follow many of these steps, especially disabling the FTP or WebDAV server (step 9):

    http://www.ni.com/white-paper/13272/en/#toc9

  22. Yeah, XML was just a convenience, but for some reason a lot of people get hung up on that. Sometimes I think it would have been better to include no serialization at all; I don't think I've ever used the XML format for anything. If I were selecting all over again in 2013, I'd pick the JSON parser instead (even if it is a bit of a pain to use for no real reason <_<).

     

    Anyway, the MGI files look nice, simple enough format. Good luck with the rest of the CVT.

  23. It's really a risk calculation. You have to anticipate the odds that down the road you'll want some feature (programmatic access? in-place editing?) that globals can't easily provide. If you think there's a 10% chance you'll need that, and it would take you 40 hours to pull out all your globals and put in a better data type, you'd need to save 4 hours of development time to justify using a global. Maybe you should put in a fudge factor for unknown unknowns, so you'll want to save 40 hours of dev time to justify their use.

    For the use he described, what would be the superior alternative? I'm on board with what you're saying, generally, but in this case...
