
Posts posted by JamesMc86

  1. Copying what you find "in the wild" is never theft. License notifications in the software notwithstanding, as the copier hasn't, in fact, agreed to anything
    That is fundamentally not the way copyright works! All software you write is covered by copyright as soon as you write it, whether you include a copyright notice or not. By default that means nobody else can use it. A license then grants permissions that copyright would otherwise deny. It all comes back to copyright law.

    Now, if you find a piece of software in the wild with no license and no author details, it would probably be very hard for the author to prove it is theirs and defend the case, but that doesn't necessarily make copying it right.

    (That is not to say I disagree with your point: if it is out there without a license, that is probably how the author intended it to be used, but legally there is no right to it.)

  2. PS. I was particularly annoyed by a comment of "Architecture not abstracted to switch over to field hardware.” I used abstract parent classes, with “Simulated” child classes injected at runtime, and a large comment explaining the intention to replace the simulated modules with hardware ones as required.

    I seem to remember I got a similar comment on mine using the same sort of technique, though previous comments on here suggested that OOP is acceptable for the CLA, just apparently not useful design patterns! Still passed, so I can't complain.

  3. You could share it with NI and help everyone that uses the feature, you would probably help more people!

    Well, I don't think anyone you help is exactly going to try to get you in trouble, are they? But then it might not be their software, in which case you are aiding IP theft.

    As I say, I don't know exactly what could happen, but I'd say there are as many people who stand to get hurt by it as to benefit from it. You may say you would be ethical with it, but I've heard there are some unscrupulous folk on this Internet (though I don't believe many wander the LAVA corner of the net).

  4. I wouldn't try to make a profit off somebody else's block diagram without their permission, password protected or otherwise, but I don't see anything wrong with messing around with it, as nobody is any worse off than they were before.

     

    Whilst I agree in principle that no harm is done, there is a reason why they have password protected it, and you have therefore gone against their wishes without knowing why they put the password on. It is also hard to prove, if you later work on something similar, that you have not taken inspiration from their IP (and I believe there is little legal protection against this). There is a reason why we have curtains on our bedroom windows  ;)

     

    I can't speak on behalf of the organisation regarding what would happen if you posted it, but it would certainly appear to be in breach of your license:

        Restrictions. You may not: (i) reverse engineer, decompile, or disassemble the SOFTWARE (except to the extent such foregoing restriction is expressly prohibited by applicable law); (ii) use the SOFTWARE to gain access to unencrypted data in a manner that defeats the digital content protection provided in the SOFTWARE;

    As for the consequences, I don't know exactly; presumably you could be at risk of being sued, especially for disseminating the information. If your employer owns the license, they probably wouldn't be too impressed!

    The other thing to consider is that, right or wrong, there are a number of people already using this feature out there (I don't know numbers, but I have certainly seen it). I don't think they would be impressed if NI turned around and said it's been compromised but we aren't going to attempt to improve the protection you are using on your IP. I do always recommend removing the diagram if it is highly sensitive, though.

  6. I would take a look at using images from the RAD (formerly RTAD) tool at http://zone.ni.com/devzone/cda/epd/p/id/5986

    This allows you to create fully working images of the controllers which guarantees all your versions match. It is also fully open source so you can customise it to make it more specific to your purpose.

    Failing that you can use the system configuration API to install components to an RT system if you really want to do individual components.

  7. There's nothing special about the refs obtained from the Controls[] property that makes them different than a regular control reference.  Each control has a unique refnum which will be returned whenever that control is referenced, regardless of how that refnum is obtained (direct reference, controls[] property, etc).  As a result, no need to close those references. 
    The implication, though, is that LabVIEW manages the implicit references but you manage the Controls[] references (by closing them, or closing a parent).
  8. Do you need an FFT that large? The reason I ask is that the figures you discuss are very large for an FFT, normally for a long time period we break it down into smaller chunks to FFT to see things change over time over the data set.

    I would avoid the Express VI; at these data sizes you need to avoid the data conversions the Express VI will cause. Between the other two I'm not sure of the different advantages. If you are doing sound-and-vibration-type analysis, I would use that toolkit, as the results feed easily into its other functions. To avoid the toolkit licensing, though, you could use the built-in function.

    There is another option, but it is yet another toolkit: it has high-performance functions that perform the FFT on the GPU, or multicore-optimised, if the performance becomes necessary (it can also perform the FFT on SGL data as opposed to DBL).
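For illustration only, here is a minimal Python sketch of the chunking idea described above: split a long record into fixed-size chunks and analyse each one, so you see the result change over time across the data set. Since the original discussion is about LabVIEW VIs, the `rms` function here is a hypothetical stand-in for the per-chunk FFT.

```python
import math

def split_into_chunks(signal, chunk_size):
    """Split a long record into fixed-size chunks (drop any ragged tail)."""
    n = len(signal) // chunk_size
    return [signal[i * chunk_size:(i + 1) * chunk_size] for i in range(n)]

def rms(chunk):
    """Stand-in per-chunk analysis; in LabVIEW this would be the FFT."""
    return math.sqrt(sum(x * x for x in chunk) / len(chunk))

# Analysing a long record chunk-by-chunk shows how the result (here, just
# an RMS level) evolves over time instead of averaging the whole data set.
signal = [math.sin(2 * math.pi * 50 * t / 1000.0) for t in range(4000)]
levels = [rms(c) for c in split_into_chunks(signal, 1000)]
```

Each chunk of a steady 50 Hz tone gives the same RMS (about 0.707); on real data the per-chunk results would reveal how the signal changes over the record.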

    I honestly am not sure it will be possible with that number of data points. Here are some tips that may get the code to run, but even then it will probably become very sluggish, as LabVIEW has to process 100M points every time it redraws the graph. Even if you don't decimate the data, LabVIEW has to, as the graph only has 100-1000 pixels it can use to plot it.

    1. Loading from a binary file is better than text, because text has to be converted, meaning two copies of the data. If you have text, load it a section at a time into a preallocated array (you will have to be very careful about allocations).

    2. Use SGL representation. The default in LabVIEW for floating point is normally DBL, but SGL uses only 4 bytes per point.

    3. By default on a 32-bit OS, LabVIEW has 2GB of virtual memory it can use (hence the problems: in SGL format, each copy of the data uses about 20% of this). If you are on a 32-bit OS, enable the 3GB flag so it can use 3GB instead (there is a KB on the NI site for this). Moving to a 64-bit OS with 32-bit LabVIEW will give it 4GB. The ultimate would be 64-bit LabVIEW, but you tend to hit limitations of supported toolkits, so I suggest that only as a last resort, when the memory sizes cannot be avoided through programming.

    On top of these you just have to be very careful that any data manipulation you do does not require a data copy.

    That is how you try to avoid running out of memory, but I would still suggest trying some of the other methods that Shaun and I have suggested. Even if you can get this to run, the programming will be a little easier, but the program is going to have poor performance with that much data and will always be on the brink: at any point you could add a feature which needs more memory and you are back to square one.
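The arithmetic behind tips 2 and 3 is easy to check. This is just a back-of-envelope sketch (in Python, for illustration) assuming the 100M-point data set from the question:

```python
def footprint_gib(n_points, bytes_per_point):
    """Size of one in-memory copy of the data set, in GiB."""
    return n_points * bytes_per_point / 2.0 ** 30

N = 100_000_000            # 100M points, as in the question
sgl = footprint_gib(N, 4)  # SGL: 4 bytes per point
dbl = footprint_gib(N, 8)  # DBL: 8 bytes per point

# One SGL copy is ~0.37 GiB, i.e. roughly 20% of the 2 GiB address
# space a 32-bit process gets by default; one DBL copy is ~0.75 GiB.
fraction_of_2gib = sgl / 2.0
```

So with DBL data, three copies (file buffer, array, graph) already approach the 2GB limit, which is why every avoided copy matters.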

  10. Just did a quick test but seems much slower! Attached is the mathscript node if anyone can spot if I have done something wrong.

    edit: some changes where it needs to be vector functions instead of matrix functions ('.' in front of the operator) brought it to 12x slower. I will have another look later to see if there is anything else I missed, but the primitives are still looking pretty good.

    evalSugenoFnodeDBL.vi

    To decimate, loop on single values by an incremental value. Or, for a proper display, you still need to load a whole chunk and use an SK filter or similar to display correctly. If you just want the max or min in a section, that's where SQLite works nicely, but there is a single function to get the min/max of an array anyway. I've written an example of the sort of thing you need to do (but not from file) at https://decibel.ni.com/content/docs/DOC-24017

    The advantage of any of these methods is that you don't have to load the whole file, thus removing the memory issue; you just load the section you need. The fundamental issue is having the whole data set in memory at once.
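As a sketch of the min/max idea (in Python rather than G, purely for illustration): reduce each screen-pixel-sized bin of samples to a (min, max) pair, which preserves peaks that plain every-k-th-point subsampling would miss.

```python
def minmax_decimate(samples, n_bins):
    """Reduce a long record to (min, max) pairs, one pair per screen bin.

    Plotting the min and max of each bin preserves peaks that plain
    subsampling (taking every k-th point) would throw away.
    """
    bin_size = max(1, len(samples) // n_bins)
    pairs = []
    for i in range(0, len(samples) - bin_size + 1, bin_size):
        chunk = samples[i:i + bin_size]
        pairs.append((min(chunk), max(chunk)))
    return pairs

# A 1M-point record reduced to 500 min/max pairs for a ~500-pixel plot.
data = [((i * 37) % 101) - 50 for i in range(1_000_000)]
pairs = minmax_decimate(data, 500)
```

In the real application you would run this over one loaded section of the file at a time, never the whole data set.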


  12. You'll end up writing a shed-load of code that realises your own bespoke pseudo database/file format that's not quite as good and fighting memory constraints everywhere

    If you had complex record types I would agree, but this is just straight numeric data. A binary file is not that hard to work with and gives high-performance random access with a smaller footprint than a database, because it doesn't carry all the extra functionality we are not using, and it returns the data directly in the correct type: no conversion necessary, which would hit you on large data sets (and stress your memory more!). TDMS is maybe a better option again for having an easier API, but it should give performance similar to the binary file.

    post-18067-0-96200400-1356255497.png

    I believe TDMS and HDF5 should give similar performance, as they are both binary formats, but I have not worked with HDF5 directly myself.

    For the conversion, you are probably going to have to load the existing file in pieces and write them back to whatever other format you go with. The hard thing is knowing where the chunks are, as (depending on your format) each entry could potentially be a different size. The read-multiple-rows option on the built-in Read from Text File is probably the best way to break it down (right-click >> Read Lines on Read from Text File).
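The conversion described above can be sketched in a few lines (Python here, for illustration; in LabVIEW the same loop would use Read from Text File with Read Lines and Write to Binary File). The file names and chunk size are hypothetical; the key point is that only one chunk of lines is ever in memory.

```python
import os
import struct
import tempfile
from itertools import islice

def text_to_binary(src_path, dst_path, lines_per_chunk=100_000):
    """Convert a text file (one number per line) to flat little-endian
    float32 (SGL), reading only lines_per_chunk lines at a time so the
    whole data set is never in memory at once."""
    with open(src_path) as src, open(dst_path, "wb") as dst:
        while True:
            chunk = list(islice(src, lines_per_chunk))
            if not chunk:
                break
            values = [float(line) for line in chunk]
            dst.write(struct.pack("<%df" % len(values), *values))

# Demo on a tiny stand-in file; real data would be millions of lines.
src = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
src.write("\n".join(str(v) for v in [1.5, -2.25, 3.0]))
src.close()
dst_path = src.name + ".bin"
text_to_binary(src.name, dst_path, lines_per_chunk=2)
with open(dst_path, "rb") as f:
    round_trip = list(struct.unpack("<3f", f.read()))
os.remove(src.name)
os.remove(dst_path)
```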

    The advice for acquiring the data sounds good. Pulling the data in as chunks for parsing and placing it into preallocated arrays will keep things memory efficient.

    The problem is that 100 million points is always going to cause you issues if it is all in memory at once. You will also find that writing it to a graph requires a separate copy of the data, which is going to cause issues again.

    I think you are going to have to buffer to disk to achieve this. You can do it with a database, but I would be just as tempted to write it to a binary file, since you have a simple array. You can then easily and very efficiently access specific sets of elements from the binary file (you cannot do this easily with a text file). For the graph, you are probably going to have to decimate the data into it and then allow people to load more detail for a specific area of interest, to minimise the data in memory at any given time.
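The random access that makes a flat binary file attractive is just a seek: because every element has a fixed size, element i starts at byte i * element_size. A minimal Python illustration (the LabVIEW equivalent is Set File Position plus Read from Binary File; file name and window are made up for the demo):

```python
import os
import struct
import tempfile

ELEMENT_SIZE = 4  # float32 (SGL): 4 bytes per element

def read_elements(path, start, count):
    """Read count consecutive float32 values beginning at element
    index start, without touching the rest of the file."""
    with open(path, "rb") as f:
        f.seek(start * ELEMENT_SIZE)   # jump straight to the element
        data = f.read(count * ELEMENT_SIZE)
    return list(struct.unpack("<%df" % count, data))

# Demo on a small file standing in for the 100M-point data set.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(struct.pack("<8f", *[float(i) for i in range(8)]))
tmp.close()
window = read_elements(tmp.name, 3, 4)  # elements 3..6 only
os.remove(tmp.name)
```

This is exactly what a text file cannot give you: with variable-length lines there is no way to compute where element i starts without scanning from the beginning.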

    I have been having a look at this but can find no specifics on what it reports; I suspect it is the same as the IDE: internal errors in the RTE.

    When you build the installer in LabVIEW you should have the option to include it or not, but obviously you cannot in this case. If it is a concern, though, it is easy to disable with an INI token or by just disabling the service, as described in http://digital.ni.com/public.nsf/allkb/368F508D3FAE0242862578D400770616?OpenDocument

  15. Hi,

    The short answer is no. LabVIEW for Linux is distributed as a binary, so you won't be able to retarget it to other hardware. LabVIEW RT for custom targets supports only the x86 architecture (and then only specific chipsets).

    There is the LabVIEW for ARM module, but it requires programming through the Keil uVision tool chain, which I suspect you can't use (or it defeats the object of a board such as the gumstix), so I doubt it will give the experience you are hoping for. I would love to see this change, but that is the situation as it stands.

  16. Small disclaimer, I'm an engineer and not a computer scientist so I may be misunderstanding what you mean but here is my understanding:

    The model of computation is how the software executes. In 'G' this is the dataflow paradigm. I would suggest that OOP or actor-oriented programming sits at a higher level than that: it is a means of design rather than execution, so we can use OOP with dataflow or other paradigms.

    I am intrigued though about what principles you think would be useful, can you suggest any sites that discuss these?

  17. Hi Daklu,

    To answer your questions in more of an order of execution:

    Open FPGA VI Reference

    This VI can either connect to an existing FPGA VI, if one is already running, or download the bitfile it is linked to if not. Whichever linking mechanism you use, the runtime behaviour is the same. Linking to a bitfile will always work, but linking to a build specification or VI will query the project at edit time for the correct bitfile to use (or break the VI if it still requires compilation).

    I have not had a use for the other Run When Loaded options yet! I always stick to the open reference (which I think takes precedence when you run your RT VI anyway; the others must be something edit-time related, I think). In the Open Reference VI, if Run When Loaded is selected, the FPGA VI starts immediately. If it is unselected, the VI is not started until you manually start it using an invoke node; this can be used to set initial register values before the code starts. If the VI is already running through some other means, these functions will return a warning at run time.

    Close Reference

    Your problem in 2 is probably related to the Close FPGA VI Reference function. If you right-click it, you have the option to close, or (by default) close and reset. The default means the FPGA VI is reset (read: aborted, in standard LV speak) when you close the reference. If you want it to continue running, change this to just close.

    FPGA Image Deployment

    If you want the FPGA to run continuously, independent of any RT code, you either need a piece of RT code that deploys it initially by opening a reference and closing it without the reset, or you can flash the VI to the FPGA using the RIO Device Setup application, which will be in your Start menu. Flashing will even cause the VI to persist over power cycles.

    FPGA Top Level Re-entrancy

    Any FPGA VI is re-entrant by default; this makes the most sense most of the time on FPGA. For the top-level VI, though, it makes no difference, as you can only ever run one top-level VI on the FPGA at a time. Since it is still just a VI, re-entrancy would only come into play if you wanted to call it as a subVI.

    I hope this helps clarify a few points and I think covers your questions.

    Cheers,

    James
