Everything posted by JamesMc86

  1. It's also worth pointing out that they are high impedance the rest of the time, not the opposite.
  2. I would take a look at using images from the RAD (formerly RTAD) tool at http://zone.ni.com/devzone/cda/epd/p/id/5986, which allows you to create fully working images of the controllers and guarantees that all your versions match. It is also fully open source, so you can customise it to make it more specific to your purpose. Failing that, you can use the System Configuration API to install components to an RT system if you really want to deploy individual components.
  3. The implication, though, is that LabVIEW manages the implicit references but you manage the Controls[] references (by closing them or a parent).
  4. Do you need an FFT that large? The reason I ask is that the figures you discuss are very large for an FFT; normally, for a long time period, we break the data down into smaller chunks to FFT so we can see things change over time across the data set (see the chunked-FFT sketch after this post). I would avoid the Express VI: on these data sizes you need to avoid any data conversions, which the Express VI will cause. Between the other two I'm not sure of the different advantages. If you are doing sound and vibration type analysis then I would use that toolkit, as the results should easily feed into the other functions. To avoid the licensing of the toolkit, though, you could use the built-in function. There is another option, but it is another toolkit: it has some high-performance functions that perform the FFT on the GPU or multicore-optimised, which can improve the performance if that becomes necessary (it can also perform the FFT on SGL data as opposed to DBL).
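
     A minimal sketch of the chunking idea in Python/NumPy (the original discussion is about LabVIEW block diagrams, so the function name, chunk size and random data below are purely illustrative): split the long record into fixed-size blocks and FFT each block, so the spectrum can be watched changing across the data set instead of computing one enormous FFT.

     import numpy as np

     def chunked_fft(samples, chunk_size=65536):
         """FFT a long record one chunk at a time and return one
         magnitude spectrum per chunk (a spectrogram-style result)."""
         n_chunks = len(samples) // chunk_size
         spectra = np.empty((n_chunks, chunk_size // 2 + 1))
         for i in range(n_chunks):
             chunk = samples[i * chunk_size:(i + 1) * chunk_size]
             spectra[i] = np.abs(np.fft.rfft(chunk))
         return spectra

     # Example: 10 M single-precision points processed 64k at a time.
     data = np.random.rand(10_000_000).astype(np.float32)
     print(chunked_fft(data).shape)
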
  5. Even having read the book, #lifeofpi is amazing!

  6. I honestly am not sure if it will be possible with that number of data points. Here are some tips that may get the code to run, but even then you will find it will probably become very sluggish, as LabVIEW has to process 100M points every time it has to redraw the graph; even if you don't manipulate the data, LabVIEW has to decimate it, as the graph only has 100-1000 pixels it can use to plot the data.
     1. Loading from a binary file is better than text, because text has to be converted, meaning two copies of the data. If you have text, load it a section at a time into a preallocated array (you will have to be very careful about allocations); see the sketch after this post.
     2. Use SGL representation. The default in LabVIEW is normally DBL for floating point, but single only uses 4 bytes per point.
     3. By default on a 32-bit OS LabVIEW has 2GB of virtual memory it can use (hence the problems: in SGL format each copy of the data uses 20% of this). If you are on a 32-bit OS, enable the 3GB flag so it can use 3GB instead (there is a KB on the NI site for this), or moving to a 64-bit OS with 32-bit LabVIEW will give it 4GB. The ultimate would be to use 64-bit LabVIEW, but you tend to hit limitations of supported toolkits, so I tend to suggest that only as a last resort when the memory sizes cannot be avoided through programming.
     On top of these you just have to be very careful that any data manipulation you do does not require a data copy. That is how you try and avoid running out of memory, but I would still suggest trying some of the other methods that Shaun and I have suggested. Even if you can get this to run, the programming will be a little easier, but the program is going to have poor performance with that much data and will always be on the brink; at any point you could add some feature which needs more memory and you are back to square one.
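
     A sketch of tip 1 in Python/NumPy, as an illustration of the same idea in LabVIEW (the file name, layout of one value per line, and chunk size are assumptions for the example): preallocate the full single-precision array once, then convert the text a manageable slice at a time so only a small text buffer plus the preallocated array are in memory together.

     import numpy as np
     from itertools import islice

     FILENAME = "huge_capture.txt"      # hypothetical text file, one value per line
     TOTAL_POINTS = 100_000_000
     CHUNK_LINES = 1_000_000

     # Preallocate once, in single precision (4 bytes per point instead of 8).
     data = np.empty(TOTAL_POINTS, dtype=np.float32)

     written = 0
     with open(FILENAME) as f:
         while written < TOTAL_POINTS:
             lines = list(islice(f, CHUNK_LINES))   # one slice of text at a time
             if not lines:
                 break
             chunk = np.loadtxt(lines, dtype=np.float32)
             data[written:written + chunk.size] = chunk
             written += chunk.size
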
  7. Just did a quick test, but it seems much slower! Attached is the MathScript node if anyone can spot whether I have done something wrong. Edit: some changes where it needs to be vector functions instead of matrix functions (a '.' in front of the operator) brought it to 12x slower. I will have another look later to see if there is anything else I missed, but the primitives are still looking pretty good. evalSugenoFnodeDBL.vi
  8. Do you have access to the MathScript node? It would be interesting to know how that performs by comparison (I may try it myself when I get time). I believe it still falls a little short of pure G implementations, but as the flagship textual math node now, I believe it would be much closer than the formula node.
  9. To decimate, loop over single values by an incremental value; or, for a proper display, you still need to load a whole chunk and use an sk filter or similar to display correctly (see the min/max decimation sketch after this post). If you just want the max or min in a section, that's where SQLite works nicely, but there is a single function to get the min and max of an array anyway. I've written an example of the sort of thing you need to do (but not from file) at https://decibel.ni.com/content/docs/DOC-24017. The advantage of any of these methods is that you don't have to load the whole file, thus removing the memory issue; you just load the section you need. That said, all these methods depend on you not loading the whole data set at once: the fundamental issue is having the whole data set in memory at once.
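
     An illustrative min/max decimation sketch in Python/NumPy (the bin count and random data are assumptions): reduce each block to one (min, max) pair per display bin so a plot drawn with roughly 1000 pixels still shows the true envelope of the data.

     import numpy as np

     def minmax_decimate(block, n_bins):
         """Reduce one block of samples to n_bins (min, max) pairs
         so the display keeps the peaks without plotting every point."""
         bins = np.array_split(block, n_bins)
         return np.array([(b.min(), b.max()) for b in bins])

     # Example: decimate a 1 M point block down to 500 pairs for display.
     block = np.random.randn(1_000_000).astype(np.float32)
     print(minmax_decimate(block, 500).shape)   # (500, 2)
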
  10. If you had complex record types I would agree, but this is just straight numeric data. A binary file is not that hard to work with and gives high-performance random access with a smaller footprint than a database, because it doesn't have all the extra functionality we are not using and returns the data directly in the correct type, with no conversion necessary, which is going to hit you on large data sets (and stress your memory more!); see the random-access read sketch after this post. TDMS is maybe a better option again for having an easier API, but should give performance similar to the binary file. I believe TDMS and HDF5 should give similar performance, as they are both binary formats, but I have not worked with HDF5 directly myself. For the conversion you are probably going to have to load the existing file in pieces and write them back to whatever other format you go with. The hard thing is knowing where the chunks are, as potentially (depending on your format) each entry could be a different size. There is a read-multiple-rows option on the built-in read from text file, which is probably the best way to break it down (right-click >> Read Lines on Read Text File).
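
     A minimal Python sketch of the random-access idea (the file name and indices are hypothetical): because each SGL value is a fixed 4 bytes, any section of a flat binary file can be read directly with a seek, without loading the rest of the file.

     import numpy as np

     def read_range_sgl(path, start_index, count):
         """Read `count` single-precision values starting at `start_index`
         straight from a flat binary file."""
         with open(path, "rb") as f:
             f.seek(start_index * 4)            # 4 bytes per SGL value
             return np.fromfile(f, dtype=np.float32, count=count)

     # Example (illustrative): pull 10k points starting at point 50 M.
     # section = read_range_sgl("capture.bin", 50_000_000, 10_000)
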
  11. The advice for acquiring the data sounds good. Pulling the data in as chunks for parsing and placing it into preallocated arrays will keep things memory efficient. The problem is that 100 million points is always going to cause you issues if you have it all in memory at once. You will also find that if you try to write this to a graph it requires a separate copy of the data, so this is going to cause issues again. I think you are going to have to buffer to disk to achieve this. You can do it as a database, but I would be just as tempted to put it straight into a binary file when you have a simple array (see the streaming-to-disk sketch after this post). You can then easily access specific sets of elements from the binary file (you cannot do this easily with a text file) very efficiently. For the graph you are going to have to determine the best way to deal with this; you are probably going to have to decimate the data into the graph and then allow people to load more detail of the specific area of interest, to minimise the data in memory at any given time.
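
     A sketch of the write side in Python/NumPy (the output path, chunk count and random data are stand-ins for the real acquisition code): stream each acquired chunk straight to a flat binary file so the full 100 M point record never has to sit in memory at once, and can later be read back in sections as in the sketch above.

     import numpy as np

     OUT_FILE = "capture.bin"            # hypothetical output path
     N_CHUNKS, CHUNK_SIZE = 1000, 100_000

     with open(OUT_FILE, "wb") as f:
         for _ in range(N_CHUNKS):
             # Stand-in for one block from the real acquisition/parsing code.
             chunk = np.random.randn(CHUNK_SIZE).astype(np.float32)
             chunk.tofile(f)             # append the chunk and move on
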
  12. Hi All, there was some talk at the BBQ of some revamp work on the LabVIEW Wiki, and I want to volunteer my services. What I wanted to understand is what we should prioritise: are we just trying to repeat the LabVIEW help, or keep tips and tricks, or some combination of the two? Either way, I might try and start on some of the stubs. Cheers, James
  13. Do you see the same thing if you just put a linear sequence of numbers into the sine primitive? I can't find any known issues that describe this behavior, so I would suggest calling your NI branch and reporting it; we should at least get it documented even if it has been fixed in the latest versions.
  14. Ah, I probably should have clicked through another myself! Pleased it has the info though.
  15. I have been having a look at this but can find no specifics on what it reports; I suspect it is the same as the IDE, i.e. internal errors reported to the RTE. When you build the installer in LabVIEW you should have the option to include it or not, but obviously you cannot in this case. If it is a concern, though, it is easy to disable with an ini token or just by disabling the service as described in http://digital.ni.com/public.nsf/allkb/368F508D3FAE0242862578D400770616?OpenDocument
  16. Hi, the short answer is no. LabVIEW for Linux is distributed as a binary, so you won't be able to target Linux on anything other than x86. LabVIEW RT for custom targets only supports the x86 architecture (and then only specific chipsets). There is the LabVIEW for ARM module, but it requires programming through the Keil uVision tool chain, which I suspect you can't do (or it defeats the object of a board such as the Gumstix), so I doubt it will give the experience you are hoping for. I would love to see this change, but it is the situation as it stands.
  17. Small disclaimer: I'm an engineer and not a computer scientist, so I may be misunderstanding what you mean, but here is my understanding. The model of computation is how the software executes; in 'G' this is the dataflow paradigm. I would suggest that OOP or actor-oriented programming sits at a higher level than that: it is a means of design rather than execution, and so we can use OOP with dataflow or other paradigms. I am intrigued, though, about what principles you think would be useful; can you suggest any sites that discuss these?
  18. Hi Daklu, to answer your questions in more of an order of execution:
     Open FPGA VI Reference: This VI can either connect to an existing running FPGA VI, if one is already running, or download the bitfile it is linked to if it is not. Whichever linking mechanism you use, the runtime behaviour is the same. Linking to a bitfile will always work, but linking to a build specification or VI will query the project at edit time for which bitfile is the correct one to use (or break the VI if it still requires compilation). I have not had a use for the other options for Run When Loaded yet; I always stick to the open reference (which I think is what takes precedence when you run your RT VI anyway, the others must be something edit-time related I think). In the Open Reference VI, if Run When Loaded is selected the FPGA VI will start immediately; if unselected, it is not started until you manually start it using an invoke node. This can be used to set initial register values before the code starts. If the VI is already running through some other means, these functions will return a warning at run time.
     Close Reference: Your problem in 2 is probably related to the Close FPGA VI Reference. If you right-click you have the option of close or, by default, close and reset. This means the FPGA VI is reset (read: aborted, in standard LV speak) when we close the reference. If you want it to continue, you should change this to just close.
     FPGA Image Deployment: If you want the FPGA to run continuously, independent of any RT code, you either need a piece of RT code that you use to deploy it initially by opening a reference and closing it without the reset, or you can actually flash the VI to the FPGA using the RIO Device Setup application, which will be in your start menu. This will even cause the VI to persist over power cycles as well.
     FPGA Top Level Re-entrancy: Any FPGA VI is re-entrant by default; this makes the most sense most of the time on FPGA. For the top-level VI, though, it will make no difference, as you can only ever run one top-level VI on the FPGA at a time. As it is a VI, this only applies if you wanted to call it as a subVI.
     I hope this helps clarify a few points and I think covers your questions. Cheers, James
  19. You appear to be able to use a global variable in the same way in a class; it isn't created directly under the class, but you can add it and set its scope to private.
  20. Thanks for all your suggestions. I think I will probably go with the OOP mechanism, for the challenge as much as anything! It does look like the DOM parser could be used in this way (I like the idea of SQLite as well; I'm going to be playing with it in another part of the project anyway, and I think it can just operate in memory), but I think doing it in OOP will keep it more reusable and portable for the future. EasyXML is one I have heard of, but in this case XML is only a small part of what I need; I also just want an easy way to deal with a tree structure within the application.
  21. It's actually not that one, but you get three or four errors, the last of which mentions bool. I have seen similar errors before: there is a bug in 2010 where, if you have FPGA nodes and remote debug an RTEXE, these get thrown, and I know from experimenting with that that each message will correspond to one node causing the issue (in that case it corresponded to the number of read/write nodes).
  22. Hi, I am working on something that I was hoping someone might have some experience with. I am working on an application which needs a hierarchical data structure in a couple of places, the main one being a representation of a file structure (not one that actually exists on disk), but also a couple of others that will end up being generated XML files. I am torn between two key options (see the tree sketch after this post):
     1. Implement it in a pure LabVIEW method (there is an example I would start from on the NI Community). I would be tempted to have a couple of options: one that is a pure linked-list style implementation, and one that also uses a variant dictionary to track the URLs of nodes to make a URL-based lookup a constant-time function (but which will slow modifications to the existing tree).
     2. Use the Microsoft DOM interface (.NET based, I think, in LabVIEW, but it could be ActiveX). Whilst this is designed to be used for files, I think you can modify it purely in memory, and it would mean my file-writing routines are done.
     I am not fussed about portability (this will only run on Windows) but I am concerned about the performance; while my trees will be quite small, it may not perform well in memory. Or is the performance good and not worth the effort of the first implementation? Has anyone come across a similar problem and have any advice on these approaches, or alternative approaches?
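
     A small Python sketch of option 1, purely to illustrate the design being considered (the class names, the "/" path convention and the example data are assumptions, and the real implementation would be LabVIEW classes or a variant dictionary): a plain tree of nodes plus a dictionary keyed by path, so lookup by URL is constant time at the cost of keeping the dictionary up to date when the tree is modified.

     class TreeNode:
         def __init__(self, name):
             self.name = name
             self.children = []

     class Tree:
         def __init__(self):
             self.root = TreeNode("")
             self.by_path = {"/": self.root}    # path -> node lookup table

         def add(self, parent_path, name):
             parent = self.by_path[parent_path]              # O(1) lookup
             node = TreeNode(name)
             parent.children.append(node)
             path = parent_path.rstrip("/") + "/" + name
             self.by_path[path] = node                       # keep table in sync
             return path

     # Example usage
     t = Tree()
     t.add("/", "config")
     t.add("/config", "channels")
     print(t.by_path["/config/channels"].name)   # channels
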
  23. I'm also kind of on board. Certainly for the first level there seems to be little benefit and much complexity added; another way you could read this stage is that the type here is a property of the first message rather than a unique class of message. For the second stage it seems there could be more to gain, but not huge amounts. You would add some scalability at your read-message stage, but only in that area, for extra complexity that propagates throughout the code. (I would get frustrated if I had to dig through two layers of delegates each time I wanted to read the code that was actually running.) My 2c: I think the suggestions made look good if you wanted to dispatch it, but I think (with the limited scope of what you have described here) it could be unnecessarily complex.
  24. The last error referring to boolean data gives away that this is related to the conditional terminals on the outputs. Remove these and it works fine; going back to the old ways on these may solve the issue in this case (and give you better performance: the current behaviour is equivalent to a Build Array in a case structure).
  25. The example is the sample project for the actor framework. Open the new create project wizard from the splash screen and load either the actor framework template project or the evaporative cooler project (an example of how to use the template) and run these up. You will see the splash screen that AQ was referring to.