Everything posted by GregSands

  1. Ernest Hemingway provides a one-line answer to the question in the title: "Gradually, then suddenly."
  2. Two suggestions if you haven't tried them already: the Multicore Analysis and Sparse Matrix Toolkit, and the GPU Toolkit.
  3. Does it help to re-ask the question as "where should LabVIEW have a future?" It is not difficult to name a number of capabilities (some already stated here) that are extremely useful to anyone collecting or analyzing data, and that are either unique to LabVIEW or much simpler in it. They're often taken for granted, and we forget how significant they are and how much power they unlock. For example (and others can add more):
     • FPGA - much easier than any text-based FPGA programming, and so powerful to have deterministic computational access to the raw data stream
     • Machine vision - especially combined with a card like the 1473R, though it's falling behind without CoaXPress
     • Units - yes, no-one uses them, but they can extend strict programming to validation of correct algorithm implementation
     • Parallel and multi-threaded programming - is there any language as simple for constructing parallel code? Not to mention natural array computations
     • Real-time programming
     • Data-flow - a coherent way of treating data as the main object of interest; fundamental, yet a near-unique programming paradigm with many advantages
     And all integrated into a single programming environment where all the compilation and optimization is handled under the hood (with almost enough ability to tweak that). Unfortunately NI appear to be backing away from many of these strengths, and other features have been vastly overtaken (image processing has hardly been developed in the last 10 years; GUI design got sidetracked into NXG, unfortunately). But the combination of low-level control in a very high-level language seems far too powerful and useful to have no future at all.
  4. Thanks - I hadn't even thought of using multiple queues in this way, but that makes a lot of sense. I should be able to structure it like this fairly easily.
  5. The Parallel For Loop is perfect for parallel processing of an input array, and reassembling the results in the correct order, however this only works if the array is available before the loop starts. There is no equivalent "Parallel While Loop" which might process a data stream - so what is the best architecture for doing this?
     In my case, I'm streaming image data from a camera via FPGA, acquiring 1MB every ~5ms - call this a "chunk" of data - and I know I will acquire N chunks (N could be 1000 or more). I then want to process (compress) this data before writing to disk. The compression time varies, but is longer than the acquisition time. So I'd like to have a group of tasks which will each take chunks and return the results - however it's no longer guaranteed that the results are in the same order, so there's a bit of housekeeping to handle that.
     I have a workable architecture using channels, but I'd be interested in any better options. Easiest to explain with a simplified code which mimics the real program: it requires the processing to use a Messenger channel (i.e. a Queue) because a Stream channel cannot work in a Parallel For Loop, but this doesn't maintain order. And the reordering is a little messy - perhaps it could be tidied using Maps, but I don't have 2019 at the moment.
     The full image is too large to keep in memory (I'm restricted to 32-bit because the acquisition is from an FPGA card), so I need to process and write the data as it becomes available. I've considered writing a separate file for each chunk, but writing millions of small files a day is not particularly efficient or sustainable.
     Is there a better approach? Have I missed something? I feel like this must be a solved problem, but I haven't come across an equivalent example. Could there be a Parallel Stream Channel which maintains ordering, or a Parallel While Loop which handles a defined number of tasks? Thanks. Greg
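For comparison, the pattern described above - a pool of workers compressing chunks while a single writer keeps the original order - has a direct analogue in text languages whose executors preserve submission order. A minimal Python sketch (not LabVIEW code; the chunk generator and zlib stand in for the FPGA acquisition and the real compressor):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def acquire_chunks(n, size=1024):
    # Stand-in for the FPGA acquisition: yields n chunks of raw bytes.
    for i in range(n):
        yield bytes([i % 256]) * size

def compress(chunk):
    # Stand-in for the (slower, variable-time) per-chunk compression.
    return zlib.compress(chunk)

def stream_compress(n_chunks, workers=4):
    results = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Executor.map runs the tasks in parallel but yields results
        # in submission order, so the "writer" loop below never has
        # to reorder chunks itself.
        for compressed in pool.map(compress, acquire_chunks(n_chunks)):
            results.append(compressed)  # in a real program: write to disk
    return results

compressed = stream_compress(8)
```

The executor internally does the same housekeeping the post describes (buffering out-of-order completions), which is why a built-in "ordered parallel stream" primitive is so convenient.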
  6. Just to say that I have also had the same issue for quite some time. Several months ago I sent a message using the Contact Us link at the bottom of the website, but have not had a response.
  7. It appears that Arrays of Enums are not handled properly, or at least not in the way I expect or would like! See the attached image for details - using JSONtext on LV 2018.
  8. I've not used the PCIe-1477, but have been using the earlier PCIe-1473 - different FPGA chip but I presume the coding is similar. If you want to code the FPGA directly, rather than using the IMAQ routines, have a look at examples such as this one, which also show how to write to/from the CameraLink serial lines. However, as @Antoine Chalons says, you do need to know the specific commands for your camera.
  9. Just to add to that, the bolded titles remain even though there were no unread posts showing in "Unread Content". However I just clicked "Mark site read", and the bold has disappeared.
  10. Using Firefox/Windows I also have several forums left bolded after reading all posts.
  11. You might also try right-clicking the cluster, and looking at Advanced/Show Hidden Element to see whether there might be controls in the cluster that are hidden. But ensegre's suggestion of copying across to a new cluster is probably easiest.
  12. So this gets a little more interesting with the output type of the DDS:
     1. Following directly with a VIM causes the output to back-propagate from the VIM's default input type.
     2. This does not happen if Types Must Match is used directly, even though this is essentially the contents of the VIM.
     3. Wrapping a sequence around either the DDS or the VIM causes the type to be defined correctly.
     4. Putting the DDS inside its own VIM also solves the problem, but only if there is also a sequence wrapping the DDS inside - if not, then the output type from the DDS VIM is always its own default output type.
     In any case, here's the Default Element VIM (saved for 2012) for any who might use it. Default Element.vim
  13. Oh, very nice! I'd not wanted to use Reshape Array because of the memory re-allocation, but I didn't think of using it in a Diagram Disable Structure. If I ever meet you in person...
  14. Does anyone know of a way to create a single (default) element of an arbitrary-dimension array? I'm trying to create some Malleable VIs which have the same code for 1D-3D arrays, but have different code for floating-point vs integer arrays. A second possible use in Malleable VIs would be to Initialize a new array based on an input array. Any thoughts from anyone?
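In an array language the same two ideas - a single default element that is valid for any dimensionality, and initialising a new array from an input array - can be sketched as follows. This is a Python/NumPy illustration of the concept, not a LabVIEW equivalent:

```python
import numpy as np

def default_element(arr):
    # A zero-dimensional array of the input's dtype acts as the
    # "default element", regardless of how many dimensions arr has.
    return np.zeros((), dtype=arr.dtype)

def init_like(arr):
    # New array with the same shape and dtype, filled with the default.
    return np.zeros_like(arr)

d = default_element(np.ones((2, 3, 4), dtype=np.int32))
```

The key point is that the element type and the dimensionality are separable, which is exactly what a malleable VI needs when the same diagram must adapt to 1D-3D inputs.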
  15. Firstly, you are using a Formula Node, not a MathScript Node. But that will do what you need just fine. Secondly, look at the built-in help to explain the operators. Right-click on the Formula Node, then Help, and then look for the allowed operators. You'll see the ones you need, including >> (right shift), << (left shift), & (and), ^ (exclusive or), and | (or). Note that ^ is not "to the power of". That should make completing this fairly straightforward.
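The operators listed above behave the same way on integers in most C-like languages; a short Python sketch makes the distinction between ^ (exclusive or) and "to the power of" concrete:

```python
x = 0b1011  # 11

right  = x >> 1       # right shift:  0b0101 -> 5
left   = x << 2       # left shift:   0b101100 -> 44
both   = x & 0b0110   # and:          0b0010 -> 2
xor    = x ^ 0b0110   # exclusive or: 0b1101 -> 13
either = x | 0b0100   # or:           0b1111 -> 15

# "To the power of" is a separate operator (** in Python), not ^.
power = x ** 2        # 121
```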
  16. This probably won't help you, but you should be using HDF5 files - HDF5 can do exactly this. The H stands for Hierarchical, and it is quite straightforward to write data to multiple files and create a "master" file which transparently links to them. That works for writing as well as reading, so you can create the master file at the start and write data to it which will be stored in the separate files, or create it after writing the individual files. The HDF5 library handles all of the connections, and can be as simple as I've said, or far more complex if needed.
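To make the "master file with transparent links" idea concrete, here is a minimal sketch using the h5py bindings (Python; assumes the h5py package is installed, and the file names are purely illustrative):

```python
import os
import tempfile

import h5py
import numpy as np

# Work in a temporary directory so the sketch is self-contained.
os.chdir(tempfile.mkdtemp())

# Write two "chunk" files, each holding one dataset.
for i in range(2):
    with h5py.File(f"chunk{i}.h5", "w") as f:
        f["data"] = np.arange(5) + 10 * i

# The master file holds no data itself, only external links;
# reading "chunk0" through it transparently opens chunk0.h5.
with h5py.File("master.h5", "w") as m:
    for i in range(2):
        m[f"chunk{i}"] = h5py.ExternalLink(f"chunk{i}.h5", "/data")

with h5py.File("master.h5", "r") as m:
    total = int(m["chunk0"][...].sum() + m["chunk1"][...].sum())
```

The same external-link mechanism is available from the HDF5 C API (and hence from LabVIEW HDF5 wrappers), which is what makes the create-master-first or link-afterwards workflows both possible.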
  17. Not sure what this has to do with LabVIEW - perhaps you're better to ask on the Grafana website.
  18. I've started trying to use .vim files a little, and have a couple of questions: 1. It looks like a typedef doesn't match the same structure without a typedef - correct? I would have thought that if the internal data matches, it should be OK. 2. I get this interesting mismatch between an array and a sub-array: But if the connection is the other way around, I get this error: I would have hoped that either (or both) would be OK. I'm not in the beta, so I don't know if this has already changed, or is likely to?
  19. Sounds reasonable. Even NI only supports four versions for some of its modules (e.g. Vision Development Module), so I've finally jumped from 2012 to 2017. Is there a way to keep the current install available in VIPM if using LabVIEW <2013?
  20. My first thought - I wonder whether one of the 3D Graph (I find the old NI 3D Graphs a little easier than the rewrite) or 3D Picture controls might work - unfortunately it seems there might be a fair bit more capability hidden out of sight than can be easily accessed. At the very least you might be able to make a bar appear to have two different colours with an appropriate 3D Bar Plot, even if you don't get the gradient in between. 3D Picture controls are based on OpenGL, so theoretically it should be easy to do what you want there, but I've never had much success doing anything complex with 3D Pictures.
  21. Yes, that's what I've been doing, and it's ok, though suboptimal as you say. I'm looking forward to seeing the development of Malleable VIs - I've held off on trying to convert some of my XNodes to VIMs, but if things stay fairly stable, I'll have to give it a go.
  22. We've lost something useful in the "official release" of Malleable VIs (.vim files, aka VI Macros) in LabVIEW 2017. In previous versions, because VIMs were built around XNodes, you could right-click with the XNodeWizardMenu enabled to look at the Generated Code given a particular wiring. There's no such option in 2017, even with the appropriate LabVIEW.ini keys. Is there another ini key that provides similar functionality? I find it a useful check that the VIM is coded correctly. The closest is "Convert Instance VI to Standard VI", however that removes the VIM.
  23. Probably easiest to use the PlotImages.Front property, which allows you to create a picture with all the lines and text, and overlay it on the Intensity Graph. Also, if you look in the Classic Graph controls there is a Polar Plot control which contains some subVIs for drawing the polar diagram that might be a good starting point.
  24. One approach that is similar to, but a little more robust than, your original method is to use the Peak Detection VI, which fits a quadratic to the data and returns a fractional index. Here I've used it just to shift each waveform so that the peak is at zero, but you could use the shift information in different ways. Here's a noisy signal, and increasing the width of the fit seems to cope with it OK.
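The quadratic-fit-to-fractional-index idea above is the classic three-point parabolic interpolation; a minimal Python/NumPy sketch of the narrowest (width-3) case, shown here only to illustrate how a fractional peak index arises (the Peak Detection VI generalises this to wider fits):

```python
import numpy as np

def fractional_peak(y):
    # Fit a parabola through the discrete maximum and its two
    # neighbours; its vertex gives a sub-sample peak position.
    i = int(np.argmax(y))
    ym, y0, yp = y[i - 1], y[i], y[i + 1]
    delta = 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)
    return i + delta

# A parabola with a known peak at x = 3.3, sampled at integer x:
x = np.arange(8, dtype=float)
y = -(x - 3.3) ** 2
peak = fractional_peak(y)
```

For a true parabola the recovered index is exact; for noisy data, widening the fit window (as the post suggests) averages the noise out.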