Posts posted by JamesMc86

  1. I'm not aware of specific examples of code compiling on one but not the other. It depends on what causes the failure; on the same compiler it is possible for code to compile sometimes and fail at other times, depending on some of the optimisations, which are chosen randomly!

     

     That said, if you have an active service contract, NI is now giving free access to their cloud compile service, which I think runs on Linux servers, so it is easy to try this and see if it is any more successful. Most likely, however, you need to look at the reports that come out and identify whether the issue is size or timing (the two main culprits).

  2. Hi All,

     

     OK, so the title's a bit vague as I have yet to figure out the correct terminology for this. I'm working on an app with a couple of areas at high risk of change/versioning.

     

     In essence it comes from the fact that there is a central server and 100+ distributed nodes collecting data. As time goes on, features are added or changes are required on the nodes, and we need the server to be able to support the new version but also any older versions that are out there. Examples of what I mean are:

    • Communications protocols could change as the networks change and support improved methods.
    • File schemas almost certainly will change as firmware gets upgraded to capture different measurements or values.

     What I haven't managed to find yet is recommended design patterns for handling these changes. The obvious one is to take advantage of dynamic dispatch, although I have concerns about the two obvious ways I can see of doing this:

     1. You can create an abstraction layer and have every version be a child of this; however, we don't know which sections will change and could end up having to repeat code in each child, which leads to:
     2. v1 is the parent, v2 is a child of v1, v3 is a child of v2 and so on. This seems like the natural way, but I have seen some concerns over the performance of deep hierarchies, and I may end up still fudging sub-versions to avoid creating a new layer for smaller changes. (A rough sketch of both options follows below.)
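
     Purely to make the two options concrete, here is a minimal sketch in text form (Python rather than LabVIEW classes; the class and method names are made up for illustration):

     ```python
     from abc import ABC, abstractmethod

     # Option 1: a flat hierarchy under an abstract base. Every version
     # implements the full interface, so behaviour that did not change
     # ends up duplicated in each child.
     class FileSchema(ABC):
         @abstractmethod
         def parse_header(self, raw: bytes) -> dict: ...
         @abstractmethod
         def parse_records(self, raw: bytes) -> list: ...

     class SchemaV1(FileSchema):
         def parse_header(self, raw): return {"version": 1}
         def parse_records(self, raw): return []

     class SchemaV2Flat(FileSchema):
         def parse_header(self, raw): return {"version": 2}
         def parse_records(self, raw): return []   # repeated even though unchanged

     # Option 2: each version inherits from the previous one and only
     # overrides what actually changed, at the cost of a deepening hierarchy.
     class SchemaV2Chained(SchemaV1):
         def parse_header(self, raw):
             return {"version": 2}   # only the changed behaviour is overridden
     ```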

     I'm simply curious: are there any known good patterns out there? How have you managed similar problems in the past? (I'm certainly not going to be the first person to hit this!)

     

    Cheers,

    James

  3. The process here is that you only have one, deterministic, data copy that affects the acquisition

     

     This method may work well for you, but just note that a global variable is not deterministic. From the LabVIEW help:

     

    Use global variables to access and pass small amounts of data between VIs, such as from a time-critical VI to a lower priority VI. Global variables can share data smaller than 32-bits, such as scalar data, between VIs deterministically. However, global variables of larger data types are shared resources that you must use carefully in a time-critical VI. If you use a global variable of a data type larger than 32-bits to pass data out of a time-critical VI, you must ensure that a lower priority VI reads the data before the time-critical VI attempts to write to the global again.
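
     As a rough, non-LabVIEW illustration of the read-before-write handshake the help describes (a Python sketch with made-up names, not anything from the RT libraries):

     ```python
     import threading

     class SingleSlot:
         """The time-critical writer only overwrites the shared value once
         the lower-priority reader has taken the previous one."""
         def __init__(self):
             self._value = None
             self._consumed = threading.Event()
             self._consumed.set()   # nothing pending yet

         def write(self, value):
             if not self._consumed.is_set():
                 return False       # reader hasn't caught up; treat as an overflow
             self._value = value
             self._consumed.clear()
             return True

         def read(self):
             value = self._value
             self._consumed.set()   # slot is free for the writer again
             return value
     ```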
  4. Hi Alex,

     

     Sure, it was a new feature in 2012 I think. The DVR is created in the RIO driver and is really intended to be fired straight into the TDMS function. The help page is at http://zone.ni.com/reference/en-XX/help/371599J-01/lvfpgahost/fpga_method_fifo_acqread/ but I can't find much else. There is an example of it under Hardware Input Output>>FlexRIO>>High Throughput>>High Throughput Streaming.lvproj

     

     One thing I have never worked out is that everything refers to it as an "external" DVR, but I never found any documentation about what this distinction means. The one important point is that the DVR must be deleted before the driver can acquire more data.

     

    Cheers,

    James

  5. Hi Alex,

     

     I suspect that's not quite true, but there are a couple of shortcuts.

     

     As long as you are reading data into your application there is CPU intervention. There are a couple of techniques with TDMS where this doesn't happen; I don't think it is direct DMA, but it is lower level:

     1. With DAQmx logging, the data is logged in the DAQmx driver layer rather than in your application; it is also logged as raw data with scaling information, making it very fast.
     2. On FPGA targets you can read a DVR instead of the data itself, and there are corresponding TDMS write functions, which can mean better performance.

     One thing I would say for your situation either way is that TDMS files don't do well with channels streamed at different rates: the header portion keeps getting rewritten, causing the file to become fragmented, which means worse write performance and rapidly growing file sizes. It may be worth keeping them as separate files anyway.

     

    Cheers,

    James

  6. I've always avoided Ubuntu because the LabVIEW installers are RPM based, but Ubuntu uses Debian packages.

     

     It is theoretically possible; from memory I think there is a program called alien, or similar, that allows RPMs to be installed on Ubuntu, and I know someone tried LabVIEW with this somewhat successfully. Alternatively, any of the supported distributions should work (Red Hat, openSUSE, Scientific Linux), and other RPM-based distributions shouldn't have major issues. I believe I have had success in the past on Fedora, CentOS and Manjaro (Fedora and CentOS are closely related to Red Hat; Manjaro was luck, and had some font issues!).

  7. Hi,

     

    I'm working on a build server to be launched from Jenkins and a plugin to smooth the use of Jenkins with LabVIEW. I hope to post some results soon!

     

     One problem I am having: I wanted to distribute the build server as source code, as it enables additional features in the development environment over a built EXE, and it saves distributing multiple versions and creating a new distribution for every LabVIEW version.

     

     The problem with this is that when it tries to exit, unless it is running in the version it was created in (2011), it prompts for a save. I'm sure I saw an option somewhere to silently close LabVIEW without a save dialog but I cannot find it anywhere!

     

    Was I dreaming or is there something in Scripting/Super Secret Stuff which could do this?

     

    Cheers,

    Mac

  8. Hey Danny,

     As mentioned, the DataFinder uses DataPlugins to be able to understand file formats and schemas. Whilst CSV defines the format, you can still lay out the data in different ways internally, so it needs a DataPlugin to be able to understand your layout.

     I think the easiest way to create one would be to download a trial of DIAdem. This has a wizard for creating DataPlugins for text and CSV formats, and I believe you should be able to do this with the evaluation version and then continue to use the DataPlugin from LabVIEW (though I've not tested this).

    Cheers,

    James

  9. Instead of using the command line, you can use the "Open URL in Default Browser" VI in LabVIEW.

    You'll find this in the "Dialog and User interface"->"Help" palette (or in Quick drop).

     

     My new thing learned for the day; there's always something else I didn't know!

     

     I don't know the exact Windows ins and outs, but you need the cmd /c because System Exec is the equivalent of typing the command into the Run... dialog, NOT the command line. cmd /c is what causes it to execute as if from the command line.
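
     As a small illustration of the difference (a Python sketch assuming Windows, not LabVIEW code): "start" is a cmd.exe built-in rather than an executable on disk, so launching it directly fails much as it would from the Run... dialog, whereas prefixing cmd /c hands the line to the command interpreter.

     ```python
     import subprocess

     # Fails: "start" is a cmd.exe built-in, not a program on disk,
     # so there is nothing for the OS to launch directly.
     #   subprocess.run(["start", "http://lavag.org"])   # -> FileNotFoundError

     # Works: cmd /c runs the rest of the line through the command
     # interpreter, which is effectively what System Exec needs too.
     subprocess.run(["cmd", "/c", "start", "http://lavag.org"])
     ```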

  10. The problem is that a single location can only contain one version. If you want to control the software then you want to decide when the update happens; even if your software is in separate sub-folders you still have to repoint the system. If you have a separate copy for each project, you can just overwrite it with a newer version to upgrade it. This seems like it might be a good case for packed project libraries as well.

  11. I believe Mike B in the UK branch has been working with pipes as well; I can see if any of it is online. I'll point him to this thread to see if there is anything he can share.

     

    Out of interest, do these give you better bandwidth than using a localhost network adapter then?

  12. I don't believe there is one all-encompassing "better" answer to this.

     

     This was a question I asked AQ, and I often think back to his response. If your class API passes a reference then you are making the decision about the scope of access, i.e. which functions are "atomic". If you want to call several methods in sequence as one atomic operation, you can't, as each method will get and release the DVR. In this scenario keeping the object by value is better, as the developer using the API can decide whether to use it by reference or by value and can also decide which function calls are atomic and which are not.
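
     A rough text-based analogy of that trade-off (a Python sketch with a lock standing in for the DVR; the names are made up for illustration):

     ```python
     import threading

     class RefStyleCounter:
         """Reference-style API: each method acquires and releases the lock
         (the DVR) itself, so each call is atomic but a sequence of calls is not."""
         def __init__(self):
             self._lock = threading.Lock()
             self._value = 0

         def get(self):
             with self._lock:
                 return self._value

         def set(self, v):
             with self._lock:
                 self._value = v

     ref = RefStyleCounter()
     # Not atomic as a whole: another caller can run set() between our get() and set().
     ref.set(ref.get() + 1)

     # By-value data with the reference held by the caller: the caller decides
     # the scope of the atomic section and which calls sit inside it.
     lock, value = threading.Lock(), 0
     with lock:
         value = value + 1   # as many operations as needed inside one lock
     ```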

     

     That said, there are often cases where classes only make sense as reference-based. For example, I am working on a class for a file API. In this case I use DVRs in the API, as this is how most people would expect a file to work.

     

     Thinking about the response from AQ now, I wonder whether the whole property node would be atomic; that would appear to be a sensible implementation, although I have never tested it.

  13. I would take the point that the line is drawn where it is for one reason: so we can express that an AE is good and an FGV is bad. It is drawn to suit us rather than reflecting any actual difference in implementation.

     

     I think it might have been Nancy again who also said she saw regional dialects: developers from one area of the US described them as AEs, other areas as FGVs, and others as LV2-style globals. It just depends who teaches you! (Apologies if it was someone else who mentioned this.)
