Posts posted by JamesMc86
-
I think the NI toolkit has some functions like FFTs pre-wrapped, so you don't have to get into the C code for some standard operations.
-
I think I've been seeing a similar behavior: the built-in functions appear to leak a command object on an error, though I haven't dug deep enough to confirm this yet.
-
I've been using LabVIEW Task Manager - http://lavag.org/topic/18322-cr-labview-task-manager/
I'm not sure how useful it will be for spotting them, though. It's good if you need to get at one you know is running, but I think they will all remain in the "runsub" state if loaded.
-
- Popular Post
Yeah. The limiting factor is that although the event *sources* can be changed dynamically at run time, the linkage between the registration refnum and the event structure itself is static. You can't drop in a new event type at run time and say that when that event fires, some method or event frame should be called.
Compare that with an object approach, where dynamic dispatch allows complete abstraction: the lower-level loop and transport mechanism doesn't need to know anything about what it is ultimately routing.
It is possible to get around that by using event callbacks. They are intended for ActiveX and .NET support but seem to work with internal LabVIEW events without issue. I used them to set up dynamic binding for an MVC framework - http://www.wiresmithtech.com/mvc-in-labview-library/
In my case I'm not using OO to change the response, but you could make the callback a static parent method and then have a dynamic dispatch core VI inside (I don't think you could use dynamic dispatch directly).
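Since LabVIEW is graphical, here is a minimal Python sketch of the dynamic-binding idea described above: handlers are registered against event names at run time, so the routing loop needs no knowledge of what it dispatches. All class and function names here are illustrative, not part of any LabVIEW or framework API.

```python
# Hypothetical callback registry illustrating run-time event binding:
# the router routes events to whatever happens to be bound when they fire,
# unlike a static event-structure linkage fixed at edit time.

class EventRouter:
    def __init__(self):
        self._handlers = {}  # event name -> list of callbacks

    def register(self, event, callback):
        """Bind a callback to an event dynamically, analogous to
        registering a callback VI against an event refnum."""
        self._handlers.setdefault(event, []).append(callback)

    def fire(self, event, payload=None):
        """Route an event to every handler currently bound to it."""
        for cb in self._handlers.get(event, []):
            cb(payload)

router = EventRouter()
log = []
router.register("value-changed", lambda v: log.append(v))
router.fire("value-changed", 42)
print(log)  # [42]
```

The router itself never changes when new event types or handlers are added, which is the abstraction the post argues the static event structure cannot give you.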
- 3
-
Hi All,
OK, so the title's a bit vague as I have yet to figure out the correct terminology for this. I'm working on an app with a couple of areas at a high risk of change/versioning.
In essence it comes from the fact that there is a central server and 100+ distributed nodes collecting data. As time goes on, features are added or changes are required on the nodes and we need the servers to be able to support the new version but also any older versions that are out there. Examples of what I mean are:
- Communications protocols could change as the networks change and support improved methods.
- File schemas almost certainly will change as firmware gets upgraded to capture different measurements or values.
What I haven't managed to find yet is recommended design patterns for handling these changes. The obvious one is to take advantage of dynamic dispatching although I have concerns about the two obvious methods I can see of doing this:
- You can create an abstraction layer and have all versions be children of it; however, we don't know which sections will change, so we could end up having to repeat code in each child, which leads to:
- v1 is the parent, v2 is a child of v1, v3 is a child of v2, and so on. This seems like the natural way, but I have seen some concerns over the performance of deep hierarchies, and I may still end up fudging sub-versions to avoid creating a new layer for smaller changes.
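The second option can be sketched like this (in Python, since LabVIEW is graphical): each version inherits from the previous one and overrides only what actually changed, so unchanged behaviour is never repeated. The class and method names are invented for illustration.

```python
# Hypothetical version-chain hierarchy: v2 inherits v1 and overrides
# only the part of the protocol that changed between versions.

class ProtocolV1:
    def parse_header(self, raw):
        return {"version": 1, "length": len(raw)}

    def parse_body(self, raw):
        return raw.decode("ascii")

class ProtocolV2(ProtocolV1):
    # v2 changed only the body encoding; header parsing is inherited.
    def parse_body(self, raw):
        return raw.decode("utf-8")

def handler_for(version):
    """The server selects the handler matching the node's version."""
    return {1: ProtocolV1, 2: ProtocolV2}[version]()

msg = handler_for(2).parse_body("héllo".encode("utf-8"))
print(msg)  # héllo
```

The trade-off the post describes is visible here: the chain keeps each version's diff small, at the cost of a hierarchy that grows one layer per version.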
I'm simply curious, are there any known good patterns out there? How have you managed with similar problems in the past? (I'm certainly not going to be the first person!)
Cheers,
James
-
Hey Lewis,
Bad luck! My functionality score was pretty low in the end as well, which meant I only just scraped through!
I didn't start from a sample project, but I did generate one and nick a couple of useful subVIs; I think I used a watchdog from one to try and save time.
Cheers,
James
-
Congrats Neil! I also got mine through and passed, just!
-
The process here is that you only have one, deterministic, data copy that affects the acquisition
This method may work well for you, but just note that a global variable is not deterministic. From the LabVIEW help:
Use global variables to access and pass small amounts of data between VIs, such as from a time-critical VI to a lower priority VI. Global variables can share data smaller than 32-bits, such as scalar data, between VIs deterministically. However, global variables of larger data types are shared resources that you must use carefully in a time-critical VI. If you use a global variable of a data type larger than 32-bits to pass data out of a time-critical VI, you must ensure that a lower priority VI reads the data before the time-critical VI attempts to write to the global again.
-
Hi Alex,
Sure, it was a new feature in 2012 I think. The DVR is created in the RIO driver and is really intended to be fed straight into the TDMS functions. The help page is at http://zone.ni.com/reference/en-XX/help/371599J-01/lvfpgahost/fpga_method_fifo_acqread/ but I can't find much else. There is an example of it under Hardware Input Output>>FlexRIO>>High Throughput>>High Throughput Streaming.lvproj
One thing I have never worked out is that everything refers to it as an "external" DVR, but I never found any documentation about what that distinction means. The one important note is that it must be deleted before the driver can acquire more data; perhaps that is the difference.
Cheers,
James
- 1
-
Hi Alex,
I suspect it's not quite true but there are a couple of shortcuts.
As long as you are reading data into your application, there is CPU intervention. There are a couple of techniques with TDMS where this doesn't happen; I don't think it is direct DMA, but it is lower level:
- Using DAQmx logging, the data is logged in the DAQmx driver layer rather than in your application; it is also logged as raw data with scaling information, making it very fast.
- You can read a DVR from FPGA targets instead of the data itself, and there are corresponding TDMS write functions that accept it, which can mean better performance.
One thing I would say for your situation either way is that TDMS files don't cope well with being streamed at different rates: the header portion keeps being rewritten, causing the file to become fragmented, which means worse write performance and rapidly growing file sizes. It may be worth keeping them as separate files anyway.
Cheers,
James
-
I've always avoided Ubuntu because the LabVIEW installer uses RPM-based packages, but Ubuntu uses Debian packages.
It is theoretically possible: from memory there is a program called Alien, or similar, that allows RPMs to be installed on Ubuntu, and I know someone tried LabVIEW with this somewhat successfully. Alternatively, any of the supported distributions (Red Hat, openSUSE, Scientific Linux) should work, and other RPM-based distributions shouldn't have major issues. I believe I have had success in the past on Fedora, CentOS and Manjaro (Fedora and CentOS are closely related to Red Hat; Manjaro was luck, and had some font issues!)
-
Hi,
I'm working on a build server to be launched from Jenkins and a plugin to smooth the use of Jenkins with LabVIEW. I hope to post some results soon!
One problem I am having: I wanted to distribute the build server as source code, as it will enable additional features in the development environment over a built EXE, and it saves me from distributing multiple versions and having to create a new distribution for every LabVIEW version.
The problem with this is that when it tries to exit, unless it is running in the version it was created in (2011), it prompts for a save. I'm sure I saw an option somewhere to silently close LabVIEW without a save dialog, but I cannot find it anywhere!
Was I dreaming or is there something in Scripting/Super Secret Stuff which could do this?
Cheers,
Mac
- 1
-
Hi,
I think you're on the right lines. In reality you are always going to use a URL to access it anyway. I think this may also be required to make it easy to have hgweb automatically serve the different repositories as you create them, since you can use a wildcard to describe where all of the repositories are.
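For reference, the wildcard approach mentioned above might look something like this in an hgweb config file (the directory path here is an example only; adjust it to your layout):

```ini
; hgweb.config - /srv/hg/repos is a hypothetical location
[paths]
; Serve every repository found under this directory at the site root;
; newly created repositories are picked up automatically.
/ = /srv/hg/repos/*
```

With this in place, each repository is reachable at its own URL under the server root without editing the config again.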
-
Welcome to LAVA! I come from the perspective that a DVR is a communication method in LabVIEW, and therefore I would avoid designing my classes with DVRs unless it is implicit in the class's operation, for example if the class represents a hardware session.
-
I'm not as familiar with C#, but LabVIEW does have native support for .NET controls. If you can compile it into one of these it should be possible, but it depends on the execution flow of your program.
-
Hey Danny,
As mentioned, the DataFinder utilises DataPlugins to understand file formats and schemas. Whilst CSV defines the file format, you can still lay out data in different ways internally, so it needs a DataPlugin to understand your layout.
I think the easiest way to create one would be to download a trial of DIAdem. It has a wizard for creating DataPlugins for text and CSV formats, and I believe you should be able to do this with the evaluation version and then continue to use the DataPlugin from LabVIEW (though I've not tested this).
Cheers,
James
-
Instead of using the command line, you can use the "Open URL in Default Browser" VI in LabVIEW.
You'll find this in the "Dialog and User interface"->"Help" palette (or in Quick drop).
My new thing learned for the day - there's always something else I didn't know!
I don't know the exact Windows ins and outs, but you need the cmd /c because System Exec is the equivalent of typing the command into the Run... dialog, NOT the command line. cmd /c is what causes it to execute as if from the command line.
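A hedged illustration of that distinction, using Python's subprocess module as an analogy: running a program directly gives you no shell features, while routing the string through the shell (which is what prefixing "cmd /c" does on Windows) enables built-ins, redirection and operators like &&. The POSIX shell is used below, so this demonstrates the concept rather than the exact Windows behaviour.

```python
import subprocess

# Direct execution: argv goes straight to the program, no shell
# features are available (analogous to System Exec / the Run... dialog
# without "cmd /c" on Windows).
direct = subprocess.run(["echo", "hello"], capture_output=True, text=True)

# Via the shell: redirection and && only work because a command
# interpreter parses the string first - the role "cmd /c" plays.
via_shell = subprocess.run(
    "echo hello > /dev/null && echo done",
    shell=True, capture_output=True, text=True,
)
print(via_shell.stdout.strip())  # done
```

The same command string that works here with shell=True would fail if passed as a single argv entry, because no interpreter would ever see the `>` or `&&`.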
-
The problem is that a single location can only contain one version. If you want to control the software, then you want to decide when the update happens. Even if your software is in separate sub-folders, you still have to repoint the system. If you have a separate copy for each project, you can just overwrite it with a newer version to upgrade it. This might also be a good case for packed project libraries.
-
I believe Mike B in the UK branch has been working with pipes as well; I can check whether any of it is online anyway. I'll point him to this to see if there is anything he can share.
Out of interest, do these give you better bandwidth than using a localhost network adapter then?
-
There is something on NI.com/labs that claims to do the most visited palette; I've not tried it yet, but it might be worth a look.
I don't use Quick Drop, but I've always wanted a palette that shows (say, 10 of) the "most used" VIs, similar to how Chrome shows the most visited web pages.
- 1
-
A quick test shows that stacking the property nodes has the effect of putting all of the calls inside the in-place structure, so all of the calls together are atomic.
Attached is the test code in 2012 if you're curious. Just flip the disable structure to see the different effects.
- 1
-
I don't believe there is one all-encompassing "better" answer to this.
This was a question I asked AQ, and I often think back to his response. If your class API passes a reference, then you are making the decision about the scope of access, i.e. which functions are "atomic". If you want to call several methods as one atomic operation, you can't, as each method gets and releases the DVR. In this scenario, keeping the object by value is better: the developer using the API can decide whether to use it by reference or by value, and can also decide which function calls are atomic and which are not.
That said, there are often cases where classes only make sense as reference-based. For example, I am working on a class for a file API. In this case I use DVRs in the API, as this is how most people would expect a file to work.
Thinking about AQ's response now, I wonder whether the whole property node would be atomic; that would appear to be a sensible implementation, although I have never tested it.
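The trade-off AQ describes can be sketched in Python with a lock standing in for the DVR (all names here are invented for illustration): when each method acquires and releases the lock internally, individual calls are atomic but sequences of calls are not; exposing the lock lets the caller choose the scope of atomicity.

```python
import threading

class Counter:
    """Toy reference-style API: the lock plays the role of the DVR."""

    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def increment(self):
        # Lock taken and released per call - like a method that obtains
        # and releases the DVR internally. Each call is atomic on its
        # own, but two consecutive calls are not atomic as a pair.
        with self._lock:
            self.value += 1

    def lock(self):
        # Handing the lock to the caller lets it group several
        # operations into one atomic section - like keeping the object
        # (or DVR) in the caller's hands and deciding scope there.
        return self._lock

c = Counter()
c.increment()          # atomic on its own
with c.lock():         # caller-chosen atomic section spanning two steps
    c.value = c.value * 2
print(c.value)  # 2
```

The by-value recommendation in the post is the same idea taken further: the API makes no locking decision at all, leaving the consumer free to choose both the sharing mechanism and the atomic boundaries.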
- 1
-
I would take the point that the line is drawn where it is for one reason: so we can express that an AE is good and an FGV is bad. It is drawn to suit us rather than reflecting any actual difference in implementation.
I think it might have been Nancy again who also said she saw regional dialects: developers from one area of the US described them as AEs, other areas as FGVs, and others as LV2 globals. It just depends who teaches you! (Apologies if it was someone else who mentioned this.)
-
very fast - just barely slower than wiring directly to an indicator.
I'm guessing this means no switch to the UI thread, presumably similar performance to a local variable?
does compiling on Linux have a different probability of compiling FPGAs?
in Embedded
I'm not aware of specific examples of code compiling on one but not the other. It depends on what causes the failure; it's possible for the same compiler to compile sometimes and not others, depending on some of the optimisations, which are chosen randomly!
That said, if you have an active service contract, NI now gives free access to their cloud compile service, which I think runs on Linux servers, so it is easy to try that and see if it is any more successful. Most likely, though, you need to look at the reports that come out and identify whether the issue is size or timing (the two main culprits).