Götz Becker

Members
  • Posts

    135
  • Joined

  • Last visited

  • Days Won

    2

Posts posted by Götz Becker

  1. QUOTE (ygauthier @ May 7 2008, 03:44 PM)

    LabVIEW 8.5.1 on Windows XP Service Pack 3?

    Hi,

    I installed SP3 this morning. Everything went OK, and LV 8.5.1 is running, but I haven't tested any DAQmx stuff yet. The update took a while (about 30 minutes), which gave me some time for a little code review (nitpicking) with a coworker :laugh:

    Just my 5 cents.

  2. QUOTE (Doon @ Apr 16 2008, 05:58 PM)

    Even more reason for me to push (read: beg) for an upgrade. Thanks.

    After nearly releasing a build today with some parts still configured as simulated via conditional symbols, I too am begging for this to be included soon.

    Reducing the number of things you have to remember when the customer is breathing down your neck asking whether the app is ready yet (answer: not yet, but soon...) would be great!

  3. QUOTE (cmay @ May 6 2008, 12:39 AM)

    Continuing on this topic... is there a property that tells which type of numeric the control is (i.e. whether it's an I32, DBL, etc.)?

    Hi,

    I would try playing a little with GetClusterInfo.vi, GetArrayInfo.vi, and GetTypeInfo.vi, found in <vilib>/Utility/VariantDataType/VariantType.lvlib.

    The VIs in there aren't documented very well, but you should be able to extract the information you need with them. (Beware: traversing a complex structure programmatically can be tricky :wacko: ) A rough sketch of the recursive traversal idea follows below.
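    Since LabVIEW diagrams don't paste well into a text post, here is a hypothetical Python analogue of the recursive descent that this kind of type introspection implies (clusters become dicts, arrays become lists; all names here are illustrative, not part of vi.lib). It is only a sketch of the traversal pattern, not actual VI usage:

        # Classify the current element, then recurse into cluster-like and
        # array-like containers until only scalar leaves remain.
        def describe(value, path="root"):
            if isinstance(value, dict):             # cluster-like: named elements
                print(f"{path}: cluster with {len(value)} elements")
                for name, element in value.items():
                    describe(element, f"{path}.{name}")
            elif isinstance(value, (list, tuple)):  # array-like: indexed elements
                print(f"{path}: array with {len(value)} elements")
                for i, element in enumerate(value):
                    describe(element, f"{path}[{i}]")
            else:                                   # leaf: report the scalar type
                print(f"{path}: {type(value).__name__}")

        describe({"channel": 3, "samples": [1.0, 2.5], "limits": {"low": 0.0, "high": 5.0}})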

  4. QUOTE (TiT @ Apr 1 2008, 12:34 PM)

    I saw that your methods all have control terminals inside the error case structure. This might not change a lot, but if I understand the quote above correctly, it can only be better to have the control terminals outside the case structure.

    Hope this can help

    Hi,

    Thanks for the link. I hadn't thought about controls inside the case structures. Usually I only look for nested indicators and the dataflow of "passed-through" data like references.

    I'll try the hint and hope for the best :rolleyes:

  5. Hi and thanks for your replies,

    I still don't know why my memory consumption grows.

    The code I used is in the attachment: Download File:post-1037-1207038045.zip

    The queue holding the waveforms is limited to 500 elements, and the producer VI, which writes random data into the queue, has a maximum waveform length of 5000 DBL values. So the queue's size in memory should max out at about 20 MB (see the quick calculation at the end of this post).

    Perhaps someone has an idea where all my memory is going.

    Edit:

    Sorry, I didn't make it clear which one is the main VI (Q_Get_Write_Copy.vi).

    post-1037-1207056596.png?width=400
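    A quick sanity check of that 20 MB estimate, assuming 8 bytes per DBL sample and ignoring per-waveform overhead such as t0, dt, and attributes:

        500 elements x 5000 samples x 8 bytes = 20,000,000 bytes ≈ 19 MiB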

  6. Hi,

    I am running small test apps to check whether design ideas for a bigger project run without memory leaks.

    My current app sends waveforms of varying length through a queue to a save routine, which writes them into a TDMS file (a rough sketch of the pattern is at the end of this post).

    I understand that the way I do this could lead to memory fragmentation. The RT System Manager showed this:

    post-1037-1206978355.png?width=400

    After startup

    post-1037-1206978369.png?width=400

    After 6 days running

    post-1037-1206978378.png?width=400

    After 11 days running

    The main question for me is when the runtime will free some memory again.

    Does deallocation only kick in when available memory drops below some threshold? That would require a different test application that produces cyclic memory shortages or something similar.

    Greetings from a cloudy Munich
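    Since the block diagram itself can't be shown as text, here is a hypothetical Python analogue of the producer/consumer pattern described above: a bounded queue, variable-length waveform buffers, and a consumer that appends them to a binary file. The queue size, lengths, and file name are illustrative stand-ins, and the flat binary file stands in for TDMS:

        import queue, random, struct, threading

        q = queue.Queue(maxsize=500)          # bounded, like the 500-element LabVIEW queue

        def producer():
            for _ in range(1_000):
                n = random.randint(1, 5000)   # waveforms of varying length, up to 5000 DBLs
                q.put([random.random() for _ in range(n)])
            q.put(None)                       # sentinel: tell the consumer to stop

        def consumer(path="waveforms.bin"):
            with open(path, "ab") as f:       # stand-in for the TDMS file
                while (wfm := q.get()) is not None:
                    f.write(struct.pack(f"<{len(wfm)}d", *wfm))  # append raw little-endian doubles

        t = threading.Thread(target=producer)
        t.start()
        consumer()
        t.join()

    Each iteration allocates a buffer of a different size, which is the kind of allocation pattern that can fragment the heap over long runs.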

  7. QUOTE (neB @ Mar 12 2008, 11:48 AM)

    Benchmark regularly as you go.

    I ran a small test app for several hours yesterday with some loops and various instances of GetTypeInfo. Nothing very complex, but it showed good results with steady memory statistics. If I find anything in the real app, I'll report it.

  8. Hi and thank you all for your replies.

    The target platform will be PXI RT controllers, and the variants will be used for the RT-host communication and in the internal sequencing engine. We recently had a discussion in which some vague arguments implied possible creeping memory leaks with variants. For the "real" RT tasks we generally try to stay away from memory allocations altogether, of course.

    Your responses and this discussion helped clear up the accusations against our planned architecture :thumbup:

  9. Hi,

    I am currently using variants in an RT app without problems (basically always a typedef cluster with a command enum and a variant for the data; see the sketch at the end of this post).

    For a new 24/7 RT app I am wondering if I could run into problems using variants. Do you generally stay away from Variants in mid-size to large 24/7 RT apps?

    Does anyone have bad experiences with <vilib>/Utility/VariantDataType/GetTypeInfo.vi on RT?
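    As a textual stand-in for the cluster (LabVIEW typedefs don't translate directly into a post), here is a hypothetical Python sketch of that message pattern: an enum for the command and an untyped payload playing the role of the variant. The names and fields are illustrative only:

        from dataclasses import dataclass
        from enum import Enum, auto
        from typing import Any

        class Cmd(Enum):               # stand-in for the typedef command enum
            START = auto()
            STOP = auto()
            CONFIGURE = auto()

        @dataclass
        class Message:                 # stand-in for the typedef cluster
            cmd: Cmd
            data: Any = None           # the "variant" part: payload type depends on cmd

        # The receiver switches on cmd and interprets data accordingly.
        msg = Message(Cmd.CONFIGURE, {"sample_rate": 10_000, "channels": 4})
        if msg.cmd is Cmd.CONFIGURE:
            print("configuring with", msg.data)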

  10. Hi,

    I see similar speedups after removing dynamic dispatch.

    Another strange thing: as soon as I open the project in LV 8.5 on Mac OS X 10.5.2, one CPU core is 100% busy. No VI open, just the project window. No change after recompiling, doing another "Save All", or restarting LV. Just opening the project automatically marks it as edited (reason: "An attribute of the project was changed.").

    Enough playing with LV and hanging out at LAVA for me tonight :ninja:

    I'll try it on WinXP at the office later this week.

  11. QUOTE(paololt @ Nov 17 2007, 06:09 PM)

    OK... I have set up the communication between two PCs; I send a string from one of them and receive the same string on the other.

    Hi,

    Out of curiosity, how do you communicate between your PCs?

    ICMP, TCP, or UDP, and what kind of link layer is involved in your setup? I am just wondering whether you can even see any bit errors with standard hardware. If I recall correctly, there are several CRC/checksum layers involved (TCP for sure, and I believe Ethernet frames as well) when using standard hardware and network stacks. A small sketch of an extra application-level check follows below.
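    As an illustration of what such an end-to-end check could look like on top of the stack (in Python rather than LabVIEW, purely as a hypothetical sketch; the framing format is made up for the example):

        import struct, zlib

        def frame(payload: bytes) -> bytes:
            # Append a CRC32 so the receiver can verify integrity itself,
            # independent of the TCP checksum and the Ethernet frame check sequence.
            return payload + struct.pack("<I", zlib.crc32(payload))

        def check(framed: bytes) -> bool:
            payload, crc = framed[:-4], struct.unpack("<I", framed[-4:])[0]
            return zlib.crc32(payload) == crc

        data = frame(b"hello from PC 1")
        print(check(data))                          # True: no corruption
        print(check(data[:-5] + b"X" + data[-4:]))  # False: a flipped byte is detected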
