
A big dataset from a subVI



What is the best way to get a big dataset from a subVI to the caller VI? Will this dataset be copied if I pass it over the connector pane?

I think a better way might be not to collect this data in the subVI at all and to keep the subVI as simple as possible, but if I do that, my caller VI's block diagram will get much bigger in pixels.

Thanks, Eugen


QUOTE (Eugen Graf @ Jun 9 2008, 01:37 PM)

1. That's what wires are for, so use them! Putting data out on the connector pane does not necessarily make a copy.

2. The first rule of optimization is: Don't do it! (http://www.cs.cmu.edu/%7Ejch/java/rules.html)

Your computer uses the same amount of electricity (in general) whether or not your code is efficient.

From the above link:

QUOTE

Write your program without regard to possible optimizations, concentrating instead on making sure that the code is clean, correct, and understandable. If it's too big or too slow when you've finished, then you can consider optimizing it.


QUOTE (jdunham @ Jun 10 2008, 12:07 AM)

Sorry, I don't agree with this rule.


QUOTE (jdunham @ Jun 9 2008, 06:07 PM)

:blink: Wow - that's one hell of a generalisation. Even the page that it links to is oversimplified. Whilst I agree, at least in part, with the sentiment, I've gotta give my standard response to generalisations like this: "it depends" :P Saying "Don't optimize as you go" is like saying don't write using punctuation. Also "...making sure that the code is clean... and understandable" can be considered forms of optimisation.


QUOTE (jdunham @ Jun 10 2008, 12:07 AM)

OK, I see these rules are for Java, not for LabVIEW. But I would prefer to program modularly in Java too. Writing first and refactoring later can be fine for beginners and small projects, but on a big project you have to create subVIs or subfunctions as you go (at programming time), or you end up with one big function spanning 10-20 pages. And once you have that, it is not easy to carve modules or subVIs out of it.


QUOTE (Eugen Graf @ Jun 9 2008, 03:55 PM)

This absolutely applies to LabVIEW.

The Java page was just the first one that came up on Google. People way smarter than me swear by these rules, in all languages. As my co-worker often says "the compiler is smarter than you". It's just not a good use of your time to try to outwit the development system, especially if it makes your code harder to read or harder to maintain. You should do what you are good at, which is coming up with nifty things to ask the computer to do for you, and you should let the computer do what it's good at, which is getting it done really freaking fast.

I think you will find that programs with clean designs and clean diagrams actually run pretty fast.

QUOTE (crelf @ Jun 9 2008, 03:27 PM)

:blink: Wow - that's one hell of a generalisation. Even the page that it links to is oversimplified. Whilst I agree, at least in part, with the sentiment, I've gotta give my standard response to generalisations like this: "it depends" :P Saying "Don't optimize as you go" is like saying don't write using punctuation. Also "...making sure that the code is clean... and understandable" can be considered forms of optimisation.

I don't think that's a useful definition of optimiZation (c'mon, crelf, you live in North America now :P ). Using punctuation is like having clean, easy-to-read LabVIEW diagrams that humans can understand, which should be the primary design consideration.

Human labor time is much more precious than computer time, so it doesn't make sense to optimize unless there is a problem (i.e. the user interface is sluggish for other humans), and if there's a problem, you can usually fix it by profiling the code and fixing it in just a very few places. If you write code that other humans (maybe yourself two years from now) are going to waste time understanding or debugging because it's so confusing, then where is the optimization in that?

I think there's a lot of value in having clean code and doing sensible things, but Eugen's original question was about whether he should make his code messy in order to fix a 'problem' that was just speculative.

BTW I learned all of this the hard way!

QUOTE (Eugen Graf @ Jun 9 2008, 03:37 PM)

What is the best way to get a big dataset from a subVI to the caller VI? Will this dataset be copied if I pass it over the connector pane?

Pass it through the conpane. LabVIEW will not make a copy just because you pass it to a subVI. As long as you are passing data using wires, you only have to worry about a copy being made if you fork the wire -- and LV will only sometimes make a copy then. If LV can avoid making the copy, it will do so. If you have a very large dataset, pass it into the subVI and then pass it back out again on the other side; don't fork the wire to pass it to two different subVIs in parallel.
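Since LabVIEW diagrams can't be pasted as text here, below is a rough Python/NumPy analogy of the chain-don't-fork advice. The function names and array size are invented for illustration; the point is only that in-place operations reuse one buffer, while two independent consumers of the same data force a second full-size buffer.

import numpy as np

# Small stand-in buffer; imagine a genuinely large dataset here.
big = np.zeros(1_000_000)

def scale(data, factor):
    # Like a subVI with the dataset wired in and back out:
    # the caller's buffer is reused, no copy is made.
    data *= factor          # in-place multiply on the same buffer
    return data

def offset(data, amount):
    data += amount          # in-place add on the same buffer
    return data

# Chaining (in one side, out the other): one buffer throughout.
result = offset(scale(big, 2.0), 1.0)

# "Forking the wire": handing the dataset to two consumers that each
# modify it means one of them needs its own full-size copy.
branch_a = scale(big.copy(), 2.0)   # second buffer allocated here
branch_b = offset(big, 1.0)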

QUOTE (Aristos Queue @ Jun 10 2008, 02:32 AM)

Pass it through the conpane. LabVIEW will not make a copy just because you pass it to a subVI. As long as you are passing data using wires, you only have to worry about a copy being made if you fork the wire -- and LV will only sometimes make a copy then. If LV can avoid making the copy, it will do so. If you have a very large dataset, pass it into the subVI and then pass it back out again on the other side; don't fork the wire to pass it to two different subVIs in parallel.

Thank you!


QUOTE (jdunham @ Jun 9 2008, 03:44 PM)

This absolutely applies to LabVIEW.

I'm with Jason on this, but I think the original statement was maybe a little too forceful :D .

I refactor my code for readability continuously as I write it. And I (try to) always fix all known bugs before writing new code. However, I (almost) never optimize my code for speed or memory use or anything else but readability until I can identify a specific need to do so: It is more important to have readable code than fast code, and you can sacrifice one for the other later.

None of which really addresses Eugen's original question ;). But AQ took care of that, so it's all good.


Hi Eugen,

You are trying hard, so let me help out a little.

In this link you will find my collection of "LabVIEW_Performance" tags. You will find a lot of postings by Greg McKaskle in that list since I am working my way through the forum chronologically.

One of those links (this one) has been called the "Clear as mud" thread, but it illustrates what Aristos Queue was talking about.

Ben


QUOTE (neB @ Jun 10 2008, 03:11 PM)

Hi Eugen,

You are trying hard, so let me help out a little.

In this link you will find my collection of "LabVIEW_Performance" tags. You will find a lot of postings by Greg McKaskle in that list since I am working my way through the forum chronologically.

One of those links (this one) has been called the "Clear as mud" thread, but it illustrates what Aristos Queue was talking about.

Ben

Thank you, Ben, for the links.

So, let me explain my problem.

I have to program an application that postprocesses some data, so it should be something like MS Excel. My program should read raw data from flash, covering 7 slots x 15000 pages x 60 datasets (each dataset contains about 10 doubles in binary), and show this data in a table.

I don't show all the data in one table, only one of the 7 slots, and even that is enough to slow my PC down and eat a lot of RAM. The problem is not just that the data gets duplicated; the bigger problem is the table itself, because I convert the binary data to ASCII!

That was the first part of the program. The second part should read the saved table, convert it back to doubles (for the plots) and do the postprocessing. After postprocessing I have to show the raw data and the postprocessed data in one table. One row contains about 25 values! And this data sits in RAM twice: doubles for the plots and ASCII for the table.

It's not easy to handle such a big dataset, so it's really necessary to reduce the memory usage.
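A quick back-of-the-envelope check of those numbers (Python; 8 bytes per double is standard, but the average ASCII width per value is an assumption):

slots, pages, datasets, doubles = 7, 15000, 60, 10
bytes_per_double = 8

raw = slots * pages * datasets * doubles * bytes_per_double
print(raw / 2**20)              # ~481 MiB of binary doubles, all slots
print(raw / slots / 2**20)      # ~69 MiB for a single slot

# ASCII roughly triples that: assume ~20 characters per formatted
# value (digits, exponent, separators) versus 8 bytes in binary.
ascii_per_value = 20            # assumed average, for illustration only
one_slot_ascii = pages * datasets * doubles * ascii_per_value
print(one_slot_ascii / 2**20)   # ~172 MiB of strings for one slot's table

So even one slot is ~69 MiB in binary but well over twice that once formatted as text, before counting the table control's own copy.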


QUOTE (Eugen Graf @ Jun 10 2008, 09:44 AM)

Thank you, Ben, for the links.

...

The problem is not just that the data gets duplicated; the bigger problem is the table itself

....

It's not easy to handle such a big dataset, so it's really necessary to reduce the memory usage.

Too many issues for me to address them all during a break!

For tables, make sure you are using LV 8.5 or above; older versions had slower tables. Setting "Defer FP updates" before updating tables usually helps.

Action Engines are great for situations where the same large set of data is accessed in different ways by different functions. Just make sure you "work in-place" as much as possible.
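Since an Action Engine can't be shown as text, here is a rough Python analogy of the idea: one shared buffer (playing the role of the uninitialized shift register) plus named actions that work on it in place. The class and method names are invented for illustration.

import numpy as np

class DataStore:
    # The buffer is created once and every action operates on it
    # in place, instead of handing out whole-dataset copies.
    def __init__(self, n_values):
        self._data = np.zeros(n_values)

    def write(self, start, values):
        self._data[start:start + len(values)] = values   # in place

    def scale(self, factor):
        self._data *= factor                             # in place, no copy

    def read(self, start, count):
        # Hand out only the small slice a caller actually needs.
        return self._data[start:start + count].copy()

store = DataStore(15000 * 60 * 10)   # one slot's worth of doubles
store.write(0, np.arange(600.0))     # load one page's worth
store.scale(0.5)                     # whole-buffer action, zero copies
row = store.read(0, 25)              # one table row, not the whole slot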

have fun!

Ben


QUOTE (Justin Goeres @ Jun 10 2008, 05:52 AM)

I did feel bad about jumping all over the optimization point without offering anything constructive. Now that there is more information available, I can see you're doing it exactly right, Eugen. Code it up the easiest way first, and see whether it's fast enough, and only try to mess with it if it has a serious problem.

QUOTE (Eugen Graf @ Jun 10 2008, 06:44 AM)

I have to program an application that postprocesses some data, so it should be something like MS Excel. My program should read raw data from flash, covering 7 slots x 15000 pages x 60 datasets (each dataset contains about 10 doubles in binary), and show this data in a table.

I don't show all the data in one table, only one of the 7 slots, and even that is enough to slow my PC down and eat a lot of RAM. The problem is not just that the data gets duplicated; the bigger problem is the table itself, because I convert the binary data to ASCII!

That was the first part of the program. The second part should read the saved table, convert it back to doubles (for the plots) and do the postprocessing. After postprocessing I have to show the raw data and the postprocessed data in one table. One row contains about 25 values! And this data sits in RAM twice: doubles for the plots and ASCII for the table.

I would consider keeping the storage of the huge datasets separate from the user interface. Instead of keeping them in ASCII, you could keep all the data in native binary, stored in a functional global (uninitialized shift register).

You can't look at all that data at once anyway. You could just pull small segments of the data out of the storage and write them to the table. If someone modifies the table, you catch the event and update the storage in the right place. If the user scrolls the table or changes pages, then you go back to the storage, get the new section of data, and toss it into the table. Even if they scroll around a lot and your code has to do a lot of gymnastics, it may still be quite a bit faster than keeping all the data in the table and using the table as a storage mechanism.
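In text form the windowing idea looks roughly like this (Python; the event hooks and names are invented, since the real thing would be a LabVIEW event structure feeding a table control):

import numpy as np

VISIBLE_ROWS = 40                   # rows the table shows at once
COLS = 25                           # values per row, as described above

# Stand-in for one slot kept in binary; the real slot is ~900,000 rows.
data = np.random.rand(90_000, COLS)

def rows_for_window(first_row):
    # Format only the visible window to ASCII; everything else stays
    # as doubles. Call this from the scroll/page-change event.
    block = data[first_row:first_row + VISIBLE_ROWS]
    return [["%.6g" % v for v in row] for row in block]

def on_cell_edit(row, col, text):
    # User edited a cell: update the binary store, not the ASCII copy.
    data[row, col] = float(text)

table_strings = rows_for_window(1200)   # e.g. after a scroll event

The ASCII in memory is then a few kilobytes (one visible window) instead of hundreds of megabytes (the whole slot).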

Jason


QUOTE (jdunham @ Jun 10 2008, 04:48 PM)

Bugger - I'd thought up some good points to add to this thread while I was on the plane, but now I log on, they've already been said :(

QUOTE (jdunham @ Jun 9 2008, 07:44 PM)

I don't think that's a useful definition of optimiZation (c'mon, crelf, you live in North America now :P ).

Just quietly, between you and me, I could never remember which one to use in Oz either - I always just let the spellchecker figure it out ;)
