Enum in an array constant resizes.



I have a pet peeve with LabVIEW 7.1. :angry:

I have a strict-typed enum constant inside of an array constant on the block diagram. Sometimes I resize the width of the enum to see the full text. When I update the strict typedef of the enum with new values, the width of the enum reverts back to some standard width. I cannot figure out what LV uses as the default width or where it comes from. It doesn't appear to use the smallest text width or the largest. I wish LV kept the width as I had originally set it. If no one has any solutions, I'll move this post to the wish list...


There was discussion on this issue a few months back on the Info-LabVIEW list. Here's what NI R&D had to say about it:

Stephen Mercer to PJ, info-labview (Aug 1, 2005):

PJM wrote:

> Khalid & all
>
> > Also, another issue I have noticed is with enum-typedefs in an array constant. It appears that whenever the typedef is updated, the array constant resizes itself! This is in 7.1.1.
>
> I did notice this myself (on arrays of strict typedef enums), and this is really annoying.

Annoying... except for those people who want their arrays to resize.

"Whether we should update the array" is one of those areas where every programmer is going to have his or her preference, but that preference is going to vary, possibly for each individual enum array. We have one developer on the LV R&D staff who is particularly studying typedef updates, trying to pull off the neat trick of "reading the user's mind." But, as you might imagine, the problem is fairly complex. The array has a list of current values. The enum defines a list of all the possible values. Should the minimum size be

a) the text size of the currently visible array element(s)

b) the text size of the longest array element

c) the text size of the longest enum element (even if that element isn't currently in the array)

d) no minimum bound

Let's say we allow (as we do today) arbitrary sizing of the enum within the array. If the array's enum display has been manually resized smaller than the array's longest current element, then it is probable that the user doesn't want it to update if the elements change. But what if they edit the array such that all the elements fit in the current resize area, and then they edit the elements a second time, such that one element doesn't fit? Should the array now grow? Is an edit to the text of the enum (which applies itself to the array) a more important edit, or a less important one, than editing the values of the array? What if this user is one of those programmers who shrunk the array down so the text is only as large as the particular element currently visible in the array? These programmers probably want it to resize when they change the value, not just when they update the typedef.

The problems have more variations and complexities if not only is the enum a strict typedef, but the array of enums is itself a strict typedef as well.
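To make the four candidate policies above concrete, here is a small hypothetical sketch (not LabVIEW code, and nothing to do with LabVIEW's actual internals), using character count as a crude stand-in for rendered text width:

```python
# Hypothetical sketch of the four minimum-width policies, using
# character count as a stand-in for rendered pixel width.
def min_width(policy, array_values, enum_labels, visible_indices):
    if policy == "a":  # currently visible array element(s)
        return max((len(array_values[i]) for i in visible_indices), default=0)
    if policy == "b":  # longest element currently in the array
        return max((len(v) for v in array_values), default=0)
    if policy == "c":  # longest enum item, even if absent from the array
        return max((len(v) for v in enum_labels), default=0)
    if policy == "d":  # no minimum bound at all
        return 0
    raise ValueError(f"unknown policy: {policy}")

# Example: a state-machine enum where "Acquire Data" is defined in the
# typedef but not currently used in the array constant.
enum_labels = ["Init", "Acquire Data", "Shutdown"]
array_values = ["Init", "Shutdown"]
print(min_width("a", array_values, enum_labels, [0]))  # 4  ("Init")
print(min_width("b", array_values, enum_labels, [0]))  # 8  ("Shutdown")
print(min_width("c", array_values, enum_labels, [0]))  # 12 ("Acquire Data")
```

Even in this toy form, the policies give three different answers for the same array, which is exactly why no single choice satisfies everyone.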

After three versions of LV with typedefs on the diagram, we are finally getting enough data feedback to make some better decisions about when/how to update instances. But some very fine lines exist between the desired behavior and the undesired behavior, and trying to store enough data about each instance to make that judgement is non-trivial. At some point in the future, this will probably improve. I'm posting this info as a way of highlighting the complexity of this issue. Even though some of the desired behaviors posted in this thread (and earlier threads on info-LV) would be easy to implement, those changes often conflict -- in surprising ways -- with other desired behaviors.

Pojundery,

Stephen R. Mercer

-= LabVIEW R&D =-


Nice reply from Stephen, but he's not offering any solutions. The implementation seems simple to me, though of course I don't know the inner workings of LV: just keep track of whether the user has manually resized the constant. If they have, then keep it that way; auto-update shouldn't touch the width. That seems to be the most preferred behaviour.
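The proposal amounts to a one-bit memory per constant. A hypothetical sketch of the idea (all names invented; this is not how LabVIEW is implemented):

```python
# Sketch of "remember whether the user manually resized the constant":
# a typedef update only auto-sizes constants the user never touched.
class EnumConstant:
    def __init__(self, width):
        self.width = width
        self.user_resized = False

    def resize(self, width):
        # The user drags the border: remember that they set a width.
        self.width = width
        self.user_resized = True

    def apply_typedef_update(self, default_width):
        # Auto-update only touches the width if the user never set one.
        if not self.user_resized:
            self.width = default_width
```

Under this rule, a freshly dropped constant still tracks typedef edits, but one the user has resized keeps its width forever, which is the behaviour being asked for here.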

I have a strict-typed enum constant inside of an array constant on the block diagram. Sometimes I resize the width of the enum to see the full text. When I update the strict-type of the enum with new values, the width of the enum reverts back to some standard width.

Irritating behaviour indeed. But there is a somewhat laborious workaround: after making changes to the enum typedef, select the enum item with the widest text string in the strict typedef control, then select "apply changes". This will cause the array widths of the various connected diagram constants to scale to a width sufficient for the widest item. That way, no enum item text gets hidden.

Note that this requires all VIs that reference the typedef for an array diagram constant to be loaded in memory when applying the changes. On applying the changes, these VIs will be marked as having been modified. You'll either have to save them individually, or select "save all" in order to persist these diagram modifications.


Not a workaround, but a possible solution for NI (hopefully someone there is reading): handle enum/ring length the same as text controls & constants. That is, add a checkable entry "size to text" to the rmb-menu. With it checked, the enum (and therefore the whole array in which it resides) resizes to its text size when a different enum element is selected; otherwise it doesn't.

I often use this feature when applying text-array-constants:

1. put the array on block diagram

2. check the option

3. type in the longest array element

4. uncheck the option

5. type in the other elements.

...But here, too, there is quite some room for varying opinion: checking the "size to text" option in a text array resizes the array to the size of the element you clicked on, whereas I would prefer a resize to the largest. :blink:

...at least it would be consistent throughout LV.

Didier

Not a workaround, but a possible solution for NI: add a checkable entry "size to text" to the rmb-menu...

This is a good idea, but I will argue that there can be an even simpler one: just add an option for disabling this auto-resizing altogether (whether as a global setting in labview.ini, or per enum typedef through the right-click menu).

I would rather have the capability to set my own size permanently than have LabVIEW do it for me (until LabVIEW can "read my mind" :P ).

PJM

  • 5 months later...

I have a strict typedef cluster that resizes on the BD, and I don't like it either.

Doesn't it make sense that a typedef control should have a block diagram with editing limited to the control's top-level container (object, cluster, or array)?

You wouldn't be able to wire to/from the control, but you could at least configure its appearance when placed as a block diagram constant.

Now, how do I get this to the NI Developers Brainstorming forum for implementation in 8.??? :D

I have a strict typedef cluster that resizes on the BD, and I don't like it

I'm getting a little confused since you seem to be using the FP and BD descriptors interchangeably, but aren't you after a typedef, not a strict typedef?


I guess I'm bugged that there is no BD layout capability for a typedef'd control.

I use a typedef to define the data stored in a functional global. I place a constant of this on the BD of the functional global to initialize an array stored in a shift register. It just happens to be defined as strict.

[attached screenshot]

If I add an element to the typedef (say a boolean), it will, as expected, appear properly on the front panels where it's used. OK, so what's the problem? The constant on the BD of the functional global will resize and rearrange, discarding the cosmetic arrangement I had done in the functional global.

[attached screenshot]

It doesn't matter how I have the typedef set (plain or strict, autosizing, autoarranging, etc.): the BD constant will always reset as shown :angry:

If the constant is placed inside a case and its parts arranged/scaled to fit the window, a change will make the constant exceed the case window size.

Before:

[attached screenshot]

After:

[attached screenshot]

I think in Michael's original post, he was placing the typedef'd enum in an array constant on the BD. If he added an item to the enum, the enum in the array constant (queued state machine) would resize to some arbitrary length, making the code unreadable. It would be nice to have the ability to set the way a typedef'd control appears on a BD. Right now, LabVIEW "does what it wants". I would like the control editor to allow us to view and configure the BD appearance of a typedef'd control. Why? Because I want to... :P

Maybe if I had the time to learn and use GOOP, these sorts of problems would go away... Maybe I should change my signature to "almost as good as some other people" :laugh:

p.s. I looked in the Wish List, and couldn't find it.

I guess I'm bugged that there is no BD layout capability for a typdef'd control.
The solution I use for this is simple - whenever I have a typedef representing some data, I create a VI which has that typedef as an indicator on its FP set to be an output and use it as a stub inside the diagram. From that point on, regardless of the changes you make, the size on the diagram will remain the size of that VI's icon.
The solution I use for this is simple - whenever I have a typedef representing some data, I create a VI which has that typedef as an indicator on its FP set to be an output and use it as a stub inside the diagram. From that point on, regardless of the changes you make, the size on the diagram will remain the size of that VI's icon.

I like this technique, too.

I like this technique, too.

Ditto that.

Another variation is to put a typedef'd cluster in a sub-VI and bundle by name to produce something similar to a "#define" construct.

The sub-VI is used any time I would have used the "variable".

It yields results similar to "pre-compiling" in other languages: change it in one place, and the update is realized in all the places that need to know.
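In a text language, the same trick is just a function that returns the one authoritative copy of a value. A rough hypothetical Python analogue (the names and fields here are invented for illustration):

```python
# Textual analogue of the sub-VI-as-constant trick: a function that
# returns the single authoritative copy of a "typedef'd" value, so
# every caller picks up edits to it automatically.
def default_settings():
    return {
        "mode": "idle",
        "samples": 100,
        "timeout_ms": 500,
    }

# Callers use the function wherever they would have dropped the constant.
config = default_settings()
```

Because callers get a fresh copy on every call, edits to the definition propagate everywhere (the way a typedef update does), and no caller can corrupt a shared copy.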

Do a nice icon and your development team members aren't even tempted to look inside.

[attached screenshot]

Ben

Do a nice icon...

Not that I specifically condone it, but I've seen the same technique where the subVI's icon is a representation of the typedef itself:

Cluster or subVI? :arrow:

[attached screenshot]

...and your development team members aren't even tempted to look inside.

Your development team sure isn't like mine :D

Do a nice icon and your development team members aren't even tempted to look inside.

[attached screenshot]

I guess I'll try this technique. It does provide a bit of hiding that might prevent someone from tinkering with my typedefs.

Unfortunately, most of the icons I've seen are B/W with oversized abbreviated text that means little or nothing; not to mention missing any sort of description :angry: . When I make what I consider nice icons, I get comments that I have too much time or that I'm showing off!

I guess I'll try this technique. It does provide a bit of hiding that might prevent someone from tinkering with my typedefs.

Unfortunately, most of the icons I've seen are B/W with oversized abbreviated text that means little or nothing; not to mention missing any sort of description :angry: . When I make what I consider nice icons, I get comments that I have too much time or that I'm showing off!

I remember waiting for a TA to critique some code I had written.

The only feedback I was offered was "Too many comments!"

Icons in LV, when properly used, are part of the documentation. In an application with hundreds of sub-VIs, I can spot an icon I am after much faster than reading text.

Ben

Icons in LV, when properly used, are part of the documentation.

I agree whole-heartedly. I still don't understand when people just use the VI's name in the icon - to me, that's just plain stoopid - the VI's already got a name! That said, when doing OO, I still like to have the class name at the top of the icon and the method at the bottom, and a pretty picture in the middle. It's also good to differentiate classes by colour:

[attached screenshot]

