
Object Serialization



I'm unable to write comments on that document <headscratch>, so I'll ask here: Could you post a PDF in addition to (or in lieu of) the docx? (The doc references embedded pictures, but I'm not seeing any - probably a portability prob that hopefully PDF can solve.)

Alternatively, maybe paste the RFC into the community page itself, in case the Community doesn't index docs?

Thanks!


I'd really want variant and waveform attributes to be fully serialisable for it to be useful for my use cases - the thing is that they're still one of the better map/dictionary implementations for storing arbitrary metadata about a dataset. This is my biggest bugbear about the current (Un)Flatten (from)to XML. Oh yes, and in the UK we also use '.' as a decimal point even if we do put the dates as dd/mm/yyyy rather than mm/dd/yyyy :) !


Jack: I replaced the docx with PDF.

I'd really want variant and waveform attributes to be fully serialisable
Generally, that's not going to be feasible, as I said in the document... an open-ended type system doesn't work for a general serialization architecture. I've found zero instances of anyone building one successfully without sacrificing either performance or extensibility. The attributes of waveforms can be handled as flattened binary embedded in XML or JSON, but that's not the same as making them readable in XML or JSON.

Now, having said that, if you're serializing an object that has a variant field that is, for example, one of the key-value databases, you're free to add each attribute as its own value to the property bag, calling the appropriate type function.
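Sketched in a text language (Python here, purely for illustration; the class and method names are invented, not the proposed API), that piecemeal approach might look like:

```python
# Hypothetical sketch: adding variant-held attributes to a property bag
# one at a time, calling the type-appropriate "add" method for each.
class PropertyBag:
    def __init__(self):
        self._props = {}  # name -> (type tag, string value)

    def add_string(self, name, value):
        self._props[name] = ("string", value)

    def add_double(self, name, value):
        self._props[name] = ("double", repr(float(value)))

    def add_int(self, name, value):
        self._props[name] = ("int", str(int(value)))

    def items(self):
        return dict(self._props)

# A class with a variant-like metadata field serializes each attribute
# explicitly, choosing the right typed call per entry:
bag = PropertyBag()
metadata = {"operator": "jsmith", "gain": 2.5, "samples": 1024}
for key, value in metadata.items():
    if isinstance(value, str):
        bag.add_string(key, value)
    elif isinstance(value, float):
        bag.add_double(key, value)
    else:
        bag.add_int(key, value)
```

The point is just that the caller picks the type-appropriate call per attribute, rather than the framework trying to serialize an open-ended variant automatically.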

Oh yes, and in the UK we also use '.' as a decimal point even if we do put the dates as dd/mm/yyyy rather than mm/dd/yyyy :) !
Damn Brits! As my physics teacher used to say, "Behave or I'll make you compute that in BTUs! You think thermodynamics is hard, wait until the British get involved!" :-)

Ironically it's only in the United States that British thermal units are used in practical engineering.

and even when it is metric, it has to be ******* cgs !

Back on topic, I forgot to mention that I also like complex waveforms for representing (2D) vector data, so that'd be a nice thing for a serialiser to grok.

I forgot to mention that I also like complex waveforms for representing (2D) vector data, so that'd be a nice thing for a serialiser to grok.
Since I'm not planning to support complex as a scalar type, complex waveforms would be particularly nasty to support. I think we'd have to admit the scalar complex first.

AQ.

I presume your reluctance to support many of the types in LabVIEW is down to reconciling the speed and compactness of binary with the easy (albeit slower and more bloated) portability of a text-based representation. Perhaps a different way of looking at this is to separate the binary from the text-based serialization. After all, aren't classes just XML files?

All scalars and objects can be represented in XML, JSON and even ini files, since the standards are well defined. The string intermediary is a very good representation, since all types can be represented in string form. An API with only these features would be invaluable to everyone, including us muggles (JKI config file VIs on steroids). We could then add more formats as the product matures.

The flatten already accepts objects but just doesn't quite serialize enough. That could be addressed to provide the binary.

This is actually one feature that would budge me from LV 2009


> I presume your reluctance to support many of the types in LabVIEW

ShaunR: That's part of it. Just as large a concern is the complexity added for developers of Serializers having to work with all the types, and the work Formatters have to do to handle all of the types.

I do keep looking at JSON's 5 data types and thinking, "Maybe that would be enough." But I look at types like timestamp and path, and I know people would rather not have to parse those in every serializer or serializable, and *those* *aren't* *objects*. That historical fact keeps raising its ugly head. They don't have any ability to add their components piecemeal or to define themselves as a single string entity.


Indeed, things would be much easier for this application if scalars and composite types like timestamps and paths were objects, but of course that would open such a huge can of worms for practically every other situation that I shudder to think of how NI could ever even consider moving from that legacy.

Out of curiosity, I'm wondering if there's a creative way of handling arrays with the scripting magic? Maybe upon coming across an array some method is called to record info about the array rank (number of dimensions, size of each), then the generated code would loop over each element using the appropriate scalar methods to handle serialization of each element? This could potentially allow support for arrays of arbitrary rank, but would likely be slow as it would involve serializing each element individually.

Just thinking aloud for now, I don't have time to really think it through thoroughly yet.


> I presume your reluctance to support many of the types in LabVIEW

ShaunR: That's part of it. Just as large a concern is the complexity added for developers of Serializers having to work with all the types, and the work Formatters have to do to handle all of the types.

I do keep looking at JSON's 5 data types and thinking, "Maybe that would be enough." But I look at types like timestamp and path, and I know people would rather not have to parse those in every serializer or serializable, and *those* *aren't* *objects*. That historical fact keeps raising its ugly head. They don't have any ability to add their components piecemeal or to define themselves as a single string entity.

I would actually argue that maybe 1 type is enough and the problem is purely string manipulation. However, that excludes the binary (hence my suggestion).

My JSON VIs, the JKI config file, and the rather splendid library posted in the Setting Control Property By Name thread are all about "untyping" and "re-typing". I have found strings far superior to any other form for this, since all LabVIEW datatypes can be represented this way and, for human-readable formats, have to be converted to them anyway. The introduction of the case statement's support for strings has been a godsend.

I'm not sure what you mean by "They don't have any ability to add their components piecemeal or to define themselves as a single string entity." They are still just collections of characters that mean something to humans. And we are not talking about adding functionality to an existing built-in object, are we?


Indeed, things would be much easier for this application if scalars and composite types like timestamps and paths were objects, but of course that would open such a huge can of worms for practically every other situation that I shudder to think of how NI could ever even consider moving from that legacy.

Out of curiosity, I'm wondering if there's a creative way of handling arrays with the scripting magic? Maybe upon coming across an array some method is called to record info about the array rank (number of dimensions, size of each), then the generated code would loop over each element using the appropriate scalar methods to handle serialization of each element? This could potentially allow support for arrays of arbitrary rank, but would likely be slow as it would involve serializing each element individually.

Just thinking aloud for now, I don't have time to really think it through thoroughly yet.

N-rank arrays are fairly straightforward to encode and decode if using strings as the base type (it's a parsing problem solved with recursion where the minimum element is a 1D array). Size and dimension are really only required for binary, and the flatten already does that. The real difficulty is decoding to LabVIEW's strict typing, since we cannot create a variant (which circumvents the requirement for typed terminals) at run-time. We are therefore forced to create a typed terminal for every type that we want to support and limit array dimensions to those conversions we have implemented.
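That recursion is easy to sketch in a text language (Python, purely illustrative; this is not any proposed API). The base case is a scalar, and the nesting depth itself carries the rank:

```python
# Sketch: recursive string encoding of an N-dimensional array, where the
# base case is a scalar. Nesting depth carries the rank, so no explicit
# size/dimension header is needed for the text form.
def encode(arr):
    if not isinstance(arr, list):
        return str(arr)
    return "[" + ",".join(encode(e) for e in arr) + "]"

def decode(text):
    """Parse the encoding back into nested lists of (string) scalars."""
    pos = 0
    def parse():
        nonlocal pos
        if text[pos] != "[":
            start = pos
            while pos < len(text) and text[pos] not in ",]":
                pos += 1
            return text[start:pos]
        pos += 1  # consume '['
        out = []
        while text[pos] != "]":
            out.append(parse())
            if text[pos] == ",":
                pos += 1
        pos += 1  # consume ']'
        return out
    return parse()
```

Decoding back into LabVIEW's strictly typed arrays is, as noted above, the part this sketch cannot capture: here everything stays a string.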

I think maybe you are looking at it from the wrong end. Timestamps and paths are really, really easy to serialise, and so is the data inside an object's cluster (we can already do all of this). In fact, paths and timestamps are objects, but, apart from their properties and data, not a lot of good to properly serialise since we cannot create them at run-time (I've been dreaming of this for decades :) ).


The last comment in the document talks about right-to-left languages, but I don't think you have to deal with it. As far as I know, the sequence of chars for R2L languages is stored exactly as it would be for any other language, and the display is responsible for correctly displaying it. Numbers are always displayed L2R, even if you use the Arabic number system.

For example, assume that this is a series of characters, where the letters represent Hebrew letters:

ABCD, 1234 - EFGH.

The display code should be responsible for correctly reversing what needs reversing, so the correct result should look like this:

.HGFE - 1234 ,DCBA

Some programs don't know that they need to do R2L, so they would display the string as it's shown in the first line.

LV, by the way, only has partial support for R2L display, so in this example LV would incorrectly display the period to the right of the A instead of to the left of the H. To correctly display such strings in LV I need to place the non-Hebrew chars at the wrong end of the string.

This only refers to a simple series of characters, but since those are the only strings that LV can handle, I don't think you need to do more than that.

I'm not sure what you mean by " They don't have any ability to add their components piecemeal or to define themselves as a single string entity."

I think he was referring to the underlying data structure of these composite types. I have no idea how LabVIEW holds path data, but conceptually a path is really just an array of strings. For example ["c:", "foo", "bar"] only becomes "c:\foo\bar" when it's acted on in a Windows environment. Each OS has its own grammar for how paths are represented, and for what's legal in the context of individual path elements. Ideally it would be nice to simply inherit the serializable behavior of the underlying components of these types for "free".
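A minimal illustration of that point (Python, with hypothetical helper names): the same element list only becomes a concrete path string under a particular OS grammar.

```python
# A path held as its components only becomes a concrete string under a
# particular OS grammar (helper names invented, for illustration only).
def to_windows(elements):
    return "\\".join(elements)

def to_posix(elements):
    return "/" + "/".join(elements)

path = ["c:", "foo", "bar"]
windows_form = to_windows(path)  # backslash-joined, Windows grammar
```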

The real difficulty is decoding to LabVIEW's strict typing since we cannot create a variant (which circumvents the requirement for typed terminals) at run-time.

Exactly. If we want to treat array itself as a unit of serializable content, you'd be forced to create an interface for 1D, 2D, 3D, etc, for each supported data type due to strict typing. I was thinking that arrays could perhaps be a non-issue if the "magic" part of the code first calls a method to record array properties then delegates data serialization to the individual elements.


The biggest problem with using string for common data types is that it leaves the formatting of that data up to each individual Serializable class to define the format. If you have N objects encoded into a file each of a different class and each one has a timestamp field, you can end up with N different formats for the strings. On the other hand, if we give Formatter alone knowledge of the timestamp (and other types of interest), it can have methods to control the formatting and parsing, and then we leave those off of the PropertyBag class. I'll draw it up and see what that looks like.
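In text-language terms, the idea might look like this (a Python sketch with invented names; the real design would be LabVIEW classes): the Formatter alone owns the timestamp rendering, so every Serializable that stores one gets a consistent on-disk representation for free.

```python
# Sketch: one canonical timestamp rendering, owned by the Formatter
# rather than by each individual Serializable class.
from datetime import datetime, timezone

class Formatter:
    def format_timestamp(self, ts):
        # Single source of truth for the on-disk timestamp format.
        return ts.strftime("%Y-%m-%dT%H:%M:%SZ")

    def parse_timestamp(self, text):
        return datetime.strptime(text, "%Y-%m-%dT%H:%M:%SZ").replace(
            tzinfo=timezone.utc)

fmt = Formatter()
ts = datetime(2012, 7, 1, 12, 30, 0, tzinfo=timezone.utc)
encoded = fmt.format_timestamp(ts)
```

A subclassed Formatter could override these two methods to change every timestamp in the file at once, which is exactly what per-class string formatting cannot guarantee.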


The document specifies that the default value for the serializer's "Skip fields with default value?" option is true.

Why is that? If one of the major design goals is to communicate outside of LV, shouldn't the default be to prefer verbosity and safety over performance? This would also mean that users who aren't aware of the exact details are guaranteed to get the data they need.


The biggest problem with using string for common data types is that it leaves the formatting of that data up to each individual Serializable class to define the format. If you have N objects encoded into a file each of a different class and each one has a timestamp field, you can end up with N different formats for the strings. On the other hand, if we give Formatter alone knowledge of the timestamp (and other types of interest), it can have methods to control the formatting and parsing, and then we leave those off of the PropertyBag class. I'll draw it up and see what that looks like.

Hmmm. I'm not sure what you have in mind (need to see the diagram, I guess). The serializable just has a time (in whatever base format you like: integer, double, etc., but something useful, since that will be the default); it's just of "type" string. The "formatter" is still the modifier from this base format. The default serialize is obviously whatever you decide is the base, but that can be overridden by the formatter to produce any format you like.


I finally had a chance (after travels and paper submissions) to read the document. This topic is quite important to us. (You can read my related idea here: http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Support-serialization-of-LabVIEW-objects-to-interchangeable-form/idi-p/1776294).

Background: We are approaching this from the use case in which we serialize/deserialize objects we share with non-LabVIEW and LabVIEW applications. For the former we write custom code for each class (yuck) until we get to something EasyXML can handle; for the latter we use the native LabVIEW VIs to flatten to string or XML without any customization. Both approaches work for their use cases, but the first is quite cumbersome, so we will be happy when there is another option. I think this functionality is essential, not just a nice-to-have (and hence deserves the proper emphasis from NI), and I think it will greatly enhance the marketability of LabVIEW for use in large systems, where LabVIEW applications are almost certain to need to interact with non-LabVIEW applications. (In other words, the potential benefits could be quite substantial.)

Some comments I have on AQ’s draft document (v 0.4):

Nomenclature (trivial): “Deserialize” is a much more commonly used term than “unserialize.” (Check the results from a Google search.) I’m guessing that the particular scheme AQ followed used “unserialize”? Both seem appropriate to me.

Inheritance relationship for Serializable (just pondering): Anytime I have to inherit from a class in a language that doesn’t have multiple inheritance or interfaces I become concerned. On the other hand, since every serializable class must call its parent, this may make sense in this situation. At first glance I think it will work for our use cases (but see below).

Terminology (minor; probably already in the works): I realize “Magic Serialization Scripting Tool” is just a conceptual (and facetious) name, but, NI, please don’t use “scripting tool” in the name of the final product. First, it makes me wonder if it is a tool that will help me write scripts (which it is not). Second, scripting has a connotation for me of something that is at least possibly ad hoc, temporary, and that possibly won’t be supported someday. (Others may think quite differently, of course.) I’m sure AQ’s plan is to make this permanent (right?), and I realize there will still be scripting underneath, regardless of the name, but I want this just to be a “Tool” or “Wizard” or the like.

Default values not described (reiteration of major issue): This is the most important limitation (for external sharing) of the present implementation of LabVIEW’s object serialization schemes.

Nomenclature (important but won’t change implementation): The document refers to a “standard Factory pattern.” This might seem picky, but I’d like to see more careful wording. The Gang of Four Design Patterns book describes Abstract Factory and Factory Method patterns, and these are different from what the document refers to here. Head First Design Patterns includes a “Simple Factory” “programming idiom” that I think is effectively what applies here, although the implementation differs (reading the object from disk is certainly not the only solution, nor even the most obvious). Yes, there are websites that refer to “Factory Pattern” as well but what the authors mean is often ambiguous. Note that I think the proposed solution is fine for the purpose. I am just urging a little more care in terminology.

On types (critical):

  1. How will this handle DBLs with units? (We have done this within the constraints we have—and I am happy to share it with AQ; it is nontrivial.) In an XML representation the units should appear as attributes. I think the schema must handle this correctly.
  2. ShaunR asked about clusters. The document says, “Some, like, numeric clusters, can be done by writing the individual fields (Point.X and Point.Y as two separate properties in the property bag).” Where and how does this happen? I didn’t see this in the document, but perhaps I overlooked it. (I hope this doesn’t require customization.) I think it would be really helpful to create a UML class diagram so that we can see the different classes and their relationships. (Hint, hint!)

Objects as attributes (critical but probably in place): One of my colleagues was curious how this approach handles objects that have other objects as attributes (what the document calls “complete objects”). It seems pretty clear that AQ thinks there is a way to do this, and the document even describes a tree structure, but we didn’t see anything in the document that explains how this works. Again, a UML diagram would help a lot. (I’m wondering how many designers of LabVIEW Object-Oriented solutions would write code of any significant complexity without creating a UML or equivalent model first? I can say our designs are much better when we use UML, even if we iterate between the model and source implementations a couple times. I consider models to be an essential, not an optional, step in the design process.)

Representing object-typed attributes (critical): How will this approach represent attributes that are objects? I guess I’m thinking mostly of the XML representation. We opt (and I think this is the most correct way) to use the attribute name as the XML tag and the type as an attribute. (Note that the type is necessary since it is possible for the attribute itself to be defined as an abstract class, but the deserializer will need to know the instantiated class.) [Related note: Similarly, accessor methods should change to use the attribute names, not the types.]

The four required methods (minor): Presumably any schema would only need either the version with names or the version without names, correct? So it usually wouldn’t be necessary to create all four required methods per class, but only two. On the one hand there is clutter. On the other hand AQ might have to subclass Serializable and that would mean only one option would be available.

EasyXML (major?): The paper describes delegating to EasyXML. I’m not quite sure how that works with object-type attributes, since EasyXML can’t handle objects even as variants. Maybe the framework handles this separately, but somewhere the object needs to end up in the XML representation, and I don’t yet see where this happens.

As far as multi-dimensional arrays go I’d recommend looking at how EasyXML handles this (minor, probably already considered). Maybe this won’t work for this effort because those methods rely on variant representations of the data, if I recall correctly, but there still might be something there that is helpful.

Response to open issues: The only issue that I can comment on right now is the one concerning arrays. Definitely we would need 2-D arrays, quite possibly 3-D. Dimensions beyond that would be nice to have from my perspective.

Overall, this looks promising. We are looking forward to seeing the prototype!

  • 4 weeks later...

(I tried posting over on NI's site, but I was unable to upload images and I got tired of fighting with it.)

I've been studying the document and building mock-ups trying to understand how the pieces interact. I have some concerns over the amount of flexibility it provides and the division of responsibilities, but I'm not sure I'm interpreting the design correctly. I created a class diagram and two sequence diagrams (for "flatten with serializer" and "flatten without serializer") based on what I've been able to extract from the document. Am I on the right track with these?

[attached image: class diagram]

The document describes class relationships with two sets of circular dependencies. Circular dependencies aren't inherently bad, but when they exist I do sit up and take notice simply because they can be so troublesome.

1: Serializer <-> PropBag

2: Serializer -> Serializable -> PropBag -> Serializer

(For simplicity on the diagram I combined the two Property Bag classes into a single class, even though they do not share a common parent. The following diagrams refer to "PropMap" (PropBag with names) and "PropList" (PropBag without names) because it requires less thinking on my part when the name of the abstract data type is part of the class name.)

[attached image: sequence diagram]

This shows my interpretation of the object interactions when a serializer implements Flatten without connecting to PropertyBag.Initialize. This seems to be a fairly straightforward batch-style process. However, how does the serializer get the properties from the PropertyMap so it can apply the metaformatting? The remove methods require a key (i.e. name) to retrieve the value, and the serializer doesn't know them.

Even if the serializer does know all the property names, my gut says property names and values aren't sufficient. An xml serializer might need to include type information or other metadata along with the name and value. I don't see how this sequence supports that... unless the expectation is users will write XmlEnglish(US), XmlEnglish(GB), etc. classes.

[attached image: sequence diagram]

I realized the diagram is wrong while I was typing this up. Specifically, the diagram shows Formatter calling Serializer.<type>ToString. That should be named something like "SerializeProperty" and accept a string. (The document doesn't mention this method by name but alludes to its existence.) The diagram also shows the serialized property being returned back through the call chain to the serializable object. Page 11 is ambiguous about which class actually maintains the serial string while it's being built. I don't think that's an important detail at the moment.

My concern is in the very different ways a specific serializer is implemented. If a serializer enables in-line serialization by not connecting the PropertyBag.Initialize Serializer terminal, then it will need to override SerializeProperty. If the PropertyBag.Initialize Serializer terminal is connected, then SerializeProperty never gets called and devs don't need to override it. I think this is more confusing than it needs to be.

------

I'm thrilled Stephen is spending brain cycles thinking about this problem. My overall impression is that the library is trying to compress too much functionality into too few classes in an attempt to make it "easy" for users, and the classes end up relatively tightly coupled. One clue comes from the description of the Serializer class on page 4:

"Serializer – A class that defines a particular file format and manages transforming Serializable objects to and from that format."

Having "and" in the description is often an indicator the class should be split. Perhaps a SerialFormat class would help? Another indicator is how a serializer's Flatten/Unflatten behavior changes based on the inputs to PropBag.Initialize. Serialization is the kind of thing that could need any number of custom behaviors. Instead of restricting us to the two designed behaviors, why not implement a SerializationStrategy interface that allows users to easily define their own custom behaviors?

[attached image: proposed alternative class diagram]

This is a class diagram I put together to illustrate the kind of flexibility I'd like to see. I haven't put anywhere near enough thought into this to claim it is a good design or meets all the use cases Stephen identified. I can already see errors in it, so don't take it too literally. It's just a way to show how the different responsibilities are divided up among the classes in the library.

I don't think it's that much different from Stephen's design. The main differences are:

- Serializer is purely an api class. All functionality is delegated to implementation classes. Serialization behavior is changed by configuring the implementation classes and injecting them into the Serializer object instead of using option switches.

- The serialization process is implemented by Strategy subclasses, not by Serializer subclasses. The hope is this will decouple the serialized format from the computational process of obtaining the serialized string. They have orthogonal considerations and constraints. Separating them provides more flexibility.

- The intermediate format defined by the PropBag classes is wrapped up in a single "Serialization Intermediate Format," or "SIF." This class can be replaced with child classes if the default SIF doesn't meet a user's needs. (Allowing users to serialize to a custom XML schema seems particularly tricky.)

If you ask me to explain the details of how something works I'll respond by waving my hands and mumbling incoherently. The primary idea is to allow more flexibility in mixing and matching different capabilities to get the exact behavior I need.


(I tried posting over on NI's site, but I was unable to upload images and I got tired of fighting with it.)

<snip>

If you ask me to explain the details of how something works I'll respond by waving my hands and mumbling incoherently. The primary idea is to allow more flexibility in mixing and matching different capabilities to get the exact behavior I need.

Interesting. So your SIF is "untyping" and "re-typing" using strings also.

Not sure what the "Culture" is for, since file formats are locale agnostic. Is this to cater for decimal points and time? I'm also not sure of the need for a "Strategy" interface unless it is just from a purist point of view. After all, if you wire an object to the Serialize class you want it saved right away before you read it again, right? Perhaps you can expand on the use case for this?

I think the only real difference between what "you would like to see" and what I was envisioning is that the SIF Converter would actually be one of the Formats (JSON, probably, if it were up to me), meaning that the "Formatter" converts from JSON to the others (they override the default). However, that is an implementation-specific aspect so as not to re-invent the wheel, and there is no reason why it cannot be a proprietary syntax.

I suppose one other difference is that I would probably not have the "Human Readable" interface, and each file format (binary, JSON, XML et al.) would have a discrete "Formatter" for its implementation. In this way, different file formats have a unified interface (JSON in my example) and the formatter/file saving is a self-contained plug-in that you just pop in the directory/lib.


Interesting. So your SIF is "untyping" and "re-typing" using strings also.

I only came to this thread and read it after having trouble posting on NI's site, so I haven't fully absorbed all the information here. But yes, using strings as an intermediate representation made the most sense to me for the same reasons you mentioned--they can all do it and we have to convert to a string anyway.

Not sure what the "Culture" is for since file formats are locale agnostic. Is this to cater for decimal points and time?

The format itself may be locale agnostic, but the data within the format is not. Culture is to convert certain kinds of data into the expected format. It is only used with human readable formats.

For example, suppose I want to serialize a class containing a date to an .ini file. The .ini file will contain,

    Date=07/01/12

What date is that referring to? July 1, 2012? Jan 7, 2012? Jan 12, 2007? We don't know unless we look at the serializer's documentation to see how it formats dates. The harder question is what date format *should* the serializer use? Answer: Because the format is intended to be read by humans it should use the format the user wants it to use.
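The ambiguity is easy to demonstrate (Python, purely illustrative): the same ini value parses to three different dates depending on the assumed culture.

```python
# One stored string, three plausible readings under different cultures.
from datetime import datetime

raw = "07/01/12"
as_us = datetime.strptime(raw, "%m/%d/%y")       # July 1, 2012
as_uk = datetime.strptime(raw, "%d/%m/%y")       # Jan 7, 2012
as_ymd = datetime.strptime(raw, "%y/%m/%d")      # Jan 12, 2007
```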

I'm also not sure of the need for a "Strategy" interface unless it is just from a purist point of view. After all, if you wire an object to the Serialize class you want it saved right away before you read it again, right? Perhaps you can expand on the use case for this?

(Calling Serialize.Flatten does not "save" the data. It just converts it into a string. What you do with the string is up to you.)

There are several ways one can go about converting a class to a string, each with advantages and disadvantages. AQ identified two of them in the document. What I call "batch" processing converts all the class data into an intermediate format, then converts the intermediate format into the serialized format. For most users this will be sufficient. "Inline" (perhaps "pipelined" would have been a better word) processing converts each data element to the intermediate format then immediately into the serialized format. This will be faster and use less memory when serializing large data sets.

There are other strategies end users could potentially need. Maybe I've got a large array that needs to be serialized and I want to take advantage of multi-core parallelism. Or maybe I've got a *huge* data set and a cluster of computers ready to help me serialize the data. (Ok, that's not a common scenario but roll with me...) The Strategy interface is where I implement the code defining the overall serialization process.

In the existing design the Serializer class implements both the format and the strategy. I'd have to create a new subclass for each format/strategy combination. BinaryBatch, BinaryParallel, XmlBatch, XmlParallel, etc. That's (potentially) n*m subclasses. Separating the Strategy and Format into different classes only requires n+m subclasses. It also makes it easier to reuse and share formats and strategies.
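A toy version of that composition argument (Python, all names invented): the format and the strategy are injected separately, so n formats and m strategies combine freely instead of requiring n*m subclasses.

```python
# Sketch: Format and Strategy are independent objects injected into
# Serializer, so behaviors compose instead of multiplying subclasses.
class Format:
    def render(self, pairs):
        raise NotImplementedError

class IniFormat(Format):
    def render(self, pairs):
        return "\n".join(f"{k}={v}" for k, v in pairs)

class JsonishFormat(Format):
    def render(self, pairs):
        body = ", ".join(f'"{k}": "{v}"' for k, v in pairs)
        return "{" + body + "}"

class BatchStrategy:
    def run(self, obj, fmt):
        # Collect every property first, then hand the whole batch to the format.
        return fmt.render(sorted(obj.items()))

class Serializer:
    def __init__(self, fmt, strategy):
        self.fmt, self.strategy = fmt, strategy

    def flatten(self, obj):
        return self.strategy.run(obj, self.fmt)

flat_ini = Serializer(IniFormat(), BatchStrategy()).flatten({"b": 2, "a": 1})
```

A pipelined or parallel strategy would be another `run` implementation, reusable with every format unchanged.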

I think the only real difference between what "you would like to see" and what I was envisioning is that the SIF Converter would actually be one of the Formats (JSON, probably, if it were up to me), meaning that the "Formatter" converts from JSON to the others (they override the default). However, that is an implementation-specific aspect so as not to re-invent the wheel, and there is no reason why it cannot be a proprietary syntax.

I only know the XML model from a high level and I know less about JSON, so much of this is speculation.

XML and JSON are typically used to describe an entire hierarchical data structure. As AQ mentioned, this presents difficulties if you want to pipeline the serialization or deserialization of a large data set. You need the entire document before you can understand how any single element fits into the structure.

I pulled this JSON example from Wikipedia.

    {
      "firstName": "John",
      "lastName": "Smith",
      "age": 25,
      "address": {
        "streetAddress": "21 2nd Street",
        "city": "New York",
        "state": "NY",
        "postalCode": "10021"
      },
      "phoneNumber": [
        { "type": "home", "number": "212 555-1234" },
        { "type": "fax", "number": "646 555-4567" }
      ]
    }

Suppose a class serialized itself to the above JSON code. Now take an arbitrary data string, "number" : "646 555-4564". As far as the software knows it's just a string like any other string. It doesn't have any meaning.

Scenario 1:

For whatever reason you need to change the way the phone number is represented on disk. Maybe instead of "xxx xxx-xxxx" you need to format it like "xxx.xxx.xxxx." The formatter needs to identify this particular string as a phone number so it can apply the formatting changes. How does it do that?

Scenario 2:

Instead of saving the data in JSON format, you want to save it in an .ini file. You can't write "number=646 555-4564" because each phone number in the list will have the same key. The serializer needs to know the context of the number in order to give it an appropriate key and/or put it in the correct section. Unfortunately the data string doesn't provide any context information. What do we do?

SIF (and AQ's intermediate representation) describe each data element and include contextual information about the element. Instead of just receiving "number : xxx xxx-xxx," SIF could describe the data using a structure something like this:

Name - "JohnSmith.PhoneNumbers[1].number"

Value - "646 555-4564"

Type - "Phone Number"

more...?

In this example the name provides the context describing where the information fits in the class' private data. I think AQ's representation only had Name and Value, but if users can extend the structure and add type (or other) information we'll have an additional level of flexibility that otherwise would not be available.
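In text-language form such a record might look like this (a Python sketch; the field names are guesses, not a spec). The type tag is what lets a formatter recognize and restyle phone numbers in Scenario 1:

```python
# Hypothetical SIF record: a context-bearing name, a string value, and an
# extensible type tag a formatter can dispatch on.
from dataclasses import dataclass

@dataclass
class SifRecord:
    name: str    # where the element lives, e.g. "JohnSmith.PhoneNumbers[1].number"
    value: str   # the serialized value
    type: str    # type tag for formatter dispatch

rec = SifRecord("JohnSmith.PhoneNumbers[1].number", "646 555-4564", "Phone Number")

def reformat_phone(record):
    # A formatter can now identify phone numbers by type tag and restyle them.
    if record.type == "Phone Number":
        return record.value.replace(" ", ".").replace("-", ".")
    return record.value
```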

