What Daniel is saying is that even if the model mandates one data type per attribute (no mixed data-types), the calling application may wish to populate an attribute's value using data which the CP must convert on the way in (when writing them). Then I read you saying: "and don't forget about that application also wanting the CP to do a reverse conversion on the way out (when reading them)"
>>> "Tom Doman" <tdoman@xxxxxxxxxx> 01/15/08 2:41 PM >>>
Jim, that's the scenario I thought we were talking about. Will you add these methods? Not if we don't need mixed attribute types, right? Is that question resolved?
>>> "Jim Sermersheim" <jimse@xxxxxxxxxx> 01/15/08 2:21 PM >>>
I'm getting lost. Someone please confirm for me that this is the scenario where an application is using two context providers, is making its own assumptions about the data types for an attribute that happens to be found in both contexts, and furthermore, is using the less common ITypedValue.getData() rather than ITypedValue.getLexical() to read the data from the attribute values. And it wants to make sure that both CPs will return the same type of object. Phew.
If that's the case, I agree with what I think Tom is saying -- we're in trouble unless we add ITypedValue.getData(URI asDataType). Furthermore, we probably would want to add a similar ITypedValue.getLexical(URI asDataType) (as well as one for getCanonical).
I guess it's good that we're discussing use cases this esoteric, but I wonder if these stars are going to line up in reality. Shall I add an enhancement request to add these methods?
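To make the proposal above concrete, here is a rough sketch of what the URI-parameterized overload might look like. ITypedValue and getData() come from the thread, but the `ITypedValueSketch` interface, the `IntegerValue` class, and the conversion logic below are assumptions for illustration only, not existing IdAS API:

```java
import java.net.URI;

// Hypothetical sketch of the overload Jim proposes: callers may pass the
// data type they want back, and the context provider converts on the way out.
interface ITypedValueSketch {
    Object getData();                  // existing form: returns the provider's native object
    Object getData(URI asDataType);    // proposed form: convert to the caller's requested type
}

// Toy value whose native type is xsd:integer; a caller that only works with
// strings can ask for it as xsd:string without knowing the underlying type.
class IntegerValue implements ITypedValueSketch {
    static final URI XSD_INTEGER = URI.create("http://www.w3.org/2001/XMLSchema#integer");
    static final URI XSD_STRING  = URI.create("http://www.w3.org/2001/XMLSchema#string");

    private final long value;

    IntegerValue(long value) { this.value = value; }

    public Object getData() { return value; }

    public Object getData(URI asDataType) {
        if (XSD_INTEGER.equals(asDataType)) return value;                 // no conversion needed
        if (XSD_STRING.equals(asDataType))  return Long.toString(value);  // integer -> string
        throw new IllegalArgumentException("unsupported conversion: " + asDataType);
    }
}
```

Under this sketch, `new IntegerValue(42).getData(IntegerValue.XSD_STRING)` would return the String "42", while the no-argument form returns the native Long; getLexical(URI) and getCanonical(URI) would follow the same shape.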
>>> "Tom Doman" <tdoman@xxxxxxxxxx> 01/15/08 11:01 AM >>>
With respect to data conversion: the application may be freed from knowing anything about the underlying types by virtue of the Context Provider doing conversion on its behalf. However, that is not a freedom the application will have for data returned from a Context Provider. If the application has to interpret the returned data in order to do something meaningful with it, it will either have to know how to do that conversion itself, or we'll have to provide an IdAS API that the Context Provider can implement given a desired destination type, and the application would have to hope that conversion was supported.
>>> "Daniel Sanders" <dsanders@xxxxxxxxxx> 01/14/08 5:51 PM >>>
I don't think the API discussion should center only around whether or not an attribute's values are always the same data type. The fact that a data type parameter is passed into the API does not necessarily mean that we intended to allow an attribute to have multiple values with different data types. Context providers can still enforce the rule that all values for an attribute must be the same data type. I would argue that specifying the data type in the API does no harm, and may have some benefits - as noted below.
To date the argument seems to be that since the schema knows the data type for an attribute, there is no need to specify it in the API. Here are a couple of reasons for continuing to pass data type in the API:
1. Schema-less context provider. In this case, the context provider simply allows an application to build up digital subjects as needed, with no schema checking. A schema-less context provider could easily enforce that all of an attribute's values are the same data type, with the first value added determining the data type for the attribute. The XML file context provider currently works this way.
2. Data conversion. This is a nice convenience for applications. Instead of having to convert from the data type they use to manipulate data to the attribute data type, they can simply pass in the data type they are using and have the context provider do the appropriate conversions. This could be particularly handy where an application is interacting with multiple different context providers, each of which uses a different data type for a particular attribute. The application may not actually know what the underlying data types are for an attribute in a given context provider. Instead, it relies on the underlying context provider to perform whatever data conversion is needed, if any.
One could argue that a context provider could determine the source value's language type (using something like "instanceof" in Java) and, based on that, do an appropriate conversion. But not all languages have a mechanism for determining the "type" (or class) of an object at runtime (C and C++, for example). If we ever do IdAS in one of these languages, we would need to explicitly declare an object's data type in order for the context provider to do proper data conversion. Furthermore, even if an object's type can be determined using an instanceof-like mechanism, it may still be insufficient to know what kind of conversion is needed. For example, in Java "instanceof" may reveal an Object to be a "java.lang.String", but if the internal data type is Base64EncodedString, a context provider may need to know whether the incoming String was Base64EncodedString or HexEncodedString or OctalEncodedString, etc., etc. - information that may not be conveyed by instanceof.
>>> "Jim Sermersheim" <jimse@xxxxxxxxxx> 1/14/2008 2:25 PM >>>
This is the thread for discussing what started with http://dev.eclipse.org/mhonarc/lists/higgins-dev/msg03722.html
So far, we have these inputs:
I understand that each of an attribute's values is always the same data type. I think in general this keeps things simple and is what's likely expected by new users. I also think that allowing mixed types will cause lots of head scratching when values are being compared for equivalence (since two equal values are not allowed on the same attribute).
Drummond assumed they would all be the same type.
Markus reports that HOWL doesn't enforce same-typed values, and that in fact, it can't.
Paul states that in the Higgins Data Model, they are all the same type, and is working to fix the HOWL.
Daniel would like to allow for different types.
Mike pointed at the ITU definition of GeneralName as an example of why we might want to allow different types. He (as well as Daniel) further notes that in solving this kind of example, it's best to retain the original type/value pairing -- otherwise you lose the original type.
The last comment was Jan 8 http://dev.eclipse.org/mhonarc/lists/higgins-dev/msg03736.html
So, the current expectation and understanding is that values are of the same type. There is some belief that allowing different types would be a good thing.
I feel compelled to address bug #190594 in terms of the way the data model is known/understood to behave today, and make adjustments to it if/when we decide to allow values to be varied in their data types.
Does anyone disagree with that? If not, I'll fix the bug as prescribed.
higgins-dev mailing list