Demo for plausible reasoning and argumentation

This is a demo of plausible reasoning based upon how people generally reason in everyday life, in contrast to formal reasoning with deductive logic and statistical approaches such as Bayesian inference. Plausible reasoning has huge potential for human-machine collaboration, enabling computers to analyse, explain, justify and expand upon knowledge, and to offer sound argumentation.

If you have any questions, feel free to email Dave Raggett <dsr@w3.org>

This work has been supported through funding for the TERMINET project from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 957406.

Examples

Note that this is work in progress and subject to change. This demo includes a variety of examples involving reasoning with properties, relationships, implications and analogies. Short term plans include work on integrating qualitative computation of likelihood, richer modelling of value domains for fuzzy sets, and flexible handling of quantifiers, comparisons and qualitative modifiers, along with a suite of related examples. Long term plans include work on causal reasoning, support for combining System 1 and 2 cognition, including metacognition, integration of natural language processing, and performance scaling, e.g. using spreading activation.

The trace below shows the explanation of reasoning proceeding from the facts to the premise. The inference engine itself works backwards from the premise to the facts, and the explanation is subsequently generated from the trace of execution.

Use the drop-down menu below to select which query to reason about. Use the effort checkbox to seek indirect evidence even when direct evidence is found, and the trace checkbox to see reasoning in action in addition to the explanation generated from it.




Plausible Knowledge graph:




Introduction

Plausible reasoning deals with imperfect knowledge where it is often impractical to apply formal logic or statistical approaches due to incomplete, uncertain and inconsistent knowledge, something that is frequently the case in everyday life. Plausible reasoning has been studied since the days of Ancient Greece, e.g. Carneades and his guidelines for argumentation. Today, it is widely used in court cases, where the prosecution and defence make plausible arguments for and against the innocence of the accused. The judge decides what evidence is admissible, and the jury assesses the arguments. Plausible reasoning can be used with causal models to provide explanations or predictions, and is also an important part of natural language understanding, and learning from limited evidence.

Plausible reasoning moves from plausible premises to conclusions that are less plausible, but nonetheless rational, and is based on the way things generally go in familiar situations. Plausibility is reinforced when listeners have examples in their own minds. Examples can be used to confirm or refute reasoning. Questions can be used to probe reasoning at a deeper level, as well as to seek evidence that strengthens or weakens the argument.

Plausible reasoning can also be used to support belief revision where an agent learns something new that extends or revises its existing knowledge. This is a common activity for young children as they refine their model of the world to better match reality. This includes learning taxonomic knowledge across a sequence of episodes by considering similarities and dissimilarities. Causal knowledge can be learned by observing correlations, and testing hypotheses via experiments. Wild guesses early on give way to cautious tweaks as knowledge matures.

The quest for mathematically satisfying theories of reasoning has focused attention on formal semantics and deductive logic. Consider A ⇒ B, which means if A is true then B is true. If A is false then B can be true or false. If B is true, we still can't be sure that A is true, but if B is false then A must be false.

We could make this implication more specific, e.g. if it is raining then it is cloudy. However, rain is more likely if it is cloudy, likewise, if it is not raining, then it might be sunny, so it is less likely that it is cloudy. This moves beyond certain knowledge to what is likely, based upon prior knowledge. This could be formalised in terms of Bayesian inference, but the required statistics are often unavailable. Plausible reasoning instead uses best guesses for a simple approximation, but these may well vary from one person to the next given their different lived experiences. In other words, plausible reasoning is the best you can do with imperfect knowledge.

Plausible reasoning can also be contrasted with qualitative reasoning about state transitions for physical systems, e.g. a boiling kettle on a stove, and fuzzy reasoning where systems are modelled as having a blend of different states at the same time, e.g. a mix of hot and cold.

This demo looks at how plausible reasoning can be applied to the kinds of flowers grown in different countries, based upon the use cases described in Collins and Michalski (1988), and drawing upon knowledge about plants, geography and the climate.

Collins and co-workers developed a core theory of plausible reasoning. They found that:

  1. There are several categories of inference rules that people commonly use to answer questions.
  2. People weigh the evidence that bears on a question, both for and against.
  3. People are more or less certain depending on the certainty of the premises, the certainty of the inferences and whether different inferences lead to the same or opposite conclusions.
  4. Facing a question for which there is an absence of directly applicable knowledge, people search for other knowledge that could help given applicable inferences.

Burstein, Collins and Baker (1991) give further details along with some extensions to the core theory, and note that a form of spreading activation can be used to control both the consideration of relevant inference rules (implications and dependencies) and the selection of useful analogs for purposes of induction and generalisation. They provide a summary of the core theory of Collins and Michalski. This has the following kinds of expressions:

Statements
   birds' mode of locomotion includes flying
Relationships
   bird is a generalisation of robin
   chicken is a specialisation of fowl
   grain is a specialisation of crop
   rice is a specialisation of grain
   duck is similar to goose in respect to birds' habitat
   duck is dissimilar to goose in respect to birds' neck length
   irrigation is similar to rainfall in respect to growing crops
Mutual implications
   warm areas with heavy rainfall are suitable for growing rice
Mutual dependencies
   the average temperature of an area is inversely related to its distance from the equator

Statements are used to express properties, e.g. an animal's modes of locomotion, such as hopping, walking, slithering, flying and swimming. Taxonomic knowledge consists of a collection of statements and relationships that involve named concepts, and can be reduced to a graph model with vertices and labelled directed edges. Subgraphs can be used to model compound relationships that reference other concepts, e.g. duck is dissimilar to goose in respect to birds' neck length, which names neck length as a property of birds, which in turn is a generalisation of both ducks and geese.

Relationships can be used to infer new statements from existing statements. Generalisation moves up the hierarchy, specialisation moves down the hierarchy, whilst similarity and dissimilarity move sideways across the hierarchy. As an example, we can plausibly infer that robins (as a class) can fly given that we already know birds can fly and bird is a generalisation of robin.
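
As a rough sketch (assuming simple JavaScript objects rather than the demo's actual data structures), generalisation-based inference amounts to walking up the kind-of edges until a concept with the requested property is found:

// illustrative sketch only: the taxonomy as kind-of edges plus property tables
const kindOf = { robin: "song-bird", "song-bird": "bird" };
const properties = { bird: { locomotion: ["flying"] } };

// walk up the generalisation hierarchy looking for the requested property
function inferProperty(concept, property) {
  for (let c = concept; c !== undefined; c = kindOf[c]) {
    if (properties[c] && properties[c][property])
      return { value: properties[c][property], inheritedFrom: c };
  }
  return null; // no direct or inherited evidence found
}

console.log(inferProperty("robin", "locomotion"));
// => { value: [ 'flying' ], inheritedFrom: 'bird' }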

Collins and Michalski define just four kinds of relationships, but a richer taxonomy will need many more. As an example, consider geography. Rather than modelling Surrey as a specialisation of England, we might want to express that Surrey is a county and a region within England. Likewise, we could express that England is a country that is part of the United Kingdom, which in turn is a sovereign state that is part of Europe.

If we already know that roses grow in England, and that yellow roses are a common variety of roses, then we can reasonably infer that yellow roses grow in England. This remains valid if we model flower colour as a property of the class rose, rather than defining the class of yellow roses as a specialisation of the class rose. Inferences can thus be applied to properties instead of classes. The use of properties helps to avoid a combinatorial explosion in the number of classes in the taxonomy.

Mutual implications are a form of if-then rules, e.g. if an area is warm and has heavy rainfall then it is suitable for growing rice. We may find a countervailing argument, e.g. growing rice requires fertile soils, and this area has soils with low fertility. As such, rules model plausible conclusions rather than absolute truth. Implications can also be used in reverse. If we know that an area is used for growing rice, we can use the first rule to infer that it is likely that the area is warm and has heavy rainfall. The conditional likelihood may be different when the implication is used to reason forward or backward.

Mutual dependencies describe how one property depends on another. At its simplest, this can be modelled as either having a positive or negative correlation, or declaring that the two properties are independent, so that the value of one has no effect on the other. Like implications, dependencies can be used to reason forward or backward.
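
As a small illustration (the data layout and sign convention are invented for this sketch), forward reasoning along a signed dependency can be reduced to multiplying the direction of change by the dependency's sign:

// illustrative sketch: qualitative propagation along signed dependencies
const dependencies = [
  { from: "latitude", to: "average temperature", sign: -1 } // inversely related
];

// change is +1 if the property rises, -1 if it falls
function propagate(property, change) {
  return dependencies
    .filter(d => d.from === property)
    .map(d => ({ property: d.to, change: change * d.sign }));
}

console.log(propagate("latitude", +1));
// => [ { property: 'average temperature', change: -1 } ] i.e. temperature falls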

An open question is how to model quantification. Traditional logic has two operators: the universal quantifier ∀x (for all x), and the existential quantifier ∃x (there exists at least one x). Human languages are more flexible with quantifiers such as few and many, e.g. few people live to be 100 years old, but many live to be over 20. Quantification may also involve contexts that constrain the domain for quantified variables. A common case is where we want to reason over what's involved in a particular real or imagined situation rather than what is true in general.

Another open question is how to model the certainty of beliefs. One way is to use a numeric range (e.g. 0 to 1), and another is to use a combination of one or more symbols, e.g. "somewhat likely". Human languages, by and large, use symbols, so it makes sense to try to do the same. Difficulties can arise when comparing two certainty values that use different symbols. How do people manage that? This is important in respect to curtailing inference, and weighing up alternative arguments.

Facts and rules may have exceptions, e.g. a given fact such as birds can fly may not apply to a subclass of birds. Likewise, commonsense rules may have many potential exceptions. This is a consequence of imperfect knowledge, and can motivate further analysis/discussion when appropriate, e.g. why some species of birds have lost their ability to fly. Realising you are experiencing cognitive dissonance, i.e. being aware of having inconsistent thoughts, beliefs or attitudes, is a prelude to taking actions to restructure your knowledge to better fit reality, starting by asking questions.

In some situations it is reasonable to assume a premise is false if you do not have any evidence for it to be true. One way to approach this is to annotate a taxonomy to indicate that a given range is a closed set. Another way is to consider the likelihood of evidence being available, so that the absence of evidence can be argued as implying that the premise in question is false. We may discount evidence if the source of that evidence has been discredited, e.g. we believe someone to be a liar.

Collins and Michalski talk about a number of parameters and how these relate to inferences involving taxonomic hierarchies. The parameters are: typicality, similarity, conditional likelihood, frequency, dominance and multiplicity. The following attempts to clarify the descriptions given in the core theory. The parameter value is a symbol, e.g. low or high, but in principle could be given as a number. Parameters may require additional attributes to identify the group the parameter refers to.

These can be stated directly or estimated from analysis of the taxonomy. Collins and Michalski also mention an implicit attribute ("+", "0" or "-") that indicates whether a high value of the parameter it describes increases, has no effect on, or decreases the certainty of the inference.

The above covers inferences based upon traversing taxonomic relationships or using explicit rules for implications and dependencies. Other kinds of inferences can be based on considering analogies by comparing the current case with others that share a common pattern. A separate demo will explore use cases that illustrate how cognitive agents can learn inductively from a sequence of examples.

A working implementation of plausible reasoning requires a concrete means to represent knowledge, and a processing model for operating on it. This demo distinguishes between taxonomic knowledge and inference rules. You can expand and contract the view of the knowledge graph above. The representation as chunks is briefly discussed at the end of this document.

We want to implement cognitive systems that mimic how people weigh the evidence that bears on a question, both for and against. The processing model starts from a query that expresses a question as a premise. It then successively looks for relevant inferences and selects one of these to apply on each iteration. Each inference generates a new statement, records the rule used to produce it, and appends a summary to the trace. An open question is how to keep track of arguments and countervailing arguments, and to separate these from the accepted facts.
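
The following minimal sketch, in JavaScript, suggests one way such a loop could be organised; the inference objects and the naive evidence weighing are invented for illustration and are not the demo's actual code:

// illustrative sketch of the processing model, not the demo's implementation
const inferences = [
  { name: "inheritance transform",
    appliesTo: p => p === "climate of England is temperate",
    apply: () => ({ text: "England is part of Europe, whose climates include temperate",
                    certainty: "medium", supports: true }) }
];

function answer(premise) {
  const trace = [], evidence = [];
  for (const inference of inferences) {
    if (!inference.appliesTo(premise)) continue;  // is this inference relevant?
    const statement = inference.apply(premise);   // generate a new statement
    statement.rule = inference.name;              // record the rule used
    evidence.push(statement);
    trace.push(`${inference.name}: ${statement.text} (${statement.certainty})`);
  }
  const pro = evidence.filter(s => s.supports).length;
  const con = evidence.length - pro;              // weigh for and against
  const verdict = evidence.length === 0 ? "no evidence found"
    : pro === con ? "evidence is evenly mixed, no judgement is possible"
    : pro > con ? "plausibly true" : "plausibly false";
  return { verdict, trace };
}

console.log(answer("climate of England is temperate"));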

In a court setting, the prosecution first sets out their case. This is then followed by the defence which attempts to cast doubt on the prosecution's case by attacking weak links in the evidence and the chain of inferences. A further challenge is how to act in the role of judge and jury in respect to weighing up the evidence and arguments presented by the prosecution and the defence. Human users of cognitive systems can be expected to want summaries rather than having to weigh all of the arguments themselves. If the users are unsure, they can ask clarifying questions.

Plausible Knowledge Notation (PKN)

This section describes a simple notation for expressing plausible knowledge in terms of four kinds of statements that are based upon the core theory and its extensions.

Property Statements

Property statements describe the properties of concepts. Here is an example:

flowers of England includes daffodils, roses (certainty high)

where flowers is the name of a property for the concept named England. The property value follows the operator; in this case the value is daffodils and roses. The includes operator signifies that this is an open set, i.e. that there could be other kinds of flowers. Use excludes when you want to assert that the given referent values are not included. The round brackets are used to declare the values of parameters as a comma separated list of name/value pairs. The above example defines a single parameter certainty with the value high. Values are given from qualitative ranges, e.g. low, medium and high. Parameters are optional and their usage will depend on the kind of statement.

Note that Collins uses the term descriptor for properties, argument for concepts, and referent for property values.
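
As a sketch of how such statements might be parsed (the demo's JavaScript library may well differ), a single regular expression suffices for the simple cases above:

// hypothetical parser for simple property statements, for illustration only
function parseProperty(line) {
  const m = line.match(
    /^(\S+) of (\S+) (includes|excludes|is) ([^(]+?)(?:\s*\(([^)]*)\))?$/);
  if (!m) return null;
  const [, descriptor, argument, operator, values, params] = m;
  return {
    descriptor,                                      // e.g. "flowers"
    argument,                                        // e.g. "England"
    operator,                                        // includes, excludes or is
    referents: values.split(",").map(s => s.trim()), // e.g. ["daffodils", "roses"]
    parameters: params === undefined ? {} : Object.fromEntries(
      params.split(",").map(p => p.trim().split(/\s+/)))
  };
}

console.log(parseProperty(
  "flowers of England includes daffodils, roses (certainty high)"));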

Relation Statements

Relation statements describe relationships between things, and may include a list of property names to indicate their scope. Here are a couple of examples:

robin kind-of song-bird
duck similar-to goose for habitat

This states that robins are a sub-class of song birds, and that ducks are similar to geese in respect to their habitat. In this case there is a single property for the scope, but you can also use a comma separated list when the scope applies to several properties. Like property statements, relation statements may end with an optional parameter list.

Relations may also be used for properties that have referents without a descriptor, e.g.

yacht is large for sailing-boat
dinghy is small for sailing-boat

Implication Statements

Implication statements describe if-then relationships between concepts. Here is an example:

temperature of ?place is warm and
rainfall of ?place is heavy
   implies grain of ?place includes rice

The left side of the implies keyword gives a conjunction of property values as the rule's antecedent. In this example, we have a pair of conditions: the first requires that the temperature of the place has the value warm, and the second requires that the rainfall is heavy. Each condition has the same syntax as for properties. Conditions are separated with and, and variables are prefixed with "?". Both conditions and actions can use the same set of operators as for property statements. The right side of the rule is the consequent, and has the same syntax as the antecedent. An optional parameter list is only permitted at the end of implication statements.
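
A sketch of how an inference engine might match such antecedents against known facts, accumulating bindings for ?variables, follows; the fact set and helper names here are invented for illustration:

// hypothetical sketch of matching rule antecedents against known facts,
// accumulating bindings for ?variables; the fact set is made up
const facts = [
  { descriptor: "temperature", argument: "Thailand", value: "warm" },
  { descriptor: "rainfall",    argument: "Thailand", value: "heavy" }
];

function matchCondition(cond, fact, bindings) {
  if (cond.descriptor !== fact.descriptor || cond.value !== fact.value)
    return null;
  const bound = bindings[cond.argument];     // cond.argument is e.g. "?place"
  if (bound && bound !== fact.argument) return null;
  return { ...bindings, [cond.argument]: fact.argument };
}

function matchAll(conditions, bindings = {}) {
  if (conditions.length === 0) return bindings;  // all conditions satisfied
  const [first, ...rest] = conditions;
  for (const fact of facts) {
    const b = matchCondition(first, fact, bindings);
    if (b) {
      const result = matchAll(rest, b);
      if (result) return result;
    }
  }
  return null;
}

const antecedent = [
  { descriptor: "temperature", argument: "?place", value: "warm" },
  { descriptor: "rainfall",    argument: "?place", value: "heavy" }
];
console.log(matchAll(antecedent)); // => { '?place': 'Thailand' }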

Dependency Statements

Dependency statements describe a coupling between property values, and may have a parameter list at their end. Here is an example:

pressure increases-with depth

which expresses the idea that the pressure increases as the depth increases. The use of a relationship here assumes that the two properties apply to the same argument. If they don't you will need to specify the two arguments, e.g.

pressure of ?air decreases-with altitude of ?aeroplane

Another way to express dependencies is to use the depends-on relationship, e.g.

crop depends-on climate
crop depends-on rainfall

where the statements relate descriptors and may be associated with sets of implication statements that give a more detailed account, e.g.

climate depends-on latitude

latitude of ?place is low implies climate of ?place is hot
latitude of ?place is medium implies climate of ?place is temperate
latitude of ?place is high implies climate of ?place is cold

An open question is how to restrict variables in implications and dependencies. For now, no such restrictions are assumed. In the above examples, ?place is a variable that could be reasonably taken to be restricted to concepts that are instances of places. In principle, this could involve reasoning about class hierarchies. Any such restrictions should be motivated by natural language semantics, e.g. few and many, as well as definite and indefinite articles, e.g. the house vs a house.

Whilst property and relation statements are sufficient to express taxonomies, further work is needed to consider how to express the richer models possible with OWL and SHACL.

Context

Human knowledge is often specific to a particular context, e.g. a past episode, the current situation, and imaginary situations, e.g. when thinking about what might happen, seeking explanations for what has happened, or when telling a story. Another case is when modelling the knowledge of others as part of a theory of mind. Contexts can be identified in PKN using the context parameter.
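
For example, the following statement might be scoped to a story being told; the context name story1 is a hypothetical label rather than one taken from the demo:

flowers of England includes bluebells (context story1)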

Queries

The simplest kind of queries ask for the value of a given property, e.g. what is the climate of England?

climate of England

You can ask if there is evidence for a given value, e.g. is the climate temperate?

climate of England is temperate

A more complicated example introduces variables and conjunctions, e.g. are yellow roses found in England?

flower of England includes ?flower &
    ?flower kind-of rose &
    colour of ?flower includes yellow

Further work is planned on quantifiers and comparisons to support richer kinds of queries.

Collaborative cognitive agents need to be smart in how they respond to queries, e.g. a simple yes/no answer may not be seen as particularly helpful. Long lists are likewise awkward, and it is better to just give a few pertinent examples – but how to select them from the longer list? The cognitive agent should be prepared to respond by asking questions that clarify what’s wanted, and likewise, users may themselves ask questions and raise points that relate to the task at hand. This may lead to revisions to the beliefs represented in the knowledge graph to better model reality.

This points to opportunities around natural language dialogue, and allowing users to ask for advice, explanations etc. Natural language generation should address Grice's maxims of conversation: quantity, quality, relation and manner.

Collins' Core Theory

This section presents examples from the papers by Collins and his colleagues. We start by considering examples from Burstein et al.

Trace of reasoning:

Here is a trace for the query asking if England has a temperate climate:

(? climate :of England := temperate)

using an inheritance transform
   since England = partof(Europe) (dominance low)
   and Europe has climate = temperate (certainty low)
   conclude that climate(England) = temperate
      is true with medium certainty
      
using an implication transform:
   since latitude = second quadrant or third quadrant (dominance low)
      ==> climate = temperate
   and latitude(England) = third quadrant
   conclude that climate(England) = temperate
      is true with medium certainty
      
trying argument based dependency transforms ...
   using a similar transform
   since latitude <==> climate
   and Holland is similar to England with
      respect to latitude (sim 1.0)
   and climate(Holland) = temperate
   conclude that climate(England) = temperate
      is true with medium certainty.
      
trying referent based dependency transforms ...
   insufficient information available.
   
evidence suggests climate(England) = temperate
      is true with high certainty

The agent looks for different lines of reasoning that support the premise that England has a temperate climate.

The agent first uses the relationship that England is a part of Europe to seek evidence for England's climate. Europe has a range of climates, including temperate, so this is weak evidence that England's climate may be temperate.

The agent finds another rule that holds that a region's latitude implies its climate. The knowledge base models latitude using symbols that divide the value space into quadrants. The rule's condition expects latitude to be in either the second or third quadrant. This condition evaluates to true as England's latitude is known to be in the third quadrant. The rule then implies that England's climate is temperate.

The agent then looks at yet another way to infer the climate by comparing England with Holland which has the same latitude. Holland has a temperate climate, so England should be the same. With three different lines of reasoning providing evidence for the same climate, the agent has a high certainty that England's climate is temperate.

Here is a trace for the query asking if coffee is grown in Llanos:

(? crop :of Llanos := coffee)

no direct evidence found

trying negative implication from
   crop - coffee ==> rainfall - high (certainty 0.8)
   since high is not a known value for
      rainfall(Llanos) and its set of values is closed
   conclude that coffee is not a value 
      for crop(Llanos) with medium certainty.
     
trying argument based dependency transforms ...
   Llanos and Sao Paulo match on climate (sim -0.8)
   Llanos and Sao Paulo match on vegetation (sim -0.6)
   using a similar transform:
   since climate and vegetation <==> crop
   and Sao Paulo is similar to Llanos with respect
      to climate and vegetation (sim -0.7)
   and crop(Sao Paulo) - coffee
   conclude that crop(Llanos) - coffee is true
      with medium certainty.
      
evidence is evenly mixed, no judgement is possible

This starts by looking for a statement that shows that Llanos has coffee as one of its crops. That fails, so the agent then looks at indirect sources of information that would help to either rule out coffee or to support the premise that coffee is a crop for Llanos.

The agent uses its knowledge that coffee requires high rainfall and looks at the rainfall for Llanos. It doesn't find high as a value, and given that this is a closed set of values, it concludes that coffee isn't a crop for Llanos with a medium certainty.

The agent then looks for rules that have a bearing on crops, and notes that Llanos and Sao Paulo match on climate and vegetation. An inference rule suggests that if climate and vegetation match, then we can expect the same crops to grow in both regions. Given that Sao Paulo has coffee as a crop, the agent concludes with medium certainty that it should be a crop for Llanos too.

With evidence both for and against the premise in question, the agent notes that the evidence is evenly mixed, so no judgement is possible.

Here is a trace for the query asking if tulips are grown in Venezuela:

(? flower-type :of Venezuela := tulip)

trying argument based dependency transforms ...
   using a dissimilar transform
   since climate <--> flower-type
   and Holland is dissimilar to Venezuela
      with respect to climate (similarity -1.0)
   and flower-type(Holland) - tulip
   conclude that flower-type(Venezuela) - tulip
      is false with low certainty.
      
trying referent based transforms ...
   using a dissimilar transform
   since climate <==> grows-in
   and bougainvillea is dissimilar to tulip
      with respect to climate (similarity -1.0)
   and grows-in(bougainvillea) - Venezuela
   conclude that grows-in(tulip) - Venezuela
      is false with low certainty
      
evidence suggests that tulip is not flower-type(Venezuela)
with medium certainty

Flower-type maps from places to flowers that grow there. The inverse relationship is named grows-in and maps from flowers to the places they grow in. By declaring the name of the inverse relationship, the agent can deduce either relationship from the other. Another example is generalisation as the inverse of specialisation.
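
A small JavaScript sketch of such an inverse lookup follows; the declaration table and fact format are invented for illustration:

// illustrative sketch: answering a query via a declared inverse relationship
const inverses = { "flower-type": "grows-in", "grows-in": "flower-type" };
const facts = [
  { descriptor: "grows-in", argument: "bougainvillea", value: "Venezuela" }
];

function lookup(descriptor, argument) {
  const direct = facts
    .filter(f => f.descriptor === descriptor && f.argument === argument)
    .map(f => f.value);
  // the inverse direction swaps argument and value
  const inverse = facts
    .filter(f => f.descriptor === inverses[descriptor] && f.value === argument)
    .map(f => f.argument);
  return direct.concat(inverse);
}

console.log(lookup("flower-type", "Venezuela")); // => [ 'bougainvillea' ]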

In this example, the agent first finds a place where tulips are grown (Holland) and compares that place to Venezuela. The climates don't match, so it is inappropriate to use the dependency between climate and flower type to conclude that tulips grow in Venezuela.

The agent then seeks evidence that there are flowers grown in Venezuela that are similar to tulips with respect to the factors that affect flower growth (e.g. climate and rainfall). In this case, we see that bougainvillea is dissimilar to tulip with respect to climate, and knowing that bougainvillea grows in Venezuela, we can conclude that tulips won't grow well in Venezuela.

Queries:

(? crop :of Llanos :- coffee)
(? climate :of England :- temperate)
(? flower-type :of Holland :- rose)
(? water-requirement :of rose :- high)

These define queries in terms of a property value for a named concept. It looks as if you could give multiple values when needed, but I am unsure what the syntax would be.

Properties:

climate(Africa)-temperate, freq=3, cert-.9
climate(Africa)-tropical, freq=0.5, cert=high
climate(England)-temperate, cert=medium
flower-type(Holland)={daffodils, roses, ...}
flower-type(Brazil)^={daffodils, roses, ...}

freq is presumably the frequency parameter, and accepts a number, but the precise meaning is unclear, along with its range. cert is presumably the certainty parameter, and accepts a number or a symbol. My guess is that the number is in the range 0 to 1, whilst the symbol is in the set: low, medium and high. It is unclear what the significance is for "-" versus "=".

What is the handling of properties with single values versus those with multiple values? The first two statements above suggest that the climate property for Africa is multi-valued, and that each value has its own parameters. The last two examples show a syntax with multiple values given in a single statement. The very last statement declares that flower-type for Brazil excludes daffodils and roses. The three dots signify an open set, with implications for the precise meaning of "=" and "^=". In this example the statements cover daffodils and roses as types of flowers, but there could be many others.

Mutual implications:

temperature(place) = warm & rainfall(place) = heavy
   <===> grain(place) = rice

Both sides of the implication are constraints on statements. The example includes a conjunction of constraints, and in principle this could be generalised to allow other Boolean operators such as disjunctions and negation. This example implies that warm places with heavy rainfall are suitable for growing rice as a kind of grain. I assume that place is implicitly a variable that is scoped to the implication and is constrained to a concept representing some kind of place. Implications can be annotated with parameters such as conditional likelihood.

Mutual dependencies:

                             -
average temperature(place) <---> latitude(place)

This signifies that the average temperature of a place is inversely related to its latitude. Another rule could use "+" instead of "-" when the two properties are directly related. Dependencies can be annotated with parameters such as conditional likelihood.

Further considerations

This demo introduces a simple notation for statements as a means to avoid the inconsistencies seen across examples in the papers by Collins and his colleagues. A JavaScript library is used to parse the statements and construct corresponding objects for use by the inference engine. Further extensions are anticipated as experience is gained. One opportunity is to consider part-whole relationships in respect to similarity and cardinality, e.g. cars and trucks often have the same number of wheels. Another opportunity is to consider quantification and scoping to real or imagined situations.

An open question is the relationship to the chunks and rules notation, which is loosely inspired by John Anderson's ACT-R. We would like to clarify how plausible reasoning can combine a mix of rapid unconscious processing, and slower deliberate conscious reasoning, corresponding to the distinction between Daniel Kahneman's System 1 and System 2 thinking.

It is also worth considering the relationship to RDF and Property Graphs as common frameworks for knowledge graphs. RDF is based upon labelled directed graph edges, also known as triples. Property statements can be mapped to triples where the subject is the concept, the predicate is the property, and the object is the property value. This gets more complicated when you consider value lists and parameters. Labelled Property Graphs allow you to specify properties for both nodes and links. A node thus corresponds to the set of property statements for the same concept, whilst links correspond to relation statements, which themselves can be cited by property statements. As such we can say that plausible knowledge graphs are a generalisation of both RDF and Labelled Property Graphs.
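
As a rough illustration (the prefix and URIs are invented for the sketch), the property statement flowers of England includes daffodils, roses might naively map to the triples:

:England :flowers :daffodils .
:England :flowers :roses .

Capturing the open-set reading of includes, and parameters such as certainty, would need further machinery, e.g. RDF reification or named graphs.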

A further consideration is the requirement to express rules for updating the knowledge base. It would be interesting to be able to serialise knowledge bases that contain a mix of regular chunks and chunks that model plausible inferences. As such it makes sense to support a mix of both notations, which therefore need to be aligned to avoid any ambiguities.

In the core theory, property statements have the form d(a) = r, where d is the descriptor, a is the argument and r is the referent.

Burstein extends the core theory to allow for comparative operators, such as ">=", in place of "=". This suggests that we could internally model property statements as chunks.

In Collin's theory, relational statements take the form:

a1 REL a2 in CX (A, d)
where REL is GEN, SPEC, SIM or DIS
e.g. duck SIM goose in CX (bird, habitat)

It is unclear why the context needs the argument (bird in the above example) as the descriptor would apply to the arguments in the relation (i.e. duck and goose).

We could internally model relational statements as chunks that name the left and right arguments, and use additional properties for the context and parameters such as conditional likelihood.

A further possibility is to use chunk types in place of GEN and SPEC, e.g.

bird duck {
	habitat wetlands
	neck-length medium
	call quack
}

which indicates that ducks are a specialisation of birds, and also embodies several statements about descriptors for ducks. GEN is the inverse of SPEC, and as such GEN can be considered as redundant.

The above is convenient when defining a taxonomy, as it effectively groups all of the statements for a given argument as well as the SPEC and GEN relations. We would need an extension to the Chunks notation to cover parameters, e.g. to indicate the level of certainty for a given property value. One way to do that would be with round brackets following the property value. Parameters for the GEN/SPEC relations could be given as chunk properties, using "@" as a prefix to keep them distinct from the names of referents.
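
A speculative sketch of what that might look like, with a certainty parameter on a property value in round brackets, and an "@" prefixed typicality parameter for the implied SPEC relation:

bird duck {
	habitat wetlands (certainty high)
	call quack
	@typicality high
}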

You could give a list of values to describe a range when appropriate. An open question is how to distinguish typical values from ranges. One possibility is to annotate each value in a range with its frequency, in other words to indicate which values are commonplace and which are infrequent.

A further challenge is how to relate qualitative and quantitative measures, e.g. for a person's height, you could describe someone as tall, rather than stating they are 1.8m in height. For this we need to record additional information, e.g. by providing a set of exemplars along with their typicality.

You can also model instances of a category, e.g. "Henrietta" is a specific individual of the category of farmyard ducks, and has a particular colour, gender and so forth. We can either express this using the @isa property or using a separate Link chunk with chunk type @isa. The interpretation of chunk type is left to the rules that act over the chunks rather than having a fixed formal semantics. In the following example, @isa takes precedence over the chunk type.

duck Henrietta {
	@isa farmyard-duck
	colour white
	gender female
}