
Conjunction & Disjunction Lists

Chapter 9 of Crafting Interpreters covers control flow - if, while, for, and all that. TLA⁺ doesn't really have that, because TLA⁺ specifications aren't imperative programs. However, TLA⁺ does have something similar. When specifying a system's actions, we commonly need to do two things: write preconditions or "guards" controlling whether an action can be taken, and specify the set of possible next-state transitions. For the former we use conjunction (aka logical "and"), and for the latter we use disjunction (aka logical "or"). These operators are so ubiquitous in TLA⁺ that a special syntax was developed to ease their use: vertically-aligned conjunction & disjunction lists, henceforth called "jlists" for brevity. Here is what they look like:

op ==
  /\ A
  /\ B
  /\ \/ C
     \/ D

Here, /\ is conjunction and \/ is disjunction. If a group of conjuncts or disjuncts ("juncts") is vertically aligned (i.e., all of them have the same start column), then those juncts are grouped together. So, we want to parse the above example as (/\ A B (\/ C D)), using our Expr.Variadic type. This is a very nice language feature, but parsing these lists properly is the greatest challenge we've yet faced. It is only a slight exaggeration to say the purpose of this entire tutorial series is to teach you how to parse these jlists. Let's begin!
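
As a reminder, here is roughly what the Expr.Variadic node defined in an earlier chapter looks like - a sketch only, since your generated code may differ in minor details. It holds the bullet token as its operator and the list of juncts as its parameters:

  static class Variadic extends Expr {
    Variadic(Token operator, List<Expr> parameters) {
      this.operator = operator;
      this.parameters = parameters;
    }

    // The generated accept() method for the visitor interface is omitted here.

    final Token operator;
    final List<Expr> parameters;
  }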

A First Attempt

Before we increase the complexity of our parser, let's convince ourselves it is necessary. We will attempt to parse jlists using familiar recursive descent techniques, then see where it goes wrong. At first this seems like it should be possible! Our jlists resemble the set constructor {1, 2, 3} syntax in that they're started with a unique token (\/ or /\), then feature an unbounded number of expressions separated by delimiters - here, \/ or /\. There isn't really a terminating token but that does not seem to be an obstacle. So, try adding this to the primary() method of your Parser class, below the set constructor logic; it uses the same do/while method to parse the juncts as the set constructor parsing logic uses to parse the element expressions:

      return new Expr.Variadic(operator, elements);
    }
 
    if (match(AND, OR)) {
      Token op = previous();
      List<Expr> juncts = new ArrayList<>();
      do {
        juncts.add(expression());
      } while (matchBullet(op.type, op.column));
      return new Expr.Variadic(op, juncts);
    }
 
    throw error(peek(), "Expect expression.");
  }

Remember our special column field we added to the Token class way back in the scanning chapter? Now we put it to use in a new helper function for the Parser class, matchBullet(); think "bullet" as in "bullet points":

  private boolean matchBullet(TokenType op, int column) {
    if (peek().type == op && peek().column == column) {
      advance();
      return true;
    }
 
    return false;
  }

matchBullet() consumes a /\ or \/ token only if it is of the expected type and column. In this way our jlist parsing logic will only add another expression if it finds another vertically-aligned /\ or \/ token on the next line. This all... kind of works? You can parse a surprisingly broad set of jlists with just our simple logic! Try it out; here are some jlists that can now be parsed:

op ==
  /\ 1
  /\ 2
op ==
  \/ 1
  \/ 2
op ==
  /\ 1
  /\ \/ 2
     \/ 3

Make them as nested and wild as you like - our code will handle them! So are we done? Unfortunately not. Here's the proverbial wrench: TLA⁺ also allows \/ and /\ as infix operators. We skipped defining them in the expressions chapter where we learned how to parse operators of different precedences; let's add them to the operators table in our Token class now:

private static final Operator[] operators = new Operator[] {
    new Operator(Fix.PREFIX,  NOT,        true,   4,  4 ),
    new Operator(Fix.PREFIX,  ENABLED,    false,  4,  15),
    new Operator(Fix.PREFIX,  MINUS,      true,   12, 12),
    new Operator(Fix.INFIX,   AND,        true,   3,  3 ),
    new Operator(Fix.INFIX,   OR,         true,   3,  3 ),
    new Operator(Fix.INFIX,   IN,         false,  5,  5 ),

Just like that, our jlist parsing code no longer works. matchBullet() never has a chance to match a /\ or \/ token, because they're eaten up by operatorExpression() first. So, a jlist like:

op ==
  /\ 1
  /\ 2
  /\ 3

is parsed as a jlist with a single conjunct: the expression 1 /\ 2 /\ 3. In the informal notation from earlier, that is (/\ (1 /\ 2 /\ 3)) rather than the desired (/\ 1 2 3). This is awful! How can we fix it?

Beyond Context-Free

The answer to our parsing problem is deceptively simple: before matching a token in check(), we first ensure it starts in a column to the right of the current jlist's column. If the token instead starts at or to the left of the current jlist's column, the match fails and the token is not consumed - even if the token is exactly what the caller of check() expects! If we aren't currently parsing a jlist then check() functions as normal.

For readers who have taken a computer science class in formal languages, alarm bells should be going off - changing the parse behavior depending on the current context is a big change! In theoretical terms, our parser is currently context-free: it doesn't matter what an expression() is being nested inside, it will be parsed the same no matter what. The context - or surrounding code being parsed - does not change how expression() behaves. However, if we want to parse jlists correctly we will need to break this limitation and move to the more powerful context-sensitive class of formal languages.

What does this change mean in practical terms? First, it means we cannot easily write the formal TLA⁺ grammar in Backus-Naur Form (BNF) as we learned to do in Chapter 5: Representing Code. Although grammar notation does exist for context-sensitive languages, it is unintuitive and more of a hindrance to language implementers than a help. Thus formal TLA⁺ grammars exclude jlists and use BNF to define the non-jlist parts of the language, then use plain language to describe how jlists work. Second - returning to our parser implementation here - it means that our Parser class carries some additional mutable state in a class variable, and its methods change their behavior based on that state.

Ultimately the shape of this state is a stack of nested jlists. Each entry in the stack records the column of a jlist's vertical alignment. When we start parsing a new jlist, we push its column to this stack. When we finish parsing a jlist, we pop from the stack. The top of this stack is the "current" jlist context, and it modifies how check() works.

We'll be using Java's ArrayDeque class as a stack. It only needs to hold the jlist column. Define a new class variable near the top of the Parser class:

  private final List<Token> tokens;
  private int current = 0;
  private final boolean replMode;
  private final ArrayDeque<Integer> jlists = new ArrayDeque<>();

Import ArrayDeque at the top of Parser.java:

package tla;
 
import java.util.List;
import java.util.ArrayList;
import java.util.ArrayDeque;
 
import static tla.TokenType.*;
 
class Parser {

Now augment our jlist parsing logic in primary() to push to & pop from the jlists stack when beginning and ending a jlist:

    if (match(AND, OR)) {
      Token op = previous();
      jlists.push(op.column);
      List<Expr> juncts = new ArrayList<>();
      do {
        juncts.add(expression());
      } while (matchBullet(op.type, op.column));
      jlists.pop();
      return new Expr.Variadic(op, juncts);
    }

Then add this critical line to the check() helper, to always return false if we are inside a jlist and the next token is not to the right of the jlist's column:

  private boolean check(TokenType type) {
    if (isAtEnd()) return false;
    if (!jlists.isEmpty() && peek().column <= jlists.peek()) return false;
    return peek().type == type;
  }

This works! We can parse jlists again! It's a bit tricky to figure out how this changes things in practice, so let's take a close look at our infix operator parsing code in operatorExpression():

    Expr expr = operatorExpression(prec + 1);
    while ((op = matchOp(Fix.INFIX, prec)) != null) {
      Token operator = previous();
      Expr right = operatorExpression(op.highPrec + 1);
      expr = new Expr.Binary(expr, operator, right);
      if (!op.assoc) return expr;
    }

Consider what happens when trying to parse this TLA⁺ snippet:

op ==
  /\ 1
  /\ 2

The parser will:

  1. look for an expression following op ==
  2. find /\, push a new jlist to the stack, then look for an expression
  3. find 1 as the value of expr at the top of the infix op parsing loop
  4. call matchOp() with /\ as the next token to be looked at
  5. in matchOp(), ultimately call check() which returns false because of our new logic
  6. never enter the infix op parsing loop and return 1 to the jlist loop as a complete expression
  7. in the jlist loop, call matchBullet() to successfully consume /\

If the line we added in check() were not present, matchOp() would have matched the next /\ as just another infix operator token. But we pre-empted it! So now /\ and \/ tokens will only be parsed as infix operator tokens when they are to the right of the current jlist. Note that our matchBullet() helper was intentionally written not to use check() so that it is actually capable of matching & consuming the /\ or \/ token.
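
For contrast, an infix /\ that starts to the right of the current jlist column is still parsed as an ordinary infix operator. A jlist like this:

op ==
  /\ 1 /\ 2
  /\ 3

parses as a two-conjunct jlist whose first conjunct is the infix expression 1 /\ 2: the second /\ on that line sits to the right of the jlist's column, while the /\ on the last line is vertically aligned with it and so is matched by matchBullet() instead.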

There is one other benefit we've unlocked. It isn't enough to parse valid jlists; we must also reject invalid ones! The TLA⁺ language specification requires that the parser reject attempts to defeat vertical alignment encapsulation by abusing delimiters like ( ) or { }. What this means is that inputs like the following should fail to parse:

op ==
  /\ 1
  /\ (2
)
  /\ 3

Indeed, our parser will detect an error here. The parentheses parsing logic will call consume() attempting to match the closing ), but within consume() the call to check() will always fail since ) is not to the right of the current jlist column. This gives rise to a parse error.
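
The flip side also holds: if the closing parenthesis stays to the right of the jlist's column, check() succeeds and the expression parses as usual, so a variant like this is accepted:

op ==
  /\ 1
  /\ (2
      )
  /\ 3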

Error Recovery

Talk of parsing errors segues nicely into the topic of error recovery. Recall that on error, we call synchronize() in the Parser class and desperately churn through tokens looking for the start of an operator definition. Jlists complicate this a bit! What happens if an error occurs while parsing a jlist and we enter synchronize() carrying around a bunch of state in the jlists stack? Well, nonsensical things happen. To fix this we just wipe out our jlist stack at the top of synchronize():

  private void synchronize() {
    jlists.clear();
    advance();
 
    while (!isAtEnd()) {

Done. You've successfully parsed vertically-aligned conjunction & disjunction lists in TLA⁺! This puts you in rarified air. Only a handful of people in the world possess this knowledge, and now you are among them.

Evaluation

Now that we can parse jlists, let's interpret them. Similar to the logical operators covered in the book, conjunction lists short-circuit. That means conjuncts are evaluated in order and, if a single conjunct is false, evaluation immediately stops and returns false. In an odd contrast, disjunction lists do not short-circuit; this is because they are used to express nondeterminism, as you will learn several chapters from now.

Add conjunction list evaluation logic to visitVariadicExpr() in the Interpreter class, below the set constructor logic:

        return set;
      case AND:
        for (Expr conjunct : expr.parameters) {
          Object result = evaluate(conjunct);
          checkBooleanOperand(expr.operator, result);
          if (!(boolean)result) return false;
        }
        return true;
      default:
        // Unreachable.
        return null;

Then add the disjunction list logic right below that:

        return true;
      case OR:
        boolean result = false;
        for (Expr disjunct : expr.parameters) {
          Object junctResult = evaluate(disjunct);
          checkBooleanOperand(expr.operator, junctResult);
          result |= (Boolean)junctResult;
        }
        return result;
      default:
        // Unreachable.
        return null;
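
One practical consequence of this difference - assuming the TRUE and FALSE literals from earlier chapters - is that a non-boolean junct hidden behind a false conjunct is never evaluated, while every disjunct is always evaluated and type-checked. So this definition:

op ==
  /\ FALSE
  /\ 1

evaluates to false without ever touching the 1, whereas this one:

op ==
  \/ TRUE
  \/ 1

evaluates every disjunct and reports a runtime error, since 1 is not a boolean.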

Remember we also parsed the /\ and \/ infix operators, so we need to evaluate those as well. This is a bit annoying! They should function in exactly the same way as their respective jlists, but now we would have to duplicate our evaluation logic in visitBinaryExpr(). So, we'll perform a trick which also has its parallel in chapter 9 of the book: desugaring! We will flatten infix /\ and \/ operators so they actually form a jlist.

In the Parser class, modify the infix operator parsing loop in operatorExpression() by replacing the expr = line with a call to a new helper, flattenInfix():

    Expr expr = operatorExpression(prec + 1);
    while ((op = matchOp(Fix.INFIX, prec)) != null) {
      Token operator = previous();
      Expr right = operatorExpression(op.highPrec + 1);
      expr = flattenInfix(expr, operator, right);
      if (!op.assoc) return expr;
    }

Define flattenInfix() above matchBullet() in the Parser class:

  private Expr flattenInfix(Expr left, Token op, Expr right) {
    if (op.type == AND) {
 
    } else if (op.type == OR) {
 
    } else {
      return new Expr.Binary(left, op, right);
    }
  }

The helper returns a regular binary expression except when the operator is AND or OR; in those cases, we return two-parameter instances of Expr.Variadic instead:

  private Expr flattenInfix(Expr left, Token op, Expr right) {
    if (op.type == AND) {
      List<Expr> conjuncts = new ArrayList<>();
      conjuncts.add(left);
      conjuncts.add(right);
      return new Expr.Variadic(op, conjuncts);    
    } else if (op.type == OR) {
      List<Expr> disjuncts = new ArrayList<>();
      disjuncts.add(left);
      disjuncts.add(right);
      return new Expr.Variadic(op, disjuncts);
    } else {
      return new Expr.Binary(left, op, right);
    }
  }

Further Desugaring

This is all right, but it could be even better! An expression like 1 /\ 2 /\ 3 /\ 4 will be translated to:

/\ /\ /\ 1
      /\ 2
   /\ 3
/\ 4

Which is quite a lot of nesting! It would be nice if it were instead translated to a single flat jlist with four conjuncts. This rewrite is safe because conjunction & disjunction are associative. So, define a new flattenJLists() helper which accepts a list of juncts, then builds a new jlist by flattening any nested jlists inside the juncts if they're the same type as the containing jlist. Here's what it looks like:

  private Expr flattenJLists(Token op, List<Expr> juncts) {
    List<Expr> flattened = new ArrayList<>();
    for (Expr junct : juncts) {
      Expr.Variadic vjunct;
      if ((vjunct = asVariadicOp(op, junct)) != null) {
        flattened.addAll(vjunct.parameters);
      } else {
        flattened.add(junct);
      }
    }
 
    return new Expr.Variadic(op, flattened);
  }

This assigns vjunct inside the if condition and uses the asVariadicOp() helper, which returns an Expr.Variadic instance only if the given expression is a jlist with the same operator type; jlists of the other type are left alone, since flattening across different operators would change the expression's meaning:

  private Expr.Variadic asVariadicOp(Token op, Expr expr) {
    if (expr instanceof Expr.Variadic) {
      Expr.Variadic vExpr = (Expr.Variadic)expr;
      if (vExpr.operator.type == op.type) return vExpr;
    }
 
    return null;
  }

Replace the calls to new Expr.Variadic() in flattenInfix() with calls to flattenJLists() instead:

  private Expr flattenInfix(Expr left, Token op, Expr right) {
    if (op.type == AND) {
      List<Expr> conjuncts = new ArrayList<>();
      conjuncts.add(left);
      conjuncts.add(right);
      return flattenJLists(op, conjuncts);
    } else if (op.type == OR) {
      List<Expr> disjuncts = new ArrayList<>();
      disjuncts.add(left);
      disjuncts.add(right);
      return flattenJLists(op, disjuncts);
    } else {
      return new Expr.Binary(left, op, right);
    }
  }

Do the same in the jlist parsing loop in primary():

    if (match(AND, OR)) {
      Token op = previous();
      jlists.push(op.column);
      List<Expr> juncts = new ArrayList<>();
      do {
        juncts.add(expression());
      } while (matchBullet(op.type, op.column));
      jlists.pop();
      return flattenJLists(op, juncts);
    }
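
With both call sites updated, the earlier expression 1 /\ 2 /\ 3 /\ 4 now desugars into the single flat jlist we wanted:

/\ 1
/\ 2
/\ 3
/\ 4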

Infix /\ and \/ operators will now be interpreted by our jlist evaluation code, and jlists will be as flat as they possibly can be. So concludes this lynchpin tutorial on conjunction & disjunction lists. Good job making it to the end! If your code got out of sync during this tutorial, you can find its expected state here. Continue on the next page, where we learn to parse operators with parameters!

Challenges

Here are a number of optional challenges, in roughly increasing levels of difficulty.

  1. Python uses indentation to determine statement membership in a code block. Does this make Python context-sensitive?
  2. It's tempting to summarize this chapter as us solving the jlist parsing problem by making jlists have higher precedence than infix operators, but that is not quite the case. Think carefully about what precedence means; is there a difference between what might be called lexical precedence - where one interpretation of a token takes higher precedence than another - and parsing precedence? Did we make use of that here? What are some ways that parsers can deal with the problem of the same token having multiple possible meanings?
  3. Jlists are not the only context-sensitive language construct in TLA⁺. Nested proof steps are another. Take some time to read the TLA⁺ 2 language spec (pdf) and think about what makes proof syntax context-sensitive and how you might parse it if you had to.
  4. Write unit tests for your jlist parsing code. Think of every weird jlist case you can. Look at this standard test file for inspiration.
  5. If you are familiar with the pumping lemma for context-free languages, use it to prove that TLA⁺ with jlists is not context-free.
  6. Most courses in formal languages skip directly from context-free grammars to Turing machines, but this misses a number of automata of intermediate power. See whether it is possible to use context-free grammars with storage to concisely express a formal grammar for TLA⁺ that includes jlists. The viability of this is unknown; share your results with the mailing list if accomplished!

