
Tutorial 3.1: Basic Definition of a Planning Domain and Discrete Search

[1]:
import concepts.dm.pdsketch as pds
[2]:
domain_string = r"""(define (domain blocks-world)
    (:types block)
    (:predicates
        (clear ?x - block)          ;; no block is on x
        (on ?x - block ?y - block)  ;; x is on y
        (robot-holding ?x - block)  ;; the robot is holding x
        (robot-handfree)            ;; the robot is not holding anything
    )
    (:action pick
     :parameters (?x - block)
     :precondition (and (robot-handfree) (clear ?x))
     :effect (and (not (robot-handfree)) (robot-holding ?x) (not (clear ?x)))
    )
    (:action place
     :parameters (?x - block ?y - block)
     :precondition (and (robot-holding ?x) (clear ?y))
     :effect (and (robot-handfree) (not (robot-holding ?x)) (not (clear ?y)) (clear ?x) (on ?x ?y))
    )
)"""
[3]:
# Load the domain from the string and print a summary of its types, predicates, and operators.
domain = pds.load_domain_string(domain_string)
domain.print_summary()
Domain blocks-world
  Types: dict{
    block: block
  }
  Functions: dict{
    clear: clear[observation, state, cacheable](?x: block) -> bool
    on: on[observation, state, cacheable](?x: block, ?y: block) -> bool
    robot-handfree: robot-handfree[observation, state, cacheable]() -> bool
    robot-holding: robot-holding[observation, state, cacheable](?x: block) -> bool
  }
  External Functions: dict{
  }
  Generators: dict{
  }
  Fancy Generators: dict{
  }
  Operators:
    (:action pick
     :parameters (?x: block)
     :precondition (and
       robot-handfree()
       clear(V::?x)
     )
     :effect (and
       assign(robot-handfree(): Const::0)
       assign(robot-holding(V::?x): Const::1)
       assign(clear(V::?x): Const::0)
     )
    )
    (:action place
     :parameters (?x: block ?y: block)
     :precondition (and
       robot-holding(V::?x)
       clear(V::?y)
     )
     :effect (and
       assign(robot-handfree(): Const::1)
       assign(robot-holding(V::?x): Const::0)
       assign(clear(V::?y): Const::0)
       assign(clear(V::?x): Const::1)
       assign(on(V::?x, V::?y): Const::1)
     )
    )
  Axioms:
    <Empty>
  Regression Rules:
    <Empty>
[4]:
# Predicates declared in the domain are registered as functions.
domain.functions['clear']
[4]:
Predicate<clear[observation, state, cacheable](?x: block) -> bool>
[5]:
# Parse a goal expression: block a on b, and block b on c.
goal_expr = domain.parse('(and (on a b) (on b c))')
goal_expr
[5]:
AndExpression<and(on(OBJ::a, OBJ::b), on(OBJ::b, OBJ::c))>
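
domain.parse accepts any well-formed expression over the declared predicates, so other goals can be built the same way. A sketch (not executed in this tutorial):

# Hypothetical alternative goal: c sits on b and a is clear.
another_goal = domain.parse('(and (on c b) (clear a))')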
[6]:
# The executor evaluates expressions and applies operators to states.
executor = pds.PDSketchExecutor(domain)
executor
[6]:
<concepts.dm.pdsketch.executor.PDSketchExecutor at 0x28dfa1640>
[7]:
# Create a state with three blocks (a, b, c) and a context for setting their initial predicates.
state, ctx = executor.new_state({'a': domain.types['block'], 'b': domain.types['block'], 'c': domain.types['block']}, create_context=True)
state
[7]:
State{
  states:
  objects: a - block, b - block, c - block
}
[8]:
# Initial state: the robot hand is free and all three blocks are clear (nothing is stacked).
ctx.define_predicates([
    ctx.robot_handfree(),
    ctx.clear('a'),
    ctx.clear('b'),
    ctx.clear('c')
])
state
[8]:
State{
  states:
    - robot-holding: Value[bool, axes=[?x], tdtype=torch.int64, tdshape=(3,), quantized]{tensor([0, 0, 0])}
    - on: Value[bool, axes=[?x, ?y], tdtype=torch.int64, tdshape=(3, 3), quantized]{
      tensor([[0, 0, 0],
              [0, 0, 0],
              [0, 0, 0]])
    }
    - robot-handfree: Value[bool, axes=[], tdtype=torch.int64, tdshape=(), quantized]{tensor(1)}
    - clear: Value[bool, axes=[?x], tdtype=torch.int64, tdshape=(3,), quantized]{tensor([1, 1, 1])}
  objects: a - block, b - block, c - block
}
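
The same mechanism can encode other initial configurations. The sketch below is not executed here and assumes that the context exposes on in the same way it exposes clear and robot_handfree (with dashes in predicate names mapped to underscores):

# Hypothetical alternative initial state: c already sits on b, so only a and c are clear.
state2, ctx2 = executor.new_state({'a': domain.types['block'], 'b': domain.types['block'], 'c': domain.types['block']}, create_context=True)
ctx2.define_predicates([
    ctx2.robot_handfree(),
    ctx2.clear('a'),
    ctx2.clear('c'),
    ctx2.on('c', 'b'),  # assumed to follow the same naming convention as ctx.clear
])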
[9]:
# Planners for discrete domains: brute-force (breadth-first) search and plan validation.
from concepts.dm.pdsketch.planners.discrete_search import brute_force_search, validate_plan
[10]:
# Run a brute-force search to find a solution
plan = brute_force_search(executor, state, goal_expr, verbose=True)
plan
bfs::actions nr 12
bfs::goal_expr and(on(OBJ::a, OBJ::b), on(OBJ::b, OBJ::c))
bfs::depth=0, states=3: : 12it [00:00, 3586.15it/s]
bfs::depth=1, states=6: : 36it [00:00, 9747.90it/s]
bfs::depth=2, states=12: : 72it [00:00, 16382.22it/s]
bfs::depth=3, states=9: : 144it [00:00, 21188.56it/s]
bfs::depth=4: 60it [00:00, 18749.68it/s]
bfs::search succeeded.
bfs::total_expansions: 28

[10]:
(OperatorApplier<action::pick(?x=b)>,
 OperatorApplier<action::place(?x=b, ?y=c)>,
 OperatorApplier<action::pick(?x=a)>,
 OperatorApplier<action::place(?x=a, ?y=b)>)
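
Since brute_force_search enumerates groundings of the operators level by level, any goal expressible in the domain can be searched with the same call. A sketch (not executed here):

# Hypothetical: search for the reversed tower c-on-b-on-a from the same initial state.
reversed_goal = domain.parse('(and (on c b) (on b a))')
reversed_plan = brute_force_search(executor, state, reversed_goal, verbose=False)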
[11]:
# Use the built-in function validate_plan to simulate the plan.
final_state, succ = validate_plan(executor, state, goal_expr, plan)
print(final_state)
print(succ)
State{
  states:
    - robot-holding: Value[bool, axes=[?x], tdtype=torch.int64, tdshape=(3,), quantized]{tensor([0, 0, 0])}
    - on: Value[bool, axes=[?x, ?y], tdtype=torch.int64, tdshape=(3, 3), quantized]{
      tensor([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])
    }
    - robot-handfree: Value[bool, axes=[], tdtype=torch.int64, tdshape=(), quantized]{tensor(1)}
    - clear: Value[bool, axes=[?x], tdtype=torch.int64, tdshape=(3,), quantized]{tensor([1, 0, 0])}
  objects: a - block, b - block, c - block
}
Value[bool, axes=[], tdtype=torch.int64, tdshape=(), quantized]{tensor(1)}
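
validate_plan simulates the actions in order and then evaluates the goal on the resulting state, so it can also be used to reject plans that stop short of the goal. A sketch (not executed here):

# Hypothetical: only the first two actions are simulated, so the goal should evaluate to false.
partial_state, partial_succ = validate_plan(executor, state, goal_expr, plan[:2])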
[12]:
# Or you can execute the plan step by step and print the intermediate states.
s = state
for action in plan:
    succ, s = executor.apply(action, s)
    assert succ
    print(f'Applying: {action}')
    print(f'New state: {s}')
Applying: action::pick(?x=b)
New state: State{
  states:
    - robot-holding: Value[bool, axes=[?x], tdtype=torch.int64, tdshape=(3,), quantized]{tensor([0, 1, 0])}
    - on: Value[bool, axes=[?x, ?y], tdtype=torch.int64, tdshape=(3, 3), quantized]{
      tensor([[0, 0, 0],
              [0, 0, 0],
              [0, 0, 0]])
    }
    - robot-handfree: Value[bool, axes=[], tdtype=torch.int64, tdshape=(), quantized]{tensor(0)}
    - clear: Value[bool, axes=[?x], tdtype=torch.int64, tdshape=(3,), quantized]{tensor([1, 0, 1])}
  objects: a - block, b - block, c - block
}
Applying: action::place(?x=b, ?y=c)
New state: State{
  states:
    - robot-holding: Value[bool, axes=[?x], tdtype=torch.int64, tdshape=(3,), quantized]{tensor([0, 0, 0])}
    - on: Value[bool, axes=[?x, ?y], tdtype=torch.int64, tdshape=(3, 3), quantized]{
      tensor([[0, 0, 0],
              [0, 0, 1],
              [0, 0, 0]])
    }
    - robot-handfree: Value[bool, axes=[], tdtype=torch.int64, tdshape=(), quantized]{tensor(1)}
    - clear: Value[bool, axes=[?x], tdtype=torch.int64, tdshape=(3,), quantized]{tensor([1, 1, 0])}
  objects: a - block, b - block, c - block
}
Applying: action::pick(?x=a)
New state: State{
  states:
    - robot-holding: Value[bool, axes=[?x], tdtype=torch.int64, tdshape=(3,), quantized]{tensor([1, 0, 0])}
    - on: Value[bool, axes=[?x, ?y], tdtype=torch.int64, tdshape=(3, 3), quantized]{
      tensor([[0, 0, 0],
              [0, 0, 1],
              [0, 0, 0]])
    }
    - robot-handfree: Value[bool, axes=[], tdtype=torch.int64, tdshape=(), quantized]{tensor(0)}
    - clear: Value[bool, axes=[?x], tdtype=torch.int64, tdshape=(3,), quantized]{tensor([0, 1, 0])}
  objects: a - block, b - block, c - block
}
Applying: action::place(?x=a, ?y=b)
New state: State{
  states:
    - robot-holding: Value[bool, axes=[?x], tdtype=torch.int64, tdshape=(3,), quantized]{tensor([0, 0, 0])}
    - on: Value[bool, axes=[?x, ?y], tdtype=torch.int64, tdshape=(3, 3), quantized]{
      tensor([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])
    }
    - robot-handfree: Value[bool, axes=[], tdtype=torch.int64, tdshape=(), quantized]{tensor(1)}
    - clear: Value[bool, axes=[?x], tdtype=torch.int64, tdshape=(3,), quantized]{tensor([1, 0, 0])}
  objects: a - block, b - block, c - block
}
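
The per-step outputs also show why the actions must come in this order: once a is placed on b, b is no longer clear, so pick(?x=b) would not apply. A sketch of checking this with executor.apply (not executed here, and assuming a failed precondition is reported via the returned success flag rather than an exception):

# Hypothetical: pick(?x=b) should be rejected in the final state, since clear(b) is now false.
succ, _ = executor.apply(plan[0], s)  # plan[0] is action::pick(?x=b)
print(succ)                           # expected to be falsy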
