[Image: Overview of the SkillWrapper architecture]

SkillWrapper: Generative Predicate Invention for Skill Abstraction

TL;DR – This paper introduces SkillWrapper, a system that uses foundation models to invent predicates that abstractly represent robot skills.

November 2025 · Ziyi Yang, Benned Hedegaard, Ahmad Jaafar, Yichen Wei, Skye Thompson, Shreyas Sundara Raman, Haotian Fu, Stefanie Tellex, George Konidaris, David Paulius, Naman Shah
[Image: Example of a simple block-stacking task from our LLM-OLP paper]

[ICRA-25] Bootstrapping Object-level Planning with Large Language Models

TL;DR – This paper formalizes the concept of object-level planning and discusses how this level of planning naturally integrates with large language models (LLMs).

May 2025 · David Paulius, Alejandro Agostini, Benedict Quartey, George Konidaris
[Image: Overview of the Lang2LTL-2 system]

[IROS-24] Lang2LTL-2: Grounding Spatiotemporal Navigation Commands Using Large Language and Vision-Language Models

TL;DR – Building on prior work (Lang2LTL, CoRL 2023), this paper introduces a modular system that enables robots to follow natural language commands containing spatiotemporal referring expressions. The system leverages multi-modal foundation models and linear temporal logic.

September 2024 · Jason Xinyu Liu, Ankit Shah, George Konidaris, Stefanie Tellex, David Paulius
[Image: Overview of CAPE's reprompting methodology]

[ICRA-24] CAPE: Corrective Actions from Precondition Errors using Large Language Models

TL;DR – In this paper, we introduce CAPE, an approach for correcting errors encountered during robot plan execution. CAPE exploits the ability of large language models to generate high-level plans and to reason about the causes of errors.

January 2024 · Shreyas Sundara Raman, Vanya Cohen, Ifrah Idrees, Eric Rosen, Ray Mooney, Stefanie Tellex, David Paulius