GTA: A Benchmark for General Tool Agents

1 Shanghai Jiao Tong University, 2 Shanghai AI Laboratory
*Corresponding Authors

Some samples from the GTA benchmark. The user queries are human-designed, step-implicit, and set in real-world scenarios. Multimodal context inputs are provided. Solving these queries is helpful to users but complex for an LLM-based tool agent, which needs to use a combination of executable tools in the perception, operation, logic, and creativity categories.

Abstract

In developing general-purpose agents, significant focus has been placed on integrating large language models (LLMs) with various tools, which poses a challenge to the tool-use capabilities of LLMs. However, there are evident gaps between existing tool-use evaluations and real-world scenarios. Current evaluations often use AI-generated queries, single-step tasks, dummy tools, and text-only inputs, which fail to reveal agents' real-world problem-solving abilities effectively. To address this, we propose GTA, a benchmark for General Tool Agents, featuring three main aspects: (i) Real user queries: human-written queries with simple real-world objectives but implicit tool use, requiring the LLM to reason about the suitable tools and plan the solution steps. (ii) Real deployed tools: an evaluation platform equipped with tools across the perception, operation, logic, and creativity categories to evaluate the agents' actual task execution performance. (iii) Real multimodal inputs: authentic image files, such as spatial scenes, web page screenshots, tables, code snippets, and printed/handwritten materials, used as query contexts to align closely with real-world scenarios. We design 229 real-world tasks with executable tool chains to evaluate mainstream LLMs. Our findings show that real-world user queries are challenging for existing LLMs: GPT-4 completes less than 50% of the tasks, and most LLMs achieve below 25%. This evaluation reveals the bottlenecks in the tool-use capabilities of current LLMs in real-world scenarios and provides direction for the future advancement of general-purpose tool agents.

GTA Design

GTA is a benchmark to evaluate the tool-use capability of LLM-based agents in real-world scenarios. It features three main aspects:

  • Real user queries. The benchmark contains 229 human-written queries with simple real-world objectives but implicit tool use, requiring the LLM to reason about the suitable tools and plan the solution steps.
  • Real deployed tools. GTA provides an evaluation platform equipped with tools across the perception, operation, logic, and creativity categories to evaluate the agents' actual task execution performance.
  • Real multimodal inputs. Each query is accompanied by authentic image files, such as spatial scenes, web page screenshots, tables, code snippets, and printed/handwritten materials, used as query contexts to align closely with real-world scenarios.
The questions in GTA are all created by humans. The table compares GTA queries with AI-generated queries. The steps and tool types for queries in ToolBench and m&m's are explicitly stated, as marked in red and blue. The queries in APIBench are simple, containing only one step. GTA's queries are both step-implicit and tool-implicit, and are grounded in real-world scenarios.

Question Examples

Here are some question examples from GTA. All questions are tool-implicit and step-implicit, and contain multimodal context inputs. They are easy-to-understand questions with clear goals, grounded in real-world scenarios: helpful for humans, yet complex for AI assistants to solve. A JSON-format data example is available on Hugging Face.
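As a rough illustration of the record layout (the field names, tool names, and query below are hypothetical placeholders, not the actual GTA schema; refer to the Hugging Face dataset for the real format), a sample might look like the following Python dict:

```python
# Hypothetical sketch of a single GTA-style record: a human-written query,
# its multimodal context files, and an annotated ground-truth tool chain.
# Field and tool names are illustrative only, not the released schema.
sample = {
    "query": "How much will it cost to buy the two items shown in the picture?",
    "files": ["image/receipt.jpg"],        # multimodal context input
    "gt_tool_chain": [                     # ground-truth solution steps
        {"tool": "OCR", "args": {"image": "image/receipt.jpg"}},
        {"tool": "Calculator", "args": {"expression": "12.5 + 7.3"}},
    ],
    "gt_answer": "19.8",
}

# e.g. replaying or inspecting the annotated steps:
for step in sample["gt_tool_chain"]:
    print(step["tool"], step["args"])
```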

Dataset Construction

The dataset construction pipeline consists of two steps:

  • Query construction. Initial exemplars and instruction documents are designed by experts and given to human annotators. Annotators brainstorm and design more samples based on the exemplars.
  • Tool chain construction. Annotators manually call the deployed tools to check that each query in the query set is executable, and then annotate the ground-truth tool chain for each query (a sketch of this check follows the list).
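A minimal sketch of this executability check, assuming a hypothetical call_tool helper that stands in for the deployed-tool platform's actual client interface:

```python
# Replay an annotated tool chain against the deployed tools to confirm
# that every step executes. `call_tool` is a hypothetical stand-in for
# whatever client the GTA evaluation platform actually exposes.
def call_tool(name: str, args: dict):
    raise NotImplementedError("send `args` to the deployed tool `name` and return its output")

def is_executable(tool_chain: list[dict]) -> bool:
    """Return True if every annotated step runs without raising an error."""
    for step in tool_chain:
        try:
            call_tool(step["tool"], step["args"])
        except Exception:
            return False  # the annotator revises or drops the query
    return True
```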

🏆 GTA Leaderboard




Notes

  1. Models labeled with 🔶 are API-based models, while others are open-source models.
  2. Refer to GitHub for instructions on evaluating models on GTA.

BibTeX

@misc{wang2024gtabenchmarkgeneraltool,
  title={GTA: A Benchmark for General Tool Agents},
  author={Jize Wang and Zerun Ma and Yining Li and Songyang Zhang and Cailian Chen and Kai Chen and Xinyi Le},
  year={2024},
  eprint={2407.08713},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2407.08713},
}