
Common Metrics for Benchmarking Human-Machine Teams

EasyChair Preprint 14092

10 pages · Date: July 23, 2024

Abstract

In the evolving landscape of human-machine teams, benchmarking performance is crucial to evaluate and enhance collaborative efficacy. This paper presents a comprehensive review of common metrics used to benchmark human-machine teams, focusing on dimensions such as accuracy, efficiency, adaptability, robustness, and user satisfaction. We analyze traditional metrics like task completion time, error rates, and workload distribution, as well as advanced measures including situation awareness, trust, and cognitive load. By examining these metrics through various case studies and experimental setups, we highlight their strengths, limitations, and applicability across different domains. Our findings underscore the importance of multi-faceted evaluation frameworks that integrate both quantitative and qualitative measures to provide a holistic assessment of human-machine collaboration. This study aims to guide researchers and practitioners in selecting appropriate metrics for their specific contexts, thereby fostering the development of more effective and reliable human-machine teams.
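To make the quantitative metrics named in the abstract concrete, the sketch below shows how task completion time, error rate, workload distribution, and success rate might be computed from logged trial data. This is a minimal illustration only; the `Trial` schema, field names, and functions are hypothetical assumptions and are not taken from the paper's evaluation framework.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Trial:
    """One logged human-machine team trial (hypothetical schema)."""
    start_s: float         # task start time, seconds
    end_s: float           # task end time, seconds
    errors: int            # errors observed during the trial
    actions_human: int     # actions taken by the human teammate
    actions_machine: int   # actions taken by the machine teammate
    success: bool          # whether the task goal was met

def completion_time(t: Trial) -> float:
    """Task completion time in seconds."""
    return t.end_s - t.start_s

def error_rate(trials: list[Trial]) -> float:
    """Mean number of errors per trial."""
    return mean(t.errors for t in trials)

def human_workload_share(t: Trial) -> float:
    """Fraction of total actions performed by the human (0..1)."""
    total = t.actions_human + t.actions_machine
    return t.actions_human / total if total else 0.0

def summarize(trials: list[Trial]) -> dict:
    """Aggregate the quantitative team-performance metrics."""
    return {
        "mean_completion_time_s": mean(completion_time(t) for t in trials),
        "error_rate": error_rate(trials),
        "mean_human_workload_share": mean(human_workload_share(t) for t in trials),
        "success_rate": mean(1.0 if t.success else 0.0 for t in trials),
    }

if __name__ == "__main__":
    session = [
        Trial(0.0, 42.5, errors=1, actions_human=12, actions_machine=30, success=True),
        Trial(0.0, 51.0, errors=0, actions_human=9,  actions_machine=35, success=True),
        Trial(0.0, 67.2, errors=3, actions_human=20, actions_machine=22, success=False),
    ]
    print(summarize(session))
```

Qualitative measures such as situation awareness, trust, and cognitive load would typically be captured through instruments like questionnaires rather than logs, which is why the paper argues for combining both kinds of evidence.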

Keyphrases: Adaptability Metrics, Benchmarking, Human-Machine Teams, Interaction Metrics, Performance Metrics

BibTeX entry
BibTeX does not have a suitable entry type for preprints. The following entry is a workaround that produces the correct reference:
@booklet{EasyChair:14092,
  author       = {John Owen},
  title        = {Common Metrics for Benchmarking Human-Machine Teams},
  howpublished = {EasyChair Preprint 14092},
  year         = {EasyChair, 2024}
}