Causal Discovery with Language Models as Imperfect Experts
S. Long, A. Piché, V. Zantedeschi, T. Schuster, and A. Drouin (2023). arXiv:2307.02390. Comment: Peer reviewed and accepted for presentation at the Structured Probabilistic Inference & Generative Modeling (SPIGM) workshop at ICML 2023, Hawaii, USA.
Abstract
Understanding the causal relationships that underlie a system is a
fundamental prerequisite to accurate decision-making. In this work, we explore
how expert knowledge can be used to improve the data-driven identification of
causal graphs, beyond Markov equivalence classes. In doing so, we consider a
setting where we can query an expert about the orientation of causal
relationships between variables, but where the expert may provide erroneous
information. We propose strategies for amending such expert knowledge based on
consistency properties, e.g., acyclicity and conditional independencies in the
equivalence class. We then report a case study, on real data, where a large
language model is used as an imperfect expert.
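The amendment strategy the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes expert answers arrive as proposed directed edges, and greedily rejects any orientation that would introduce a directed cycle (the acyclicity consistency property mentioned above).

```python
# Minimal sketch (not the paper's method): accept an imperfect expert's
# proposed edge orientations only when they keep the graph acyclic.

def creates_cycle(edges, new_edge):
    """Return True if adding directed edge new_edge = (u, v) to `edges`
    would create a directed cycle."""
    u, v = new_edge
    # A cycle appears iff u is already reachable from v.
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
    stack, seen = [v], set()
    while stack:
        node = stack.pop()
        if node == u:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(adj.get(node, []))
    return False

def amend_orientations(proposals):
    """Greedily keep expert-proposed orientations that preserve acyclicity."""
    accepted = []
    for edge in proposals:
        if not creates_cycle(accepted, edge):
            accepted.append(edge)
    return accepted
```

For example, `amend_orientations([("A", "B"), ("B", "C"), ("C", "A")])` keeps the first two orientations and discards the third, which would close a cycle. The paper also considers conditional independencies in the equivalence class as a consistency check; this sketch covers only the acyclicity constraint.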
@misc{long2023causal,
abstract = {Understanding the causal relationships that underlie a system is a
fundamental prerequisite to accurate decision-making. In this work, we explore
how expert knowledge can be used to improve the data-driven identification of
causal graphs, beyond Markov equivalence classes. In doing so, we consider a
setting where we can query an expert about the orientation of causal
relationships between variables, but where the expert may provide erroneous
information. We propose strategies for amending such expert knowledge based on
consistency properties, e.g., acyclicity and conditional independencies in the
equivalence class. We then report a case study, on real data, where a large
language model is used as an imperfect expert.},
added-at = {2023-08-02T23:09:24.000+0200},
author = {Long, Stephanie and Piché, Alexandre and Zantedeschi, Valentina and Schuster, Tibor and Drouin, Alexandre},
biburl = {https://www.bibsonomy.org/bibtex/29360a5250a1e43703fcc87f5943f8b73/vincentqb},
description = {Causal Discovery with Language Models as Imperfect Experts},
interhash = {e54977887ed9a7fa6cc85a6883e49578},
intrahash = {9360a5250a1e43703fcc87f5943f8b73},
keywords = {causal gpt},
note = {arXiv:2307.02390. Comment: Peer reviewed and accepted for presentation at the Structured Probabilistic Inference & Generative Modeling (SPIGM) workshop at ICML 2023, Hawaii, USA},
timestamp = {2023-08-02T23:09:24.000+0200},
title = {Causal Discovery with Language Models as Imperfect Experts},
url = {http://arxiv.org/abs/2307.02390},
year = 2023
}