This project presents a well-formed question about whether LLMs (here, Llama) exhibit an effect known as moral wiggle room, i.e., whether they exploit ambiguity to act selfishly by avoiding information. This is relevant to safety and seems feasible to test. However, I note a few limitations of the methods.
- Several ambiguities in the methods should be addressed. How were the prompt stimuli constructed, and what were their general properties? What was manipulated across conditions, how was that manipulation validated, and what was held constant between conditions? How was ethicality scored, and how was that scoring validated? The write-up should also state how many variants of each condition were presented, so that the breadth and generalizability of the findings can be assessed.
- Content domains beyond environmental audits should be tested to determine whether the results are content-specific or more general.
- Statistical analyses should be presented to support the conclusions; a sketch of one suitable test is given after this list.
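
To make the last point concrete, here is a minimal sketch of one analysis that could support the central comparison, assuming a binary per-trial outcome (selfish vs. ethical choice) and two conditions (ambiguous vs. transparent). The condition names and all counts below are hypothetical placeholders, not data from the project.

```python
# Minimal sketch: test whether the rate of selfish choices differs between
# an ambiguous and a transparent condition. All counts are hypothetical
# placeholders, not data from the project under review.
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table:
# rows = condition (ambiguous, transparent), cols = (selfish, ethical)
table = [
    [42, 58],   # ambiguous condition: placeholder counts
    [17, 83],   # transparent condition: placeholder counts
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```

If multiple prompt variants are presented per condition, a mixed-effects logistic regression with a random intercept per scenario would account for variant-level clustering better than a single pooled test like the one above.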