Featured on LorenzCircle

This week, the Lorenz Circle interviewed Kruti and me to write a feature on us.

Lorenz Fellows Swarnim and Kruti investigate LLMs and the Moral Disengagement Loophole

Large Language Models (LLMs) like ChatGPT and Gemini represent a fascinating duality. On one hand, they offer a powerful tool to simplify tasks and enhance our lives. On the other hand, their reliance on vast, unfiltered online data sources can spread misinformation, potentially causing ethical and legal issues.

Thus, model interpretability, or understanding how large deep learning models like LLMs work, is an active area of research. These models are trained on massive datasets, which can contain inaccuracies and biases. Filtering this information to ensure accurate and ethical outputs remains a hurdle for this nascent technology.

Two Lorenz Fellows, Swarnim Kalbande and Kruti Sutaria, recently published a study titled "Moral Disengagement in Large Language Models: A Prompt Engineering Study and Dataset" to explore how to mitigate these issues. The paper dives into…

Read full story on LorenzCircle.org
